Why corporate penny-shaving backfires. (Also, how to do a layoff right.)

One of the clearest signs of corporate decline (2010s Corporate America is like 1980s Soviet Russia, in terms of its low morale and lethal overextension) is the number of “innovations” that are just mean-spirited, and seem like prudent cost-cutting but actually do minimal good (and, often, much harm) to the business.

One of these is the practice of pooling vacation and sick leave in a single bucket, “PTO”. Ideally, companies shouldn’t limit vacation or sick time at all– but my experience has shown “unlimited vacation” to correlate with a negative culture. (If I ran a company, it would institute a mandatory vacation policy: four weeks minimum, at least two of those contiguous.) Vacation guidelines need to be set for the same reason that speed limits (even if intentionally under-posted, with moderate violation in mind) need to be there; without them, speed variance would be higher on both ends. So, I’ve accepted the need for vacation “limits”, at least as soft policies; but employers that expect their people either to use a vacation day for sick leave or to come into the office while sick are just being fucking assholes.

These PTO policies are, in my view, reckless and irresponsible. They represent a gamble with employee health that I (as a person with a manageable but irritating disability) find morally repugnant. It’s bad enough to deny rest to someone just because a useless bean-counter wants to save the few hundred dollars paid out for unused vacation when someone leaves the company. But by encouraging the entire workforce to show up while sick and contagious, they subject the otherwise healthy to an unnecessary germ load. Companies with these pooled-leave (“PTO”) policies end up with an incredibly sickly workforce. One cold just rolls right into another, and the entire month of February is a haze of snot, coughing, and bad code being committed because half the people at any given time are hopped up on cold meds and really ought to be in bed. It’s not supposed to be this way. This will shock those who suffer in open-plan offices, but an average adult is only supposed to get 2-3 colds per year, not the 4-5 that are normal in an open-plan office (another mean-spirited tech-company “innovation”) or the 7-10 per year that are typical in pooled-leave companies.

The math shows that PTO policies are a raw deal even for the employer. In a decently-run company with an honor-system sick leave policy, an average healthy adult might have to take 5 days off due to illness per year. (I miss, despite my health problems, fewer than that.) Under PTO, people push themselves to come in and only stay home if they’re really sick. Let’s say that they’re now getting 8 colds per year instead of the average 2. (That’s not an unreasonable assumption, for a PTO shop.) Only 2 or 3 days are called off, but there are a good 24-32 days in which the employee is functioning below 50 percent efficiency. Then there are the morale issues, and the general perception that employees will form of the company as a sickly, lethargic place; and the (mostly unintentional) collective discovery of how low a level of performance will be tolerated. January’s no longer about skiing on the weekends and making big plans and enjoying the long golden hour… while working hard, because one is refreshed. It’s the new August; fucking nothing gets done because even though everyone’s in the office, they’re all fucking sick with that one-rolls-into-another months-long cold. That’s what PTO policies bring: a polar vortex of sick.
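For concreteness, here is that back-of-envelope arithmetic as a few lines of Python. The numbers are the illustrative ones from this paragraph (assumptions for the sake of argument, not measurements), and the function is purely hypothetical:

# Rough comparison of productive days lost per employee per year,
# using the illustrative numbers above (assumptions, not measurements).

def days_lost(days_called_off, days_worked_sick, efficiency_while_sick=0.5):
    # Whole days absent, plus the useless fraction of each day spent
    # at the desk while sick.
    return days_called_off + days_worked_sick * (1 - efficiency_while_sick)

honor_system = days_lost(days_called_off=5, days_worked_sick=0)
pto_shop = days_lost(days_called_off=2.5, days_worked_sick=28)   # midpoint of 24-32

print(honor_system, pto_shop)   # prints 5.0 16.5

On these assumptions, the honor-system company loses about 5 productive days per person per year; the PTO shop loses more than three times that, before counting contagion or morale.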

Why, if they’re so awful, do companies use them? Because HR departments often justify their existence by externalizing costs elsewhere in the company, and claiming they saved money. So-called “performance improvement plans” (PIPs) are a prime example of this. The purpose of the PIP is not to improve the employee. Saving the employee would require humiliating the manager, and very few people have the courage to break rank like that. Once the PIP is written, the employee’s reputation is ruined, making mobility or promotion impossible. The employee is stuck in a war with his manager (and, possibly, team) that he will almost certainly lose, but he can make others lose along the way. To the company, a four-month severance package is far cheaper than the risk that comes along with having a “walking dead” employee, pissing all over morale and possibly sabotaging the business, in the office for a month. So why do PIPs, which don’t even work for their designed intention (legal risk mitigation) unless designed and implemented by extremely astute legal counsel, remain common? Well, PIPs are a loss to the company, even compared to “gold-plated” severance plans. We’ve established that. But they allow the HR department to claim that it “saved money” on severance payments (a relatively small operational cost, except when top executives are involved) while the costs are externalized to the manager and team that must deal with a now-toxic (and if already toxic before the PIP, now overtly destructive) employee. PTO policies work the same way. The office becomes lethargic, miserable, and sickly, but HR can point to the few hundred dollars saved on vacation payouts and call it a win.

On that, it’s worth noting that these pooled-leave policies aren’t actually about sick employees. People between the ages of 25 and 50 don’t get sick that often, and companies don’t care about that small loss. However, their children, and their parents, are more likely to get sick. PTO policies aren’t put in place to punish young people for getting colds. They’re there to deter people with kids, people with chronic health problems, and people with sick parents from taking the job. Like open-plan offices and the anxiety-inducing micromanagement often given the name of “Agile”, it’s back-door age and disability discrimination. The company that institutes a PTO policy doesn’t care about a stray cold; but it doesn’t want to hire someone with a special-needs child. Even if the latter is an absolute rock star, the HR department can justify itself by saying it helped the company dodge a bullet.

Let’s talk about cost cutting more generally, because I’m smarter than 99.99% of the fuckers who run companies in this world and I have something important to say.

Companies don’t fail because they spend too much money. “It ran out of money” is the proximate cause, not the ultimate one. Some fail when they cease to excel and inspire (but others continue beyond that point). Some fail, when they are small, because of bad luck. Mostly, though, they fail because of complexity: rules that don’t make sense and block useful work from being done, power relationships that turn toxic and, yes, recurring commitments and expenses that can’t be afforded (and must be cut). Cutting complexity rather than cost should be the end goal, however. I like to live with few possessions not because I can’t afford to spend the money (I can) but because I don’t want to deal with the complexity that they will inject into my life. It’s the same with business. Uncontrolled complexity will cause uncontrolled costs and ultimately bring about a company’s demise. What does this mean about cutting costs, which MBAs love to do? Sometimes it’s great to cut costs. Who doesn’t like cutting “waste”? The problem there is that there actually isn’t much obvious waste to be cut, so after that, one has to focus and decide on which elements of complexity are unneeded, with the understanding that, yes, some people will be hurt and upset. Do we need to compete in 25 businesses, when we’re only viable in two? This will also cut costs (and, sadly, often jobs).

The problem, see, is that most of the corporate penny-shaving increases complexity. A few dollars are saved, but at the cost of irritation and lethargy and confusion. People waste time working around new rules intended to save trivial amounts of money. The worst is when a company cuts staff but refuses to reduce its internal complexity. This requires a smaller team to do more work– often, unfamiliar work that they’re not especially good at or keen on doing; people were well-matched to tasks before the shuffle, but that balance has gone away. The career incoherencies and personality conflicts that emerge are… one form of complexity.

The problem is that most corporate executives are “seagull bosses” (swoop, poop, and fly away) who see their companies and jobs in a simple way: cut costs. (Increasing revenue is also a strategy, but that’s really hard in comparison.) A year later, the company is still failing not because it failed to cut enough costs or people, but because it never did anything about the junk complexity that was destroying it in the first place.

Let’s talk about layoffs. The growth of complexity is often exponential, and firms inevitably get to a place where they are too complex (a symptom of which is that operations become too expensive) to survive. The result is that they need to lay people off. Now, layoffs suck. They really fucking do. But there’s a right way and a wrong way to execute one. To do a layoff right, the company needs to cut complexity and cut people. (Otherwise, it will have more complexity per capita, the best people will get fed up and leave, and the death spiral begins.) It also needs to cut the right complexity; all the stuff that isn’t useful.

Ideally, the cutting of people and cutting of complexity would be tied together. Unnecessary business units being cut usually means that people staffed on them are the ones let go. The problem is that that’s not very fair, because it means that good people, who just happened to be in the wrong place, will lose their jobs. (I’d argue that one should solve this by offering generous severance, but we already know why that isn’t a popular option, though it should be.) The result is that when people see their business area coming into question, they get political. Of course this software company needs a basket-weaving division! In-fighting begins. Tempers flare. From the top, the water gets very muddy and it’s impossible to see what the company really looks like, because everyone’s feeding biased information to the executives. (I’m assuming that the executive who must implement the cuts is acting in good faith, which is not always true.) What this means is that the crucial decision– what business complexity are we going to do without?– can’t be subject to a discussion. Debate won’t work. It will just get word out that job cuts are coming, and political behavior will result. The horrible, iron fact is that this calls for temporary autocracy. The leader must make that call in one fell swoop. No second guessing, no looking back. This is the change we need to make in order to survive. Good people will be let go, and it really sucks. However, seeing as it’s impossible to execute a large-scale layoff without getting rid of some good people, I think the adult thing to do is write generous severance packages.

Cutting complexity is hard. It requires a lot of thought. Given that the information must be gathered by the chief executive without tipping anyone off, and that complex organisms are (by definition) hard to factor, it’s really hard to get the cuts right. Since the decision must be made on imperfect information, it’s a given that it usually won’t be the optimal cut. It just has to be good enough (that is, removing enough complexity with minimal harm to revenue or operations) that the company is in better health.

Cutting people, on the other hand, is much easier. You just tell them that they don’t have jobs anymore. Some don’t deserve it, some cry, some sue, and some blog about it but, on the whole, it’s not actually the hard part of the job. This provides, as an appealing but destructive option, the lazy layoff. In a lazy layoff, the business cuts people but doesn’t cut complexity. It just expects more work from everyone. All departments lose a few people! All “survivors” now have to do the work of their fallen brethren! The too-much-complexity problem, the issue that got us to the layoff in the first place… will figure itself out. (It never does.)

Stack ranking is a magical, horrible solution to the problem. What if one could do a lazy layoff but always cull the “worst” people? After all, some people are of negative value, especially considering the complexity load (in personality conflicts, shoddy work) they induce. The miracle of stack ranking is that it turns a layoff– otherwise, a hard decision guaranteed to put some good people out of work– into an SQL query. SELECT name FROM Employee WHERE perf <= 3.2. Since the soothsaying of stack ranking has already declared the people let go as bottom-X-percent performers, there’s no remorse in culling them. They were “dead weight”. Over time, stack ranking evolves into a rolling lazy layoff, repeated periodically (“rank-and-yank”).

It’s also dishonest. There are an ungodly number of large technology companies (over 1,000) that claim to have “never had a layoff”. That just isn’t fucking true. Even if the CEO were Jesus Christ himself, he’d have to lay people off because that’s just how business works. Tech-company sleazes just refuse to use the word “layoff”, for fear of losing their “always expanding, always looking for the best talent!” image. So they call it a “low-performer initiative” (stack ranking, PIPs, eventual firings). What a “low-performer initiative” (or stack ranking, which is a chronic LPI) inevitably devolves into is a witch hunt that turns the organization into pure House of Cards politics. Yes, most companies have about 10 percent who are incompetent or toxic or terminally mediocre and should be sent out the door. Figuring out which 10 percent those people are is not easy. People who are truly toxic generally have several years’ worth of experience drawing a salary without doing anything, and that’s a skill that improves with time. They’re really good at sucking (and not getting caught). They’re adept political players. They’ve had to be; the alternative would have been to have grown a work ethic. Most of what we as humans define as social acceptability is our ethical immune system, which can catch and punish the small-fry offenders but can’t do a thing about the cancer cells (psychopaths, parasites) that have evolved to the point of being able to evade or even redirect that rejection impulse. The question of how to get that toxic 10 percent out is an unsolved one, and I don’t have space to tackle it now, but the answer is definitely not stack ranking, which will always clobber several unlucky good-faith employees for every genuine problem employee it roots out.

Moreover, stack ranking has negative permanent effects. Even when not tied to a hard firing percentage, its major business purpose is still to identify the bottom X percent, should a lazy layoff be needed. It’s a reasonable bet that unless things really go to shit, X will be 5 or 10 or maybe 20– but not 50. So stack ranking is really about the bottom. The difference between the 25th percentile and 95th percentile, in stack ranking, really shouldn’t matter. Don’t get me wrong: a 95th-percentile worker is often highly valuable and should be rewarded. I just don’t have any faith in the ability of stack ranking to detect her, just as I know some incredibly smart people who got mediocre SAT scores. Stack ranking is all about putting people at the bottom, not the top. (Top performers don’t need it and don’t get anything from it.)

The danger of garbage data (and, #YesAllData generated by stack ranking is garbage) is that people tend to use it as if it were truth. The 25th-percentile employee isn’t bad enough to get fired… but no one will take him for a transfer, because the “objective” record says he’s a slacker. The result of this– in conjunction with closed allocation, which is already a bad starting point– is permanent internal immobility. People with mediocre reviews can’t transfer because the manager of the target team would prefer a new hire (with no political strings attached) over a sub-50th-percentile internal. People with great reviews don’t transfer for fear of upsetting the gravy train of bonuses, promotions, and managerial favoritism. Team assignments become permanent, and people divide into warring tribes instead of collaborating. This total immobility also makes it impossible to do a layoff the right way (cutting complexity) because people develop extreme attachments to projects and policies that, if they were mobile and therefore disinterested, they’d realize ought to be cut. It becomes politically intractable to do the right thing, or even for the CEO to figure out what the right thing is. I’d argue, in fact, that performance reviews shouldn’t be part of a transfer packet at all. The added use of questionable, politically-laced “information” is just not worth the toxicity of putting that into policy.

A company with a warring-departments dynamic might seem like a streamlined, efficient, and (most importantly) less complex company. It doesn’t have the promiscuous social graph you might expect to see in an open allocation company. People know where they are, who they report to, and who their friends and enemies are. The problem, with this insight, is that there’s hot complexity and cold complexity. Cold complexity is passive and occasionally annoying, like a law from 1890 that doesn’t make sense and is effectively never enforced. When people collaborate “too much” and the social graph of the company seems to have “too many” edges, there’s some cold complexity there. It’s generally not harmful. Open allocation tends to generate some cold complexity. Rather than metastasize into an existential threat to the company, it will fade out of existence over time. Hot complexity, which usually occurs in an adversarial context, is a kind that generates more complexity. Its high temperature means there will be more entropy in the system. Example: a conflict (heat) emerges. That, alone, makes the social graph more complex because there are more edges of negativity. Systems and rules are put in place to try to resolve it, but those tend to have two effects. First, they bring more people (those who had no role in the initial conflict, but are affected by the rules) into the fights. Second, the conflicting needs or desires of the adversarial parties are rarely addressed, so both sides just game the new system, which creates more complexity (and more rules). Negativity and internal competition create the hot complexity that can ruin a company more quickly than an executive (even if acting with the best intentions) can address it.

Finally, one thing worth noting is the Welch Effect (named for Jack Welch, the inventor of stack-ranking). It’s one of my favorite topics because it has actually affected me. The Welch Effect pertains to the fact that when a broad-based layoff occurs, the people most likely to be let go aren’t the worst (or best) performers, but the newest members of macroscopically underperforming teams. Layoffs (and stack ranking) generally propagate down the hierarchy. Upper management disburses bonuses, raises, and layoff quotas based on the macroscopic performance of the departments under it, and at each level, the node operators (managers) slice the numbers based on how well they think each suborganization did (plus or minus various political modifiers). At the middle-management layer, one level separated from the non-managerial “leaves”, it’s the worst-performing teams that have to vote the most people off the island. It tends to be those most recently hired who get the axe. This isn’t especially unfair or wrong, for that middle manager; there’s often no better way to do it than to strike the least-embedded, least-invested junior hire.

The end result of the Welch Effect, however, is that the people let go are often those who had the least to do with their team’s underperformance. (It may be a weak team, it may be a good team with a bad manager, or it may be an unlucky team.) They weren’t even there for very long! It doesn’t cause the firm to lay off good people, but it doesn’t help it lay off bad people either. It has roughly the same effect as a purely seniority-based layoff, for the company as a whole. Random new joiners are the ones who are shown out the door. It’s bad to lose them, but it rarely costs the company critical personnel. Its effect on that team is more visibly negative: teams that lose a lot of people during layoffs get a public stink about them, and people lose interest in joining or even helping them– who wants to work for, or even assist, a manager who can’t protect his people?– so the underperforming team becomes even more underperforming. There are also morale issues with the Welch Effect. When people who recently joined lose their jobs (especially if they’re fired “for performance” without a severance) it makes the company seem unfair, random, and capricious. The ones let go were the ones who never had the chance to prove themselves. In a one-off layoff, this isn’t so destructive. Those hit by the Welch Effect usually move on to better jobs anyway. However, when a company lays off in many small cuts, or disguises a layoff as a “low-performer initiative”, the Welch Effect firings demolish belief in meritocracy.

That, right there, explains why I get so much flak over how I left Google. Technically, I wasn’t fired. But I had a disliked, underdelivering manager who couldn’t get calibration points for his people (a macroscopic issue that I had nothing to do with) and I was the newest on the team, so I got a bad score (despite being promised a reasonable one– a respectable 3.4, if it matters– by that manager). Classic Welch Effect. I left. After I was gone I “leaked” the existence of stack ranking within Google. I wasn’t the first to mention that it existed there, but I publicized it enough to become the (unintentional) slayer of Google Exceptionalism and, to a number of people I’ve never met and to whom I’ve never done any wrong, Public Enemy #1. I was a prominent (and, after things went bad, fairly obnoxious) Welch Effectee, and my willingness to share my experience changed Google’s image forever. It’s not a disliked company (nor should it be) but its exceptionalism is gone. Should I have done all that? Probably not. Is Google a horrible company? No. It’s above average for the software industry (which is not an endorsement, but not damnation either.) Also, my experiences are three years old at this point, so don’t take them too seriously. As of November 2011, Google had stack ranking and closed allocation. It may have abolished those practices and, if it has, then I’d strongly recommend it as a place to work. It has some brilliant people and I respect them immensely.

In an ideal world, there would be no layoffs or contractions. In the real world, layoffs have to happen, and it’s best to do them honestly (i.e. don’t shit on departing employees’ reputations by calling it a “low performer initiative”). As with more minor forms of cost-cutting (e.g. new policies encouraging frugality) it can only be done if complexity (that being the cause of the organization’s underperformance) is reduced as well. That is the only kind of corporate change that can reverse underperformance: complexity reduction.

If complexity reduction is the only way out, then why is it so rare? Why do companies so willingly create personnel and regulatory complexity just to shave pennies off their expenses? I’m going to draw from my (very novice) Buddhist understanding to answer this one. When the clutter is cleared away… what is left? Phrases used to define it (“sky-like nature of the mind”) only explain it well to people who’ve experienced it. Just trust me that there is a state of consciousness that can be attained when gross thoughts are swept away, leaving something more pure and primal. Its clarity can be terrifying, especially the first time it is experienced. I really exist. I’m not just a cloud of emotions and thoughts and meat. (I won’t get into death and reincarnation and nirvana here. That goes farther than I need, for now. Qualia, or existence itself, as opposed to my body hosting some sort of philosophical zombie, is both miraculous and the only miracle I actually believe in.) Clarity. Essence. Those are the things you risk encountering with simplicity. That’s a good thing, but it’s scary. There is a weird, paradoxical thing called “relaxation-induced anxiety” that can pop up here. I’ve fought it (and had some nasty motherfuckers of panic attacks) and won and I’m better for my battles, but none of this is easy.

So much of what keeps people mired in their obsessions and addictions and petty contests is an aversion to confronting what they really are, a journey that might harrow them into excellence. I am actually going to age and die. Death can happen at any time, and almost certainly it will feel “too soon”. I have to do something, now, that really fucking matters. This minute counts, because I may not get another in this life. People are actually addicted to their petty anxieties that distract them from the deeper but simpler questions. If you remove all the clutter on the worktable, you have to actually look at the table itself, and you have to confront the ambitions that impelled you to buy it, the projects you imagined yourself using it for (but that you never got around to). This, for many people, is really fucking hard. It’s emotionally difficult to look at the table and confront what one didn’t achieve, and it’s so much easier to just leave the clutter around (and to blame it).

Successful simplicity leads to, “What now?” The workbench is clear; what are we going to do with it? For an organization, such simplicity risks forcing it to contend with the matter of its purpose, and the question of whether it is excelling (and, relatedly, whether it should). That’s a hard thing to do for one person. It’s astronomically more difficult for a group of people with opposing interests, and among whom excellence is sure to be a dirty word (there are always powerful people who prefer rent-seeking complacency). It’s not surprising, then, that most corporate executives say “fuck it” on the excellence question and, instead, decide it suffices to earn their keep to squeeze employees with mindless cost-cutting policies: pooled sick leave and vacation, “employee contributions” on health plans, and other hot messes that just ruin everything. It feels like something is getting done, though. Useless complexity is, in that way, existentially anxiolytic and addictive. That’s why it’s so hard to kill. But it, if allowed to live, will kill. It can enervate a person into decoherence and inaction, and it will reduce a company to a pile of legacy complexity generated by self-serving agents (mostly, executives). Then it falls under the MacLeod-Gervais-Rao-Church theory of the nihilistic corporation; the political whirlpool that remains once an organization has lost its purpose for existing.

At 4528 words, I’ve said enough.

Silicon Valley and the Rise of the Disneypreneur

Someone once described the Las Vegas gambling complex as “Disneyland for adults”, and the metaphor makes a fair amount of sense. The place sells a fantasy– expensive shows, garish hotels (often cheap or free if “comped”) and general luxury– and this suspension of reality enables people to take financial risks they’d usually avoid, giving the casino an edge. Comparing Silicon Valley to Vegas, also, makes a lot of sense. Even more than a Wall Street trading floor, it’s casino capitalism. Shall we search for some kind of transitivity? Yes, indeed. Is it possible that Silicon Valley is a sort of “Disneyland”? I think so.

It starts with Stanford and Palo Alto. The roads are lined with palm trees that do not grow there naturally, and cost tens of thousands of dollars apiece to plant. The whole landscape is designed and fake. In a clumsy attempt to lift terminology from Southern aristocrats, Stanford calls itself “the Farm”. At Harvard or Princeton, there’s a certain sense of noblesse oblige that students are expected to carry with them. A number of Ivy Leaguers eschew investment banking in favor of a program like Teach for America. Not so much at Stanford, which has never tempered itself with Edwardian gravity (by, for example, encouraging students to read literature from civilizations that have since died out) in the way that East Coast and Midwestern colleges have. The rallying cry is, “Go raise VC.” Then, Stanford’s graduates enter a net of pipelines: Stanford undergrad to startup, startup to EIR gig, EIR to founder, founder to venture capitalist. The miraculous thing about it is that progress on this “entrepreneurial” path is assured. One never needs to take any risk to do it! Start in the right place, don’t offend the bosses-I-mean-investors, and there are three options: succeed, fail up, or fail diagonal-up. Since they live in an artificial world in which real loss isn’t possible for them, but one that also limits them from true innovation, they perform a sort of Disney-fied entrepreneurship. They’re the Disneypreneurs.

Just as private-sector bureaucrats (corporate executives) who love to call themselves “job creators” (and who only seem to agree on anything when they’re doing the opposite) are anything but entrepreneurs, I tend to think of these kids as not real entrepreneurs. Well, because I’m right, I should say it more forcefully. They aren’t entrepreneurs. They take no risk. They don’t even have to leave their suburban, no-winter environment. They don’t put up capital. They don’t risk sullying their reputations by investing their time in industries the future might despise; instead, they focus on boring consumer-web plays. They don’t go to foreign countries where they might not have all the creature comforts of the California suburbs. They don’t do the nuts-and-bolts operational grunt work that real entrepreneurs have to face (e.g. payroll, taxes) when they start new businesses, because their backers arrange it all for them. Even failure won’t disrupt their careers. If they fail, instead of making their $50-million paydays in this bubble cycle, they’ll have to settle for a piddling $750,000 personal take in an “acqui-hire”, a year in an upper-middle-management position, and an EIR gig. VC-backed “founders” take no real risk, but get rewarded immensely when things go their way. Heads, they win. Tails, they don’t lose.

Any time someone sets up a “heads I win, tails I-don’t-lose” arrangement, there’s a good bet that someone else is losing. Who? To some extent, it’s the passive capitalists whose funds are disbursed by VCs. Between careerist agents (VC partners) seeking social connection and status, and fresh-faced Disneypreneurs looking to justify their otherwise unreasonable career progress (due to their young age, questionable experience, and mediocrity of talent) what is left for the passive capitalist is a return inferior to that offered by a vanilla index fund. However, there’s another set of losers for whom I often prefer to speak, their plight being less well-understood: the engineers. Venture capitalists risk other people’s money. Founders risk losing access to the VCs if they do something really unethical. Engineers risk their careers. They’ve got more skin in the game, and yet their rewards are dismal.

If it’s such a raw deal to be a lowly engineer in a VC-funded startup (and it is) then why do so many people willingly take that offer? They might overestimate their upside potential, because they don’t know what questions to ask (such as, “If my 0.02% is really guaranteed to be worth $1 million in two years, then why do venture capitalists value the whole business at only $40 million?”). They might underestimate the passage of time and the need to establish a career before ageism starts hitting them. Most 22-year-olds don’t know what a huge loss it is not to get out of entry-level drudgery by 30. However, I think a big part of why it is so easy to swindle so many highly talented young people is the Disneyfication. The “cool” technology company, the Hooli, provides a halfway house for people just out of college. At Hooli, no one will make you show up for work at 9:00, or tell you not to wear sexist T-shirts, or expect you to interact decently with people too unlike you. You don’t even have to leave the suburbs of California. You won’t have to give up your car for Manhattan, your dryer for Budapest, your need to wear sandals in December for Chicago, or your drug habit for Singapore. It’s comfortable. There is no obvious social risk. Even the mean-spirited, psychotic policy of “stack ranking” is dressed-up as a successor to academic grading. (Differences glossed over are (a) that there’s no semblance of “meritocracy” in stack ranking; it’s pure politics, and a professor who graded as unfairly as the median corporate manager would be fired; (b) academic grading is mostly for the student’s benefit while stack-ranking scores are invariably to the worker’s detriment; and (c) universities genuinely try to support failing students while corporations use dishonest paperwork designed to limit lawsuit risk.) The comfort offered to the engineer by the Disney-fied tech world, which is actually more ruthlessly corporate (and far more undignified) than the worst of Wall Street, is completely superficial.

That doesn’t, of course, mean that it’s not real. Occasionally I’m asked whether I believe in God. Well, God exists. Supernatural beings may not, and the fictional characters featured in religious texts are almost certainly (if taken literally) pure nonsense, but the idea of God has had a huge effect on the world. It cannot be ignored. It’s real. The same is true of Silicon Valley’s style of “entrepreneurship”. Silicon Valley breathes and grows because, every year, an upper class of founders and proto-founders are given a safe, painless path to “entrepreneurial glory” and a much larger working class of delusional engineers are convinced to follow them. It really looks like entrepreneurship.

I should say one thing off the bat: Disneypreneurs are not the same thing as wantrapreneurs. You see more of the second type, especially on the East Coast, and it’s easy to conflate the two, but the socioeconomic distance is vast. The wantrapreneur can talk a big game, but lacks the drive, vision, and focus to ever amount to anything. He’s the sort of person who’s too arrogant to work for someone else, but can’t come up with a convincing reason why anyone should work for him, and doesn’t have the socioeconomic advantages that’d enable him to get away with bullshit. Except in the most egregious bubble times, he wouldn’t successfully raise venture capital, not because VCs are discerning but because the wantrapreneur usually lacks sufficient vision to learn how to do even that. Quite sadly, wantrapreneurs sometimes do find acolytes among the desperate and the clueless. They “network” a lot, sometimes find friends or relatives clueless enough to bankroll them, and produce little. Almost everyone has met at least one. There’s no barrier to entry in becoming a wantrapreneur.

Like wantrapreneurs, Disneypreneurs lack drive, talent, and willingness to sacrifice. The difference is that they still win. All the fucking time. Even when they lose, they win. Evan Spiegel (Snapchat) and Lucas Duplan (Clinkle) are just two examples, but Sean Parker is probably the most impressive. If you peek behind the curtain, he’s never actually succeeded at anything, but he’s a billionaire. They float from one manufactured success to another, building impressive reputations despite adding very little value to anything. They’re given the resources to take big risks and, when they fail, their backers make sure they fail up. Being dropped into a $250,000/year VP role at a more successful portfolio company? That’s the worst-case outcome. Losers get executive positions and EIR gigs, break-evens get acqui-hired into upper-six-figure roles, and winners get made.

One might ask: how does one become a Disneypreneur? I think the answer is clear: if you’re asking, you probably can’t. If you’re under 18, your best bet is to get into Stanford and hope your parents have the cardiac fortitude to see the tuition bill and not keel over. If you’re older, you might try out the (admirably straightforward, and more open to middle-class outsiders than traditional VC) Y Combinator. However, I think that it’s obvious that most people are never going to have the option of Disneypreneurship, and there’s a clear reason for that. Disneypreneurship exists to launder money (and connections, and prestige, and power; but those are highly correlated and usually mutually transferrable) for the upper classes, frank parasitism from inherited wealth being still socially unacceptable. The children of the elites must seem to work under the same rules as everyone else. The undeserving, mean-reverting progeny of the elite must be made to appear like they’ve earned the status and wealth their parents will bequeath upon them.

Elite schools were once used toward this end. They were a prestige (multiple meanings intended) that appeared, from the outside, to be a meritocracy. However, this capacity was demolished by an often-disparaged instrument, the S.A.T. Sometimes, I’ll hear a knee-jerk leftist complain about the exam’s role in educational inequality, citing (correctly) the ability of professional tutoring (“test prep”, a socially useless service) to improve scores. In reality, the S.A.T. isn’t creating or increasing socioeconomic injustices in terms of access to education; it merely measures some of them. The S.A.T. was invented with liberal intentions, and (in fact) succeeded. After its inception in the 1920s, “too many” Jews were admitted to Ivy League colleges, and much of the “extracurricular” nonsense involved in U.S. college admissions was invented in reaction to that. Over the following ninety years, there’s been a not-quite-monotonic movement toward meritocracy in college admissions. If I had to guess, college admissions are a lot more meritocratic than 90 years ago (and, if I’m wrong, it’s not because the admissions process is classist but because it’s so noise-ridden, thanks to technology enabling a student to apply to 15-30 colleges; 15 years ago, five applications was considered high). The ability-to-pay factor, however, keeps this meritocracy from being fully realized. Ties are, observably, broken on merit and there is enough meritocracy in the process to threaten the existing elite. The age in which a shared country-club membership of parent and admissions officer ensured a favorable decision is over. Now that assurance requires a building, which even the elite cannot always afford.

These changes, and the internationalization of the college process, and those pesky leftists who insist on meritocracy and diversity, have left the ruling classes unwilling to trust elite colleges to launder their money. They’ve shifted their focus to the first few years after college: first jobs. However, most of these well-connected parasites don’t know how to work and certainly can’t bear the thought of their children suffering the indignity of actually having to earn anything, so they have to bump their progeny automatically to unaccountable upper-management ranks. The problem is that very few people are going to respect a talentless 22-year-old who pulls family connections to get what he wants, and who gets his own company out of some family-level favor. Only a California software engineer would be clueless enough to follow someone like that– if that person calls himself “a founder”.

Why programmers can’t make any money: dimensionality and the Eternal Haskell Tax

To start this discussion, I’m going to point to a rather dismal observation tweeted by Chris Allen (@bitemyapp): that Haskell jobs, by and large, don’t pay as well as jobs in mainstream languages.

For those who don’t know, Haskell is a highly productive, powerful language that enables programmers (at least, the talented ones) to write correct code quickly: at 2 to 5 times the development speed of Java, with similar performance characteristics, and fewer bugs. Chris is also right that, in general, Haskell jobs don’t pay as well. If you insist on doing functional programming, you’ll make less money than people who sling C++ at banks with 30-year-old codebases. This is perverse. Why would programmers be economically penalized for using more powerful tools? Programmers are unusual, compared to rent-seeking executives, in actually wanting to do their best work. Why is this impulse penalized?

One might call this penalty “the Haskell Tax” and, for now, that’s what I’ll call it. I don’t think it exists because companies that use Haskell are necessarily cheaper or greedier than others. That’s not the case. I think the issue is endemic in the industry. Junior programmer salaries are quite high in times like 2014, but the increases for mid-level and senior programmers fall short of matching their increased value to the business, or even the costs (e.g. housing, education) associated with getting older. The only way a programmer can make money is to develop enough of a national reputation that he or she can create a bidding war. That’s harder to do for one who is strongly invested in a particular language. It’s not Haskell’s fault. There’s almost certainly a Clojure Tax and an Erlang Tax and a Scala Tax.

Beyond languages, this applies to any career-positive factor of a job. Most software jobs are career-killing, talent-wasting graveyards and employers know this, so when there’s a position that involves something interesting like machine learning, green-field projects, and the latest tools, they pay less. This might elicit a “well, duh” response, insofar as it shouldn’t be surprising that unpleasant jobs pay well. The reason this is such a disaster is its long-term effect, both on programmers’ careers and on the industry. Market signals are supposed to steer people toward profitable investment, but in software they seem to point the other way. Work that helps a programmer’s career is usually underpaid and, under the typical awfulness of closed allocation, jealously guarded, politically allocated, and usually won through unreasonable sacrifice.

Why is the Haskell Tax so damning?

As I said, the Haskell Tax doesn’t apply only to Haskell. It applies to almost all software work that isn’t fish-frying. It demolishes upper-tier salaries. One doesn’t, after all, get to be an expert in one’s field by drifting. It takes focus, determination, and hard work. It requires specialization, almost invariably. With five years of solid experience, a person can add 3 to 50 times more value than the entry-level grunt. Is she paid for that? Almost never. Her need to defend her specialty (and refuse work that is too far away from it) weakens her position. If she wants to continue in her field, there are a very small number of available jobs, so she won’t have leverage, and she won’t make any money. On the other hand, if she changes specialty, she’ll lose a great deal of her seniority and leverage, she’ll be competing with junior grunts, and so she won’t make any money either. It’s a Catch-22.

This puts an economic weight behind the brutality and incivility of closed allocation. It deprives businesses of a great deal of value that their employees would otherwise freely add. However, it also makes people less mobile, because they can’t move on to another job unless a pre-defined role exists matching their specialties. In the long run, the effect of this is to provide an incentive against expertise, to cause the skills of talented programmers to rot, and to bring the industry as a whole into mediocrity.

Code for the classes and live with the masses. Code for the masses and live… with the masses.

Artists and writers have a saying: sell to the masses, and live with the classes; sell to the classes, and live with the masses. That’s not so much a statement about social class as about the low economic returns of high-end work. Poets don’t make as much money as people writing trashy romance novels. We might see the Haskell Tax as an extension of this principle. Programmers who insist on doing only the high-end work (“coding for the classes”) are likely to find themselves either often out of work, or selling themselves at a discount.

Does this mean that every programmer should just learn what is learned in 2 years at a typical Java job, and be done with it? Is that the economically optimal path? The “sell to the masses” strategy is to do boring, line-of-business, grunt work. Programmers who take that tack still live with the masses. That kind of programming (parochial business logic) doesn’t scale. There’s as much work, for the author, in writing a novel for 10 people as for 10 million; but programmers don’t have that kind of scalability, and the projects where there are opportunities for scaling, growth, and multiplier-type contributions are the “for the classes” projects that every programmer wants to do (we already discussed why those don’t pay). So, programming for the masses is just as much of a dead end, unless those programmers can scale up politically– that is, become managers. At that point, they can sell code, but they don’t get to create it. They become ex-technical, and ex-technical management (with strongly held opinions, once right but now out of date) can be just as suboptimal as non-technical management.

In other words, the “masses” versus “classes” problem looks like this, for the programmer: one can do high-end work and be at the mercy of employers because there’s so little of it to go around, or do low-end commodity work that doesn’t really scale. Neither path is going to enable her to buy a house in San Francisco.

Dimensionality

One of the exciting things about being a programmer is that the job always changes. New technologies emerge, and programmers are expected to keep abreast of them even when their employers (operating under risk aversion and anti-intellectualism) won’t budget the time. What does it mean to be a good programmer? Thirty years ago, it was enough to know C and how to structure a program logically. Five years ago, a software engineer was expected to know a bit about Linux, MySQL, a few languages (Python, Java, C++, Shell) and the tradeoffs among them. In 2014, the definition of “full stack” has grown to the point that almost no one can know all of it. Andy Shora (in his essay on full-stack developers) puts it beautifully, on the obnoxiousness of the macho know-it-all programmer:

I feel the problem for companies desperate to hire these guys and girls, is that the real multi-skilled developers are often lost in a sea of douchebags, claiming they know it all.

Thirty years ago, there was a reasonable approximation of a linear ordering on programmer skill. If you could write a C compiler, understood numerical stability, and could figure out how to program in a new language or for a new platform by reading the manuals, you were a great programmer. If you needed some assistance and often wrote inefficient algorithms, you were either a junior or mediocre. In 2014, it’s not like that at all; there’s just too much to learn and know! I don’t know the first thing, for example, about how to build a visually appealing casual game. I don’t expect that I’d struggle as much with graphics as many do, because I’m comfortable with linear algebra, and I would probably kill it when it comes to AI and game logic, but the final polish– the difference between Candy Crush and an equivalent but less “tasty” game– would require someone with years of UI/UX experience.

The question “What is a good programmer?” has lost any sense of linear ordering. The field is just too vast. It’s now an N-dimensional space. This is one of the things that makes programming especially hostile to newcomers, to women, and to non-bullshitters of all stripes. The question of which of those dimensions matter and which don’t is political, subjective, and under constant change. One year, you’re a loser if you don’t know a scripting language. The next, you’re a total fuckup if you can’t explain what’s going on inside the JVM. The standards differ from company to company and change frequently, leaving most people not only at a loss regarding whether they are good programmers, but completely without guidance about how to get there. This also explains the horrific politics for which software engineering is (or, at least, ought to be) notorious. Most of the “work” in a software company is effort spent trying to change the in-house definition of a good programmer (and, to that end, fighting incessantly over tool choices).

I don’t think that dimensionality is a bad thing. On the contrary, it’s a testament to the maturity and diversity of the field. The problem is that we’ve let anti-intellectual, non-technical businessmen walk in and take ownership of our industry. They demand a linear ordering of competence (mostly, for their own exploitative purposes). It’s the interaction between crass commercialism and dimensionality that causes so much pain.

Related to this is the Fundamental Hypocrisy of Employers, a factor that makes it damn hard for a programmer to navigate this career landscape. Technology employers demand specialization in hiring. If you don’t have a well-defined specialty and unbroken career progress toward expertise in that field, they don’t want to talk to you. At the same time, they refuse to respect specialties once they’ve hired people, and people who insist on protecting their specialties (which they had to do to get where they are) are downgraded as “not a team player”. Ten years of machine learning experience? Doesn’t matter, we need you to fix this legacy Rails codebase. It’s ridiculous, but most companies demand an astronomically higher quality of work experience than they give out. The result of this is that the game is won by political favorites and self-selling douchebags, and most people in either of those categories can’t really code.

The Eternal Haskell Tax

The Haskell Tax really isn’t about Haskell. Any programmer who wishes to defend a specialty has a smaller pool of possible jobs and will generally squeeze less money out of the industry. As programming becomes more specialized and dimensional, the Haskell Tax problem affects more people. The Business is now defining silos like “DevOps” and “data science” which, although those movements began with good intentions, effectively represent the intentions of our anti-intellectual colonizers to divide us against each other into separate camps. The idea (which is fully correct, by the way) that a good Haskell programmer can also be a good data scientist or operations engineer is threatening to them. They don’t want a fluid labor market. Our enemies in The Business dislike specialization when we protect our specialties (they want to make us interchangeable, “full stack” generalists) but, nonetheless, want to keep intact the confusion and siloization that dimensionality creates. If the assholes in charge can artificially disqualify 90 percent of senior programmers from 90 percent of senior programming jobs based on superficial differences in technologies, it means they can control us– especially if they control the assignment of projects, under the pogrom that is closed allocation– and (more importantly) pay us less.

The result of this is that we live under an Eternal Haskell Tax. When the market favors them, junior engineers can be well-paid. But the artificial scarcities of closed allocation and employer hypocrisy force us into unreasonable specialization and division, making it difficult for senior engineers to advance. Engineers who add 10 times as much business value as their juniors are lucky to earn 25 percent more; they, as The Business argues, should consider themselves fortunate in that they “were given” real projects!

If we want to fix this, we need to step up and manage our own affairs. We need to call “bullshit” on the hypocrisy of The Business, which demands specialization in hiring but refuses to respect it internally. We need to inflict hard-core Nordic Indignation on closed allocation and, in general, artificial scarcity. Dimensionality and specialization are not bad things at all (on the contrary, they’re great) but we need to make sure that they’re properly managed. We can’t trust this to the anti-intellectual colonial authorities who currently run the software industry, who’ve played against us at every opportunity. We have to do it ourselves.

Why there are so few AI jobs

Something began in the 1970s that has been described as “the AI winter”, but to call it that is to miss the point, because the social illness it represents involves much more than artificial intelligence (AI). AI research was one of many casualties that came about as anti-intellectualism revived itself and society fell into a diseased state.

One might call the “AI winter” (which is still going on) an “interesting work winter” and it pertains to much more of technology than AI alone, because it represented a sea change in what it meant to be a programmer. Before the disaster, technology jobs had an R&D flavor, like academia but with better pay and less of the vicious politics. After the calamitous 1980s and the replacement of R&D by M&A, work in interesting fields (e.g. machine learning, information retrieval, language design) became scarce and over 90% of software development became mindless, line-of-business makework. At some point, technologists stopped being autonomous researchers and started being business subordinates and everything went to hell. What little interesting work remained was only available in geographic “super-hubs” (such as Silicon Valley) where housing prices are astronomical compared to the rest of the country. Due to the emasculation of technology research in the U.S., economic growth slowed to a crawl, and the focus of the nation’s brightest minds turned to creation of asset bubbles (seen in 1999, 2007, and 2014) rather than generating long-lasting value.

Why did this happen? Why did the entrenched public- and private-sector bureaucrats (with, even among them, the locus of power increasingly shifting to private-sector bureaucrats, who can’t be voted out of office) who run the world lose faith in the research being done by people much smarter, and much harder-working, than they are? The answer is simple. It’s not even controversial. End of the Cold War? Nah, it began before that. At fault is the lowly perceptron.

Interlude: a geometric puzzle

This is a simple geometry puzzle. Below are four points at the corners of a square, labeled like so:

0 1
1 0

Is it possible to draw a line that separates the 0s from the 1s?

The answer is that it’s not possible. Draw a circle passing through all four points; going around it, the labels alternate: 0, 1, 0, 1. Any line can intersect that circle at no more than two points, so it cuts the circle into at most two arcs, and every point on a given arc lies on the same side of the line. The two 0s sit at opposite corners, so any arc containing both of them must also contain one of the 1s lying between them, and that 1 would then be on the wrong side of the line. Another way to say this is that the classes (the 0s and the 1s) aren’t linearly separable.

What is a perceptron?

“Perceptron” is a fancy name given to a mathematical function with a simple description. Let w be a known “weight” vector (if that’s an unfamiliar term, a list of numbers) and x be an input “data” vector of the same size, with the caveat that x[0] = 1 (a “bias” term) always. The perceptron, given w, is a virtual “machine” that computes, for any given input x, the following:

  • 1, if w[0]*x[0] + … + w[n]*x[n] > 0,
  • 0, otherwise (that is, if w[0]*x[0] + … + w[n]*x[n] is 0 or negative).

In machine learning terms, it’s a linear classifier. If there’s a linear function that cleanly separates the “Yes” class (the 1 values) from the “No” class (the 0 values), it can be expressed as a perceptron. In that linearly separable case, there’s an elegant algorithm for finding a working weight vector, and it always converges.
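To make that concrete, here is a minimal sketch of the perceptron and its learning rule in Python. It is only an illustration: the names predict and train and the tiny AND dataset are mine, and the only thing assumed is the definition above (output 1 when the weighted sum is positive, 0 otherwise).

# A perceptron: weight vector w, input vector x with x[0] = 1 as the bias term.
def predict(w, x):
    # 1 if the weighted sum is positive, 0 otherwise.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# The classic perceptron learning rule. `samples` is a list of (x, label)
# pairs, where each x already includes the bias component x[0] = 1.
# If the classes are linearly separable, this converges to a separating
# weight vector; if they aren't, it never settles.
def train(samples, epochs=100, lr=1.0):
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        mistakes = 0
        for x, label in samples:
            error = label - predict(w, x)     # -1, 0, or +1
            if error != 0:
                mistakes += 1
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        if mistakes == 0:                     # a clean pass: we're done
            break
    return w

# AND is linearly separable, so the rule converges:
and_data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = train(and_data)
print([predict(w, x) for x, _ in and_data])   # prints [0, 0, 0, 1]

Note how cheap the update is: a misclassified input just gets added to (or subtracted from) the weights. That cheapness is part of why the model mattered historically, a point that comes up again at the end of this piece.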

A mathematician might say, “What’s so interesting about that? It’s just a dot product being passed through a step function.” That’s true. Perceptrons are very simple. A single perceptron can solve more decision problems than one might initially think, but it can’t solve all of them. It’s too simple a model.

Limitations

Let’s say that you want to model an XOR (“exclusive or”) gate, corresponding to the following function:

+------+------+-----+
| in_1 | in_2 | out |
+------+------+-----+
|   0  |   0  |  0  |
|   0  |   1  |  1  |
|   1  |   0  |  1  |
|   1  |   1  |  0  |
+------+------+-----+

One might recognize that this is identical to the “brainteaser” above, with in_1 and in_2 corresponding to the x and y dimensions in the coordinate plane. This is the same problem. This function is nonlinear; it could be expressed as f(x, y) = x + y – 2xy, and that’s arguably the simplest representation of it that works. A separating “plane” in the 2-dimensional space of the inputs would be a line, and there’s no line separating the two classes. It’s mathematically obvious that the perceptron can’t do it. I showed this, above, using high-school geometry.
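For what it’s worth, the same impossibility can be checked with nothing but the perceptron’s definition from earlier. Suppose some weights w[0] (the bias), w[1], and w[2] handled all four rows correctly. Then:

  • (0, 0) → 0 requires w[0] < 0,
  • (0, 1) → 1 requires w[0] + w[2] > 0,
  • (1, 0) → 1 requires w[0] + w[1] > 0,
  • (1, 1) → 0 requires w[0] + w[1] + w[2] < 0.

Adding the two middle inequalities gives 2*w[0] + w[1] + w[2] > 0, while adding the first and last gives 2*w[0] + w[1] + w[2] < 0. Both can’t hold at once, so no such weights exist.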

To a mathematician, this isn’t surprising. Marvin Minsky pointed out the mathematically evident limitations of a single perceptron. One can model intricate mathematical functions with more complex networks of perceptrons and perceptron-like units, called artificial neural networks. They work well. One can also, using what are called “basis expansions”, generate further dimensions from existing data in order to create a higher-dimensional space in which linear classifiers still work. (That’s what people usually do with support vector machines, which provide the machinery to do so efficiently.) For example, adding xy as a third “derived” input dimension would make the classes (0’s and 1’s) linearly separable. There’s nothing mathematically wrong with doing that; it’s something that statisticians do when they want to build complex models but still have some of the analytic properties of simpler ones, like linear regression or nearest-neighbor modeling.
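To see the basis-expansion trick in the same terms, here’s a quick illustrative sketch that reuses the predict and train functions from the Python sketch above; the expand helper is made up purely for illustration. Adding the derived x1*x2 input makes the XOR classes linearly separable, so the same perceptron rule now learns them:

  # Basis expansion for XOR: add x1*x2 as a derived third input.
  # Reuses predict() and train() from the earlier perceptron sketch.
  def expand(x1, x2):
      # Bias term, the two raw inputs, and the derived feature x1*x2.
      return [1, x1, x2, x1 * x2]

  xor_data = [(expand(x1, x2), x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
  w = train(xor_data)
  print([predict(w, x) for x, _ in xor_data])   # prints [0, 1, 1, 0]

In the expanded space there really is a separating hyperplane; weights along the lines of (-0.5, 1, 1, -2) already do the job, and the learning rule converges on one like it.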

The limitations of the single perceptron do not invalidate AI. At least, they don’t if you’re a smart person. Everyone in the AI community could see the geometrically obvious limitation of a single perceptron, and not one of them believed that it came close to invalidating their work. It only proved that more complex models were needed for some problems, which surprised no one. Single-perceptron models might still be useful for computational efficiency (in the 1960s, computational power was about a billion times as expensive as now) or because the data don’t support a more complex model; they just couldn’t learn or model every pattern.

In the AI community, there was no scandal or surprise. That some problems aren’t linearly separable is not surprising. However, some nerd-hating non-scientists (especially in business upper management) took this finding to represent more than it actually did.

They fooled us! A brain with one neuron can’t have general intelligence!

The problem is that the world is not run, and most of the wealth in it is not controlled, by intelligent people. It’s run by social-climbing empty-suits who are itching for a fight and would love to take some “eggheads” down a notch. Insofar as an artificial neural network models a brain, a perceptron models a single neuron, which can’t be expected to “think” at all. Yet the fully admitted limitations of a single perceptron were taken, by the mouth-breathing muscleheads who run the world, as an excuse to shit on technology and pull research funding because “AI didn’t deliver”. That produced an academic job market that can only be described as a pogrom, but it didn’t stop there. Private-sector funding dried up as short-term, short-tempered management came into vogue.

To make it clear, no one ever said that a single perceptron can solve every decision problem. It’s a linear model. That means it’s restricted, intentionally, to a small subspace of possible models. Why would people work with a restricted model? Traditionally, it was for a lack of data. (We’re in the 1960s and ’70s, when data was contained on physical punch cards and a megabyte weighed something and a disk drive cost more than a car.) If you don’t have a lot of data, you can’t build complex models. For many decision problems, the humble perceptron (like its cousins, logistic regression and support vector machines) did well and, unlike more computationally intensive linear classification methods (such as logistic regression, which requires gradient descent, or a variant thereof, over the log-likelihood surface; or the support vector machine, which poses a quadratic programming problem that we didn’t know how to solve efficiently until the 1990s), it could be trained with minimal computational expense, in a bounded amount of time. Even today, linear models are surprisingly effective for a large number of problems. For example, the first spam classifiers (Naive Bayes) operated using a linear model, and it worked well. No one was claiming that a single perceptron was the pinnacle of AI. It was something that we could build cheaply on 1970-era hardware and that could build a working model on many important datasets.

Winter war

Personally, I don’t think that the AI Winter was an impersonal, passive event like the changes of seasons. Rather, I think it was part of a deliberate resurgence of anti-intellectualism in a major cultural war– one which the smart people lost. The admitted limitations of one approach to automated decision-making gave the former high school bullies, now corporate fat cats, all the ammo they needed in order to argue that those “eggheads” weren’t as smart as they thought they were. None of them knew exactly what a perceptron or an “XOR gate” were, but the limitation that I’ve described was morphed into “neural networks can’t solve general mathematical problems” (arguably untrue) and that turned into “AI will never deliver”. In the mean-spirited and anti-liberal political climate of the 1980s, this was all that anyone needed as an excuse to cut public funding. The private sector not only followed suit, but amplified the trend. The public cuts were a mix of reasonable fiscal conservatism and mean-spirited anti-research sentiment, but the business elites responded strongly to (and took to a whole new level) the mean-spirited aspect, flexing their muscles as elitism (thought vanquished in the 1930s to ’50s) became “sexy” again in the Reagan Era. Basic research, which gave far too much autonomy and power to “eggheads”, was slashed, marginalized, and denigrated.

The claim that “AI didn’t deliver” was never true. What actually happened is that we solved a number of problems, once thought to require human intelligence, with a variety of advanced statistical means as well as some insights from fields like physics, linguistics, ecology and economics. Solving problems demystified them. Automated mail sorting, once called “artificial intelligence”, became optical character recognition. This, perhaps, was part of the problem. Successes in “AI” were quickly put into a new discipline. Even modern practitioners of statistical methods are quick to say that they do machine learning, not AI. What was actually happening is that, while we were solving specific computational problems once thought to require “intelligence”, we found that our highly specialized solutions did well on the problems they were designed for, and could be adapted to similar problems, but with very slow progress toward general intelligence. As it were, we’ve learned in recent decades that our brains are even more complicated than we thought, with a multitude of specialized modules. That no specific statistical algorithm can replicate all of them, working together in real time, shouldn’t surprise anyone. Is this an issue? Does it invalidate “AI” research? No, because most of those victories, while they fell short of replicating a human brain, still delivered immense economic value. Google, although it eventually succumbed to the sociological fragility and failure that inexorably follow closed allocation, began as an AI company. It’s now worth over $360 billion.

Also mixed in with the anti-AI sentiment is the religious aspect. It’s still an open and subjective question what human intelligence really is. The idea that human cognition could be replicated by a computer offended religious sentiments, even though few would consider automated mail sorting to bear on unanswerable questions about the soul. I’m not going to go deep into this philosophical rabbit hole, because I think it’s a waste of time to debate why people believe AI research (or, for a more popular example, evolution by natural selection) to offend their religious beliefs. We don’t know what qualia is or where it comes from. I’ll just leave it at this. If we can use advanced computational techniques to solve problems that were expensive, painful, or impossible given the limitations of human cognition, we should absolutely do it. Those who object to AI on religious grounds fear that advanced computational research will demystify cognition and bring about the end of religion. Ignoring the question of whether an “end of religion” is a bad thing, or what “religion” is, there are two problems with this. First, if there is something to us that is non-material, we won’t be able to replicate it mechanically and there is no harm, to the sacred, in any of this work. Second, computational victories in “AI” tend to demystify themselves and the subfield is no longer considered “AI”. Instead, it’s “optical character recognition” or “computer game-playing”. Most of what we use on a daily basis (often behind the scenes, such as in databases) comes from research that was originally considered “artificial intelligence”.

Artificial intelligence research has never told us, and will never tell us, whether it is more reasonable to believe in gods and religion or not to believe. Religion is often used by corrupt, anti-intellectual politicians and clerics to rouse sentiment against scientific progress, as if automation of human grunt work were a modern-day Tower of Babel. Yet, to show what I mean by AI victories demystifying themselves, almost no one would hesitate to use Google, a web-search service powered by AI-inspired algorithms.

Why do the anti-intellectuals in politics and business wish to scare the public with threats of AI-fueled irreligion and secularism (as if those were bad things)? Most of them are intelligent enough to realize that they’re making junk arguments. The answer, I think, is about raw political dominance. As they see it, the “nerds” with their “cushy” research jobs can’t be allowed to (gasp!) have good working conditions.

The sad news is that the anti-intellectuals are likely to take the economy and society down with them. In the 1960s, when we were putting billions of dollars into “wasteful” research spending, the economy grew at a record pace. The world economy was growing at 5.7 percent per year, and the U.S. economy was the envy of the world. Now, in our spartan time of anti-intellectualism, anti-science sentiment, and corporate elitism, the economy is sluggish and the society is stagnant– all because the people in charge can’t stand to see “eggheads” win.

Has AI “delivered”?

If you’re looking to rouse religious fear and fury, you might make a certain species of fantastic argument against “artificial intelligence”. The truth of the matter, however, is that while we’ve seen domain-specific superiority of machines over human intelligence in rote processes, we’re still far from creating an artificial general intelligence, i.e. a computational entity that can exhibit the general learning capability of a human. We might never do it. We might not need to and, I would argue, we should not if it is not useful.

In a way, “artificial intelligence” is a defined-by-exclusion category of “computational problems we haven’t solved yet”. Once we figure out how to make computers better at something than humans are, it becomes “just computation” and is taken for granted. Few believe they’re using “an AI” when they use Google for web search, because we’re now able to conceive of the computational work it does as mechanical rather than “intelligent”.

If you’re a business guy just looking to bully some nerds, however, you aren’t going to appeal to religion. You’re going to make the claim that all this work on “artificial intelligence” hasn’t “delivered”. (Side note: if someone uses “deliver” intransitively, as business bullies are wont to do, you should punch that person in the face.) Saying someone or something isn’t “delivering” is a way to put false objectivity behind a claim that means nothing other than “I don’t like that person”. As for AI, it’s true that artificial general intelligence has eluded us thus far, and continues to do so. It’s an extremely hard problem: far harder than the optimists among us thought it would be, fifty years ago. However, the CS research community has generated a hell of a lot of value along the way.

The disenchantment might be similar to the question about “flying cars”. We actually have them. They’re called small airplanes. In the developed world, a person of average means can learn how to fly one. They’re not even that much more expensive than cars. The reason so few people use airplanes for commuting is that it just doesn’t make economic sense for them: the savings of time don’t justify increased fuel and maintenance costs. But a middle-class American or European can, if she wants, have a “flying car” right now. It’s there. It’s just not as cheap or easy to use as we’d like. With artificial intelligence, that research has brought forth a ridiculous number of victories and massive economic growth. It just hasn’t brought forth an artificial general intelligence. That’s fine; it’s not clear that we need to build one in order to get the immense progress that technologists create when given the autonomy and support.

Back to the perceptron

One hard truth I’ve learned is that any industrial effort will have builders and politicians. It’s very rare that someone is good at both. In the business world, those unelected private-sector politicians are called “executives”. They tend, for a variety of reasons, to put themselves into pissing contests with the builders (“eggheads”) who are actually making stuff. One time-tested way to show up the builders is to take something that is obviously true (leading the builders to agree with the presentation) but present it out of context in a way that is misleading.

The incapacity of the single perceptron at general mathematical modeling is a prime example of this. Not one AI researcher was surprised that such a simple model couldn’t describe all patterns or equational relationships. The fact can be proven (as I did, above) with high-school geometry. That a single perceptron can’t model a key logical operation is, as above, obviously true. The builders knew it, and agreed. Unfortunately, what the builders failed to see was that the anti-intellectual politicians were taking this fact way out of context, using the known limitations of a computational building block to ascribe limitations (that did not exist) to general structures. This led to the general dismantling of public, academic, and private support for technological research, an anti-intellectual and mean-spirited campaign that continues to this day.

That’s why there are so few AI jobs.

Technology’s Loser Problem

I’m angry. The full back story isn’t worth getting into, but there was a company where I applied for a job in the spring of 2013: to build the company’s machine learning infrastructure from scratch. It was a position of technical leadership (Director equivalent, but writing code, with no reports) and I would have been able to use Clojure. As it were, I didn’t get it. They were looking for someone more experienced, who’d built those kinds of systems before, and wouldn’t take 6 months to train up to the job. That, itself, is not worth getting angry about. Being turned down happens, especially at high levels.

I found out, just now, that the position was not filled. Not then. Not 6 months later. Not to this day, more than a year later. It has taken them longer to fill the role than it would have taken for me to grow into it.

When they turned me down, it didn’t faze me. I thought they’d found a better candidate. That happens; only thing I can do is make myself better. I found myself, however, a bit irked when I found out that they hadn’t filled the position for longer than it would have taken me to gain the necessary experience. I lost, and so did they.

That’s not what makes me angry. Rationally, I realize that most companies aren’t going to call back a pretty-good candidate they rejected because they had just opened the position and they thought they could do better (if you’re in the first 37% of candidates for a job, it makes sense for them not to choose you and, empirically, the first and second applicants for a high-level position rarely get it). That’s the sort of potentially beneficial but extremely awkward social process that just won’t happen. What makes me angry is the realization of how common a certain sort of decision is in the technology world. We make a lot of lose-lose decisions that hurt all of us. Extremely specific hiring requirements (that, in bulk, cost the company more in waiting time than training a 90% match up to the role) are just the tip of the iceberg.

You know those people who complain about the lack of decent <gender of sexual interest> but (a) reject people for the shallowest, stupidest reasons, (b) aren’t much of a prize and don’t work to better themselves, and (c) generally refuse to acknowledge that the problem is rooted in their own inflated perception of their market value? That’s how I feel every time I hear some corporate asswipe complain about a “talent shortage” in technology. No, there isn’t one. You’re either too stingy or too picky or completely inept at recruiting, because there’s a ton of underemployed talent out there.

Few of us, as programmers, call the initial shots. We’ve done a poor job of making The Business listen to us. However, when we do have power, we tend to fuck it up. One of the problems is that we over-comply with what The Business tells us it wants. For example, when a nontechnical CEO says, “I only want you to hire absolute rock stars”, what he actually means is, “Don’t hire an idiot just to have a warm body or plug a hole”. However, because they tend to be literal, over-compliant, and suboptimal, programmers will interpret that to mean, “Reject any candidate who isn’t 3 standard deviations above the mean.” This leads to positions not being filled, because The Business is rarely willing to pay what one standard deviation above the mean costs, let alone three.

Both sides now

I’ve been on both sides of the interviewing and hiring process. I’ve seen programmers’ code samples described with the most vicious language over the most trivial mistakes, or even stylistic differences. I’ve seen job candidates rejected for the most god-awful stupid reasons. In one case, the interviewer clearly screwed up (he misstated the problem in a way that made it impossible) but, refusing to risk face by admitting the problem was on his end, he claimed the candidate failed the question. Another was dinged on a back-channel reference (don’t get me started on that sleazy practice, which ought to be illegal) claiming, without any evidence, that “he didn’t do much” on a notable project four years ago. I once saw an intern denied a full-time offer because he lived in an unstylish neighborhood. (The justification was that one had to be “hungry”, mandating Manhattan.) Many of us programmers are so butthurt about not being allowed to sit at the cool kids’ table that, when given the petty power associated with interviewing other programmers, the bitch-claws come out in a major way.

Having been involved in interviewing and recruiting, I’ll concur that there are a significant number of untalented applicants. If it’s 99.5 percent, you’re doing a lot of things wrong, but most resumes do come from people way out of their depth. Moreover, as with dating, there’s an adverse weighting in play. Most people aren’t broken, but broken people go on orders of magnitude more dates than everyone else, which is why most peoples’ dating histories have a disproportionate representation of horror stories, losers, and weirdos. It’s the same with hiring, but phone screening should filter against that. If you’re at all good at it, about half of the people brought in-office will be solid candidates.

Of course, each requirement cuts down the pool. Plenty of companies (in finance, some officially) have a “no job hopper” or “no unemployeds” rule. Many mandate high levels of experience in new technologies (even though learning new technologies is what we’re good at). Then, there are those who are hung up on reference checking in weird and creepy ways. I know of one person who proudly admits that his reference checking protocol is to cold-call a random person (again, back-channel) in the candidate’s network and ask the question, without context, “Who is the best person you’ve ever worked with?” If anyone other than the candidate is named, the candidate is rejected. That’s not being selective. That’s being an invasive, narcissistic idiot. Since each requirement shrinks the pool of qualified people, it doesn’t take long before the prejudices winnow an applicant pool down to zero.

Programmers? Let’s be real here, we kinda suck…

As programmers, we’re not very well-respected, and when we’re finally paid moderately well, we let useless business executives (who work 10-to-3 and think HashMap is a pot-finding app) claim that “programmer salaries are ridiculous”. (Not so.) Sometimes (to my horror) you’ll hear a programmer even agree that our salaries are “ridiculous”. Fuck that bullshit; it’s factually untrue. The Business is, in general, pretty horrible to us. We suffer under closed allocation, deal with arbitrary deadlines, and if we don’t answer to an idiot, we usually answer to someone else who does. Where does the low status of programmers come from? Why are we treated as cost centers instead of partners in the business? Honestly… much of the problem is us. We’ve failed to manage The Business, and the result is that it takes ownership of us.

Most of the time, when a group of people is disproportionately successful, the cause isn’t any superiority of the average individual, but a trait of the group: they help each other out. People tend to call these formations “<X> Mafia” where X might be an ethnicity, a school, or a company. Y Combinator is an explicit, pre-planned attempt to create a similar network; time will tell if it succeeds. True professions have it. Doctors look out for the profession. With programmers, we don’t see this. There isn’t a collective spirit: just long email flamewars about tabs versus spaces. We don’t look out for each other. We beat each other down. We sell each other out to non-technical management (outsiders) for a shockingly low bounty, or for no reason at all.

In many investment banks, there’s an established status hierarchy in which traders and soft-skills operators (“true bankers”) are at the top, quants are in the middle, and programmers (non-quant programmers are called “IT”) are even lower. I asked a high-ranking quant why it was this way, and he explained it in terms of the “360 degree” performance reviews. Bankers and traders all gave each other top ratings, and wrote glowing feedback for minor favors. They were savvy enough to figure out that it was best for them to give great reviews up, down, and sideways, regardless of their actual opinions. Quants tended to give above-average ratings and occasionally wrote positive feedback. IT gave average ratings for average work and plenty of negative feedback. The programmers were being the most honest, but hurting each other in the process. The bankers and traders were being political, and that’s a good thing. They were savvy enough to know that it didn’t benefit them to sell each other out to HR and upper management. Instead, they arranged it so they all got good ratings and the business had to, at baseline, appreciate and reward all of them. While it might seem that this hurt top performers, it had the opposite effect. If everyone got a 50 percent bonus and 20% raise, management had to give the top people (and, in trading, it’s pretty obvious who those are) even more.

Management loves to turn high performers against the weak, because this enables management to be stingy on both sides. The low performers are fired (they’re never mentored or reassigned) and the high performers can be paid a pittance and still have a huge bonus in relative terms (not being fired vs. being fired). What the bankers were smart enough to realize (and programmers, in general, are not) is that performance is highly context-driven. Put eight people of exactly equal ability on a team to do a task and there will be one leader, two or three contributors, and the rest will be marginal or stragglers. It’s just more efficient to have the key knowledge in a small number of heads. Open source projects work this way. What this means is that, even if you have excellent people and no bad hires, you’ll probably have some who end up with not much to show for their time (which is why open allocation is superior; they can reassign themselves until they end up in a high-impact role). If management can see who is in what role, it can fire the stragglers and under-reward the key players (who, because they’re already high performers, are probably motivated by things other than money… at least, for now). The bankers and traders (and, to a lesser extent, the quants) had the social savvy and sense to realize that it was best that upper management not know exactly who was doing what. They protected each other, and it worked for them. The programmers, on the other hand, did not, and this hurt top performers as well as those on the bottom.

Let’s say that an investment bank tried to impose tech-company stack ranking on its employees, associate level and higher. (Analyst programs are another matter, not to be discussed here.) Realizing the mutual benefit in protecting each other, the bankers would find a way to sabotage the process by giving everyone top ratings, ranking the worst employees highly, or simply refusing to do the paperwork. And good for them! Far from being unethical, this is what they should do: collectively work The Business to get what they’re actually worth. Only a programmer would be clueless enough to go along with that nonsense.

In my more pessimistic moods, I tend to think that we, as programmers, deserve our low status and subordinacy. As much as we love to hate those “business douchebags” there’s one thing I will say for them. They tend to help each other out a lot more than we do. Why is this? Because they’re more political and, again, that might not be a bad thing. Ask a programmer to rate the performance of a completely average colleague and you’ll get an honest answer: he was mediocre, we could have done without him. These are factual statements about average workers, but devastating when put into words. Ask a product manager or an executive about an average colleague and you’ll hear nothing but praise: he was indispensable, a world-class player, best hire in ten years. They realize that it’s politically better for them, individually and as a group, to keep their real opinions to themselves and never say anything that could remotely endanger another’s career. Even if that person’s performance was only average, why make an enemy when one can make a friend?

“Bad code”

Let’s get to another thing that we do, as programmers, that really keeps us down. We bash the shit out of each other’s code and technical decision-making, often over minutiae.

I hate bad code. I really do. I’ve seen plenty of it. (I’ve written some, but I won’t talk about that.) I understand why programmers complain about each other’s code. Everyone seems to have an independent (and poorly documented) in-head culture that informs how he or she writes code, and reading another person’s induces a certain “culture shock”. Even good code can be difficult to read, especially under time pressure. And yes, most large codebases have a lot of code in them that’s truly shitty, sometimes to the point of being nearly impossible to reason about. Businesses have failed because of code quality problems, although (to tell the whole story) it’s rare that one bad programmer can do that much damage. The worst software out there isn’t the result of one inept author, but the result of code having too many authors, often over years. It doesn’t help that most companies assign maintenance work either to junior programmers or to demoted (and disengaged) senior ones, neither category having the power to do it right.

I’d be the last one to come out and defend bad code. That said, I think we spend too much time complaining about each other’s code– and, worse yet, we tend toward the unforgivable sin of complaining to the wrong people. A technical manager has, at least, the experience and perspective to know that, at some level, every programmer hates other peoples’ code. But if that programmer snitches to a non-technical manager or executive, well… you’ve just invited a 5-year-old with a gun to the party. Someone might get fired because “tabs versus spaces” went telephone-game into “Tom does shoddy work and is going to destroy the business”. Because executives are politically savvy enough to protect the group, and only sell each other out in extreme circumstances, what started out as a stylistic disagreement sounds (to the executive ear) like Tom (who used his girlfriend’s computer to fix a production problem at 11:45 on a Friday night, the tabs/spaces issue being for want of an .emacs.d) is deliberately destroying the codebase and putting the whole company at risk.

As programmers, we sell each other out all the time. If we want to advance beyond reasonable but merely upper-working class salaries, and be more respected by The Business, we have to be more careful about this kind of shit. I’ve heard a great number of software engineers say things like, “Half of all programmers should just be fired.” Now, I’ll readily agree that there are a lot of badly-trained programmers out there whose lack of skill causes a lot of pain. But I’m old enough to know that people come to a specific point from a multitude of paths and that it’s not useful to personalize this sort of thing. Also, regardless of what we may think as individuals, almost no doctor or banker would ever say, to someone outside his profession, “half of us should be fired”. They’re savvy enough to realize the value of protecting the group, and handling competence and disciplinary matters internally. Whether to fire, censure, mentor or praise is too important a decision to let it happen outside of our walls.

There are two observations about low-quality code, one minor and one major. The minor one is that code has an “all of us is worse than any of us” dynamic. As more hands pass over code, it tends to get worse. People hack the code to get the specific features they need, never tending to the slow growth of complexity, and the program evolves over time into something that nobody understands because too many people were involved in it. Most software systems fall to pieces not because of incompetent individuals, but because of unmanaged growth of complexity. The major point on code-quality is: it’s almost always management’s fault.

Bad code comes from a multitude of causes, only one of which is low skill in programmers. Others include unreasonable deadlines, unwillingness to attack technical debt (a poor metaphor, because the interest rate on technical “debt” is both usurious and unpredictable), bad architecture and tooling choices, and poor matching of programmers to projects. Being stingy, management wants to hire the cheapest people it can find and give them the least time possible in which to do the work. That produces a lot of awful code, even if the individual programmers are capable. Most of the things that would improve code quality (and, in the long term, the health and performance of the business) are things that management won’t let the programmers have: more competitive salaries, more autonomy, longer timeframes, time for refactoring. The only thing that management and the engineers can agree on is firing (or demoting, because their work is often still in use and The Business needs someone who understands it) those who wrote bad code in the past.

One thing I’ve noticed is that technology companies do a horrible job of internal promotion. Why is that? Because launching anything will typically involve compromises with the business on timeframe and headcount, resulting in bad code. Any internal candidate for a promotion has left too many angles for attack. Somewhere out there, someone dislikes a line of code he wrote (or, if he’s a manager, something about a project he oversaw). Unsullied external candidates win, because no one can say anything bad about them. Hence, programming has the culture of mandatory (but, still, somewhat stigmatized) job hopping we know and love.

What’s really at the heart of angry programmers and their raging against all that low-quality code? Dishonest attribution. The programmer can’t do shit about the dickhead executive who set the unreasonable deadlines, or the penny-pinching asswipe managers who wouldn’t allow enough salary to hire anyone good. Nor can he do much about the product managers or “architects” who sit above and make his life hell on a daily basis. But he can attack Tom, his same-rank colleague, over that commit that really should have been split into two. Because they’re socially unskilled and will generally gleefully swallow whatever ration of shit is fed to them by management, most programmers can very easily be made to blame each other for “bad code” before blaming the management that required them to work with the bad code in the first place.

Losers

As a group, software engineers are losers. In this usage, I’m not using the MacLeod definition (which is more nuanced) and my usage is halfway pejorative. I generally dislike calling someone a loser, because the pejorative, colloquial meaning of that word conflates unfortunate circumstance (one who loses) with deserved failure. Here, however, it applies. Why do we lose? Because we play against each other, instead of working together to beat the outside world. As a group, we create our own source of loss.

Often, we engage in zero- or negative-sum plays just to beat the other guy. It’s stupid. It’s why we can’t have nice things. We slug each other in the office and wonder why external hires get placed over us. We get into flamewars about minutiae of programming languages, spread FUD, and eventually some snot-nosed dipshit gets the “brilliant” idea to invite nontechnical management to weigh in. The end result is that The Business comes in, mushroom stamps all participants, and says, “Everything has to be Java”.

Part of the problem is that we’re too honest, and we impute honesty in others when it isn’t there. We actually believe in the corporate meritocracy. When executives claim that “low performers” are more of a threat to the company than their astronomical, undeserved salaries and their doomed-from-the-start pet projects, programmers are the only people stupid enough to believe them, and will often gleefully implement those “performance-based” witch hunts that bankers would be smart enough to evade (by looking for better jobs, and arranging for axes to fall on people planning exits anyway). Programmers attempt to be apolitical, but that ends up being very political, because the stance of not getting political means that one accepts the status quo. That’s radically conservative, whether one admits it or not.

Of course, the bankers and traders realize the necessity of appearing to speak from a stance of professional apolitical-ness. Every corporation claims itself to be an apolitical meritocracy, and it’s not socially acceptable to admit otherwise. Only a software engineer would believe in that nonsense. Programmers hear “Tom’s not delivering” or “Andrea’s not a team player” and conceive of it as an objective fact, failing to recognize that, 99% of the time, it means absolutely nothing more or less than “I don’t like that person”.

Because we’re so easily swayed, misled, and divided, The Business can very easily take advantage of us. So, of course, it does. It knows that we’ll sell each other out for even a chance at a seat at the table. I know a software engineer who committed felony perjury against his colleagues just to get a middle-management position and the right to sit in on a couple of investor meetings. Given that this is how little we respect each other, ourselves, and our work, is it any wonder that software engineers have such low status?

Our gender issues

I’m going to talk, just briefly, about our issues with women. Whatever the ultimate cause of our lack of gender diversity– possibly sexism, possibly that the career ain’t so great– it’s a major indictment of us. My best guess? I think sexism is a part of it, but I think that most of it is general hostility. Women often enter programming and find their colleagues hostile, arrogant, and condescending. They attribute that to their gender, and I’m sure that it’s a small factor, but men experience all of that nonsense as well. To call it “professional hazing” would be too kind. There’s often nothing professional about it. I’ve dealt with rotten personalities, fanaticism about technical preference or style, and condescension and, honestly, don’t think there’s a programmer out there who hasn’t. When you get into private-sector technology, one of the first things you learn is that it’s full of assholes, especially at higher levels.

Women who are brave enough to get into this unfriendly industry take a look and, I would argue, most decide that it’s not worth it to put up with the bullshit. Law and medicine offer higher pay and status, more job security, fewer obnoxious colleagues, and enough professional structure in place that the guy who cracks rape jokes at work isn’t retained just because he’s a “rockstar ninja”.

“I thought we were the good guys?”

I’ve often written from a perspective that makes me seem pro-tech. Originally, I approached the satirical MacLeod pyramid with the belief that “Technocrat” should be used to distinguish positive high-performers (apart from Sociopaths). I’ve talked about how we are a colonized people, as technologists. It might seem that I’m making businesspeople out to be “the bad guys” and treating programmers as “the good guys”. Often, I’m biased in that very direction. But I also have to be objective. There are good business people out there, obviously. (They’re just rare in Silicon Valley, and I’ll get to that.) Likewise, software engineers aren’t all great people, either. I don’t think either “tribe” has a monopoly on moral superiority. As in Lost, “we’re the good guys” doesn’t mean much.

We do get the worst (in terms of ethics and competence) of the management/business tribe in the startup world. That’s been discussed at length, in the essay linked above. The people who run Silicon Valley aren’t technologists or “nerds” but Machiavellian businessmen who’ve swooped into the Valley to take advantage of said nerds. The appeal of the Valley, for the venture capitalists and non-technical bro executives who run it, isn’t technology or the creation of value, but the unparalleled opportunity to take advantage of too-smart, earnest hard workers (often foreign) who are so competent technically that they often unintentionally generate value, but don’t know the first thing about how to fight for their own interests.

It’s easy to think ourselves morally superior, just because the specific subset of business people who end up in our game tends to be the worst of that crowd. It’s also a trap. We have a lot to learn from the traders and bankers of the world about how to defend ourselves politically, how to stand a chance of capturing some of the value we create, and how to prevent ourselves from being robbed blind by people who may have lower IQs, but have been hacking humans for longer than we could have possibly been using computers. Besides, we’re not all good. Many of us aren’t much better than our non-technical overlords. Plenty of software engineers would gladly join the bad guys if invited to their table. The Valley is full of turncoat software engineers who don’t give a shit about the greater mission of technology (using knowledge to make peoples’ lives better) and who’d gladly sell their colleagues out to cost-cutting assholes in management.

Then there are the losers. Losers aren’t “the bad guys”. They don’t have the focus or originality that would enable them to pull off anything complicated. Their preferred sin is typically sloth. They’ll fail you when you need them the most, and that’s what makes them infuriating. They just want to put their heads down and work, and the problem is that they can’t be trusted to “get political” when that’s exactly what’s needed. The danger of losers is in numbers. The problem is that so many software engineers are clueless, willing losers who’ll gladly let political operators take everything from them.

When you’re young and don’t know any better, one of the appeals of software engineering is that it appears, superficially, to tolerate people of low social ability. To people used to artificial competition against their peers, this seems like an attractive trait of the industry; it’s not full of those “smooth assholes” and “alpha jocks”. After several years observing various industries, I’ve come to the conclusion that this attitude is not merely misguided, but counterproductive. You want socially skilled colleagues. Being the biggest fish in a small pond just means that there are no big fish to protect you when the sharks come in. Most of those “alpha jocks” aren’t assholes or idiots (talk to them, nerds; you’ll be surprised) and, when The Business comes in and is looking for a fight, it’s always best to have strong colleagues who’ve got your back.

Here’s an alternate, and quite plausible, hypothesis: maybe The Business isn’t actually full of bad guys. One thing that I’ve realized is that people tend to push blame upward. For example, the reputation of venture capitalists has been harmed by founders blaming “the VCs” for their own greed and mismanagement. It gives the grunt workers an external enemy, and the clueless can be tricked into working harder than they should (“they don’t really like us and haven’t given us much, but if we kill it on this project and prove them wrong, maybe they’ll change their minds!”). It actually often seems that most of the awfulness of the software industry doesn’t come directly from The Business, but from turncoat engineers (and ex-engineers) trying to impress The Business. In the same way that young gang members are more prone to violence than elder dons, the most creative forms of evil seem to come from ex-programmers who’ve changed their colors.

The common enemy

So long as software engineers can easily be divided against each other on trivial matters like tabs versus spaces and scrum versus kanban, we’ll never get the respect (and, more importantly, the compensation) that we’re due. These issues distract us from what we really need to do, which is figure out how to work The Business. Clawing at each other, each trying to become the favored harem queen of the capitalist, is suboptimal compared to the higher goal of getting out of the harem.

I’ve spoken of “The Business” as if it were a faceless, malevolent entity. It might sound like I’m anti-business, and I’m not. Business is just a kind of process. Good people, and bad people, start businesses and some add great value to the world. The enemy isn’t private enterprise itself, but the short-term thinking and harem-queen politics of the established corporation. Business organizations get to a point where they cease having a real reason to exist, and all that’s left is the degenerate social contest for high-ranking positions. We, as programmers, seem to lack the skill to prevent that style of closed-allocation degeneracy from happening. In fact, we seem to unintentionally encourage it.

The evil isn’t that software is a business, but that technical excellence has long since been subordinated entirely to the effectively random emotional ups and downs of non-technical executives who lack the ability to evaluate our work. It’s that our weird ideology of “never get political” is actually intensely political and renders us easy to abuse. Business naturally seems to be at risk of anti-intellectual tendencies and, rather than fight back against this process, we’ve amplified it just to enjoy the illusion of being on the inside, among the “cool kids”, part of The Business. Not only does our lack of will to fight for our own interests leave us at the mercy of more skilled business operators, but it attracts an especially bad kind of them. Most business people, actually, aren’t the sorts of corporate assholes we’re used to seeing run companies. It’s just that our lack of social skill appeals to the worst of that set: people who come in to technology to take advantage of all the clueless, loser nerds who won’t fight for themselves. If we forced ourselves to be more discerning judges of character, and started focusing on ethics and creativity instead of fucking tabs-versus-spaces, we might attract a better sort of business person, and have an industry where stack ranking and bastardized-”Agile” micromanagement aren’t even considered.

If we want to improve our situation, we have to do the “unthinkable” (which is, as I’ve argued, actually quite thinkable). We have to get political.

Why corporate conformity doesn’t work

Narcissism and conformism seem, at first glance, to be somewhat opposite of each other. A narcissistic person believes deeply in his own superiority: others are inferior, detestable, and exist to be used toward his own ends. Narcissists demand attention and adoration, and a continual recognition by the group in which they reside that they’re a cut above. If they can’t lead a group, because it won’t let them, they’ll sabotage it to prove (to themselves, if nothing else) that they were smarter all along. When in a leadership position, they’re typically bad at it, much more focused on “managing up”– that is, appealing to the higher-ranking and more successful narcissists above them– than truly leading the team. It’s not surprising that peoples’ narcissistic colors break out in the corporate world, in which invisible differences between people can produce order-of-magnitude differences in remuneration, division of labor, and respect. Most white collar workers secretly believe, like the narcissist but for different reasons, “I’m better than this job.”

Advocatus Diaboli wrote beautifully on this topic:

The most important difference between blue-collar and white-collar workers is not about differences in levels of formal education, artistic tastes or social attitudes. [It is about] how they see their peers. Blue-collar types tend to see their peers as colleagues (good or bad) who are in the same boat they are in. White-collar types see their peers as life-long adversaries who do not belong in the same boat they are in. Some also believe that they “really” belong to a much more exclusive boat and were just plain unlucky to land in the one they are in. (Emphasis mine.)

I’ll get back to that contention, held by many, and (arguably) true for many. Most institutionalized working people are stuck in roles far below their capability. The “I’m too good for these people” contention is pure narcissism, devoid of value or truth. Most people, by definition, are average relative to the groups in which they reside. On the other hand, “I’m too good for this job” is, for many, an accurate reflection of reality. They’re being asked to do things that could be done with far less training, skill, and natural ability. That is, also, an uneasy place to be. People who are overqualified for their jobs can be replaced by (or, worse, surrounded by and eventually answering to) sloppier, less skilled, and cheaper workers. They’re more likely to see their conditions decline (as their positions are eliminated, commoditized, and consolidated) than ever to be recognized (unless they change companies) as built for better things.

Corporate conformity, on the other hand, appears superficially to be a denial of that narcissism. The corporate conformist’s modus operandi is to eliminate even the slightest suspicion of narcissistic stirrings. To distinguish oneself in any way is detrimental. Being the laziest person on the team is deadly, but so is being the hardest worker. Being the office liberal or office conservative or office Christian or office atheist is yet another way to ensure that promotion never happens. It’s not about what one’s political views are. Even when many people agree with him, the office liberal showed arrogance by thinking that his views matter and should be heard. Differentiating oneself should only be done in the blandest way. Even travel can be off-limits: going on more interesting vacations than one’s colleagues or superiors should not be talked about. Many young people attempt to cover gaps in employment with “world travel” and that’s a terrible strategy. If you’re going to lie to cover a work gap, use a painful, trying experience like a failed startup or a resolved health problem instead of travel, which induces resentment. No one envies mononucleosis.

Teamism

Corporate conformity doesn’t demand self-effacing retreat. In fact, the people who never speak up are just as likely to be sidelined as those who speak up in the wrong way. What it does require is adherence to a certain cult: teamism. One might notice that “team” is an overused and abused word in business. Executives call themselves “the leadership team” (gag!) in a public denial of what they actually are: an unprogrammed assortment of the most successful social climbers, still prone to (un-”team”-like) in-fighting.

Furthermore, the terminology of being “on” a team has its own interesting double-speak. At the bottom, team membership is discussed as factual organizational placement: which double-digit-numbered page of the org chart one’s name is written on. “Oh yeah, I’m on Tom’s team.” Executives and upper-tier managers use “on a team” to mean something different– undistinguished, mediocre, unproven, or just unlucky. As in, “if I fuck up this presentation, I might end up on a team in my next job”. The hideous truism that “there is no I in ‘team’”, irrelevant as it usually is, turns out to capture how the business world actually operates. There are high-flying fighter pilots (I’s) with established personal brands. Recruiters know them by name, and great jobs come to them. Then there are teams which house the mediocre, commoditized losers who sit at the bottom and justify their petty salaries by picking up work that no one else wants to do.

Why teamism? Why is it so ubiquitous? Is it effective? (Yes, but to what end?) The answer is that American corporate life has experienced three fundamental phases of development, each corresponding to a position in the fundamental “What is human nature?” debate.  The first, which peaked in the Gilded Age, is what we call the “Theory X” view of management. Theory X holds that people (in particular, employees) are fundamentally dishonest, lazy, and selfish. This is the Hobbesian “human nature is evil” stance. A Theory X manager must intimidate his workers, lest they steal from him or slack. Beginning around 1925, the more progressive industrialists (such as Henry Ford) began to realize that this wasn’t entirely true. Theory X indicates that it’s most efficient to dominate people totally. But, empirically, shortening work hours increased productivity, and increasing wages resulted in both higher morale and more commercial success (because people could afford what was produced). At least relative to the frank indecency of Gilded Age management, human decency was proven to be good business.

From the late 1940s to the 1970s, Theory Y (“human nature is good”) dominated. Under Theory Y, workers are naturally self-motivated and creative, unless corrupted by bad management or intimidated into mediocrity. The Theory Y manager’s job is to remove obstacles and let the people below her create. People who are trusted, for the most part, will end up deserving it. Theory Y sounds wonderful; every workplace should be like that, no? So what killed it off? The culprit was the “elitism is sexy again” mentality that re-emerged in the Reagan Era (1980s). Theory Y was true enough when socioeconomic inequality was at an all-time low. For most workers, there wasn’t enough at stake to justify harming their employers. Doing something harmful, that would damage others’ careers or harm business operations, wasn’t worth it just to get a promotion that brought a 20% raise. People were probably just as narcissistic in the Theory Y heyday as they are now, but a 20% pay bump doesn’t give enough social distance for one to get away from the long-term reputation risks involved in harmful behavior. Change that raise to 500 percent, and it’s a different story. The narcissist is more empowered by the new calculus (“if I succeed, I’ll get away from these losers forever”). In 1965, there wasn’t as much to gain through bad behavior at work as there was in 1985, when selling out an employer’s secrets to the nearest private equity firm got a person a job that paid in one month what the previous job paid out over a year.

The Theory Y workplace was trusting, open, and mutually altruistic and it could be, because external social inequality was at an all-time low. Selfish and bad behavior certainly has always been with us, and so it certainly happened even in the Theory-Y heyday, but there wasn’t the epidemic of it that could threaten a company’s existence. That changed in the 1980s, because the external stakes were so much higher. Workplaces had to become secretive, distrusting, and somewhat ruthless again. Bosses who weren’t feared had their careers ruined by a rising cohort of Boomer yuppies, and ceased to be bosses.

Theory X was driven by simple human greed, but its moral support came from an “original sin” mentality, one that the secularism of the mid-20th century discarded. Calvinism held that work was, literally, a punishment for The Fall. That went out of style, and good riddance. Theory Y, however, proved itself to be too optimistic about human nature and what people are. By 1985, we’d seen as a society that people can be highly creative, industrious, and even altruistic with only moderate reward. The successful moon landing was executed not by billionaires, but by men and women who loved the work. So we’d seen some impressive Theory-Y victories. You simply wouldn’t be able to get something like the Apollo program with Theory-X management. We’d also seen that Theory Y’s optimism (echoes of which still exist in the culture of Silicon Valley, despite extreme Theory-X behavior at the top) didn’t capture the whole picture. Given a sweet enough carrot, some people would do the wrong thing, and while it might not be that most people would, we’d seen that there are enough such people to present an existential threat to a business– or, at least, to a too-trusting executive’s career. We had to invent something new. If Theory X was the original-sin thesis, Theory Y was its antithesis; the synthesis became what I call “Theory Z”, or the cult of teamism.

Is human nature good, or evil? For a simplification, let’s identify “good” with altruism and “evil” with militant selfishness (egoism). Few people are degenerately egoistic, but even fewer are universally altruistic. Most people are localistic. They do care about people and things beyond themselves: their families, their physical neighborhoods, their companies and nations, and so on. People view themselves at the center of a nested collection of neighborhoods and care about the closest and smallest ones the most. The greater the distance (social, tribal, or physical), the less they care. None of this is surprising, and what it tells us is that human nature isn’t prevailingly “good” or “evil”. It’s somewhere in between, for most of us. In the corporate context, this predicts that peoples’ “corporate altruism” should be strong in a small company and weaker in a large one, and we see that to be true. It tells us that people would rather delegate undesirable work to a remote office (especially in a foreign country) than burden their officemates. That, we also see. The social and emotional bonds that are relevant tend to be formed over time through shared experience and physical co-presence. While Theory X motivates by intimidation, and Theory Y believes people are intrinsically rightly motivated so long as management doesn’t corrupt them, Theory Z attempts to harness team cohesion: bonds formed by physical closeness as well as shared experience (and suffering). Theory X was obsessed with the egoistic human, and Theory Y believed in a fundamental altruist; Theory Z is a practical (but intensely manipulative) approach focused on localism. The Theory Z manager recognizes that the individual worker doesn’t give a damn about the company as a whole (and, since Theory Z is closer to X than Y, most companies aren’t worth caring about, from a worker’s perspective) but is willing to bet that he won’t fuck over his buddies.

Mike Cohn explains it well in the software context, with this short blog post, “Sssh… Agile Is All About Micromanaging.” Those blessed enough not to be familiar with the cult that has been made of “Agile” can still learn much from this revelation.

[T]he deep, dark secret of agile: It’s all about micromanagement. Almost every principle and practice of agile is there to support micromanagement.

  • The daily scrum is about micro-managing the team’s daily work plans and making sure that everyone is doing what they say they’ll do. […]
  • Pair programming is about making sure that programmers don’t lose focus, don’t goldplate, don’t work on only the fun stuff, and that they clean things up.

Ah, but who is it that is doing this micromanagement? It’s the team.

The purpose of Theory Z teamism is to replace one boss five hundred feet away with ten bosses twenty feet away. It’s to diffuse responsibility when people are rejected (fired). The official manager can deflect responsibility by claiming that “the team” discarded the unwanted employee. It also makes it easier for people to play political games while remaining vague about whom they are attacking. Instead of discussing specific people, they can say “the tech team is weak” and (in truth) target specific people (possibly the CTO, possibly the specific person on that team tied to the matter at hand) with plausible deniability. The purpose of Theory Z teamism is to make Theory X (micromanagement, prevailing distrust, executive greed) look like Theory Y (commitments and “consensus”). In a world that has outgrown top-down religion (Theory X) but found secular humanism (Theory Y) toothless, local microcults are the new rage. Theory Z encourages management to tailor microcults to specific corners of the company. The cloying, common theme within and between these microcults is team. The executive suite is “the leadership team”. HR won’t let you call a disliked employee a “shithead”, so you call him “not a team player”. It enables the upper management (still inclined to Theory X thinking) to hide the true dynamic of the relationship between (exploited) employee and (rent-seeking) organization by redirecting the focus to employee and “team”. You wouldn’t drop the ball on your team, would you, Mac?

If my negativity about teamism makes it sound like I’m “anti-team”, that’s not the impression that I’m trying to convey. When an actual team synergy exists, it’s great for everyone. It’s more fun to be on a winning team than to win alone. All that said, the truth of most organizations is that there are no winning teams. The winners are executives, ace fighter pilots, and proteges who get to move about the company as themselves. It’s the rest, the non-Elect losers, who are “on a team”. The corporate world is one in which the winners interoperate with multiple teams as they choose, rather than being stuck at one table in an assigned seat. Of course, the executives still call themselves “the leadership team”, but that’s just how they market themselves internally within the organization. They aren’t a team in any meaningful sense. They’re out for themselves, and they wouldn’t be executives if they were any other way. Those who are “on a team” are the ones who don’t have any independent credibility, but who serve at the mercy of parochial “team leaders” (middle managers). Theory Z isn’t about teamwork. It’s about corralling the disaffected losers that a company still needs and saying, “be a team, now!”

My issue, then, isn’t with teamwork or genuine team formation, because those aren’t what Theory-Z teamism is. Teamism is forced team identity. Its seed mythology is that those who have been slotted by fortune (or misfortune) to answer to the same middle manager constitute a “team” in any meaningful sense. It lends false objectivity (“not a team player”) to the language used to denigrate and discard those who awaken and realize that the corporate gods don’t exist. Finally, it glorifies mediocrity and slave mentality, by applying terminology with positive associations (such as genuine teams that achieve things that would be extremely difficult, if not impossible, for an individual) to the unfortunate, miserable state of being at the bottom of an organization.

Counter-narcissism

Teamism (“team unity”) is the justification for the extreme conformity that the corporate environment demands. When a person has multiple bosses, with new dotted lines forming at all times, there isn’t room for self-expression, and the optimal strategy is to be as bland and average as one can be, except in short bursts of targeted activity aimed at a specific promotion. (Failure often results in termination, so one must be prepared for that.) To stand out is taken, implicitly, as a personal statement of “I don’t have to follow your rules”, which is interpreted as “I’m too good to follow your rules”. While narcissism is tacitly accepted in the executive ranks– at a high enough level, they’re all narcissists or they wouldn’t be there– even a whiff of narcissism is viewed as toxic when it appears “on the team”. Teamism, then, is a militant anti-narcissism. It seeks out and punishes those who think they’re too good for their jobs, or who just seem to think so. As a side effect, this also punishes excellence, because people who do their jobs uncannily well are going to appear to be up to something (Theory X). Teamism is great at inducing uniformity and reliable mediocrity, and quite successful on its own terms, but it does a terrible job of encouraging people to perform beyond the Socially Acceptable Middling Effort, or the SAME.

In the short term, teamism does a fantastic job of getting what the executives want, which is for people to work hard under an assumed “social contract” with the team. The thing is that “the team” has no power. The organization can break the social contract at any time, and argue that it never existed. Over the long term, this leads inexorably to corporate degeneration. (Executives know this, but tolerate it because they’ll be promoted away from their posts before it’s a personal issue for them.) Teamism encourages people to target their effort levels toward the SAME. People who are talented can usually achieve the SAME and have energy to spare, and will find their way to better places. (This may have them promoted into upper management, or moved to better teams, or externally promoted into another company; or, it might get them isolated, rejected, and fired. Either way, the result is the same: they leave.) This has an “evaporative” effect: the more competent people leave, and the less able stay. Underperformers gradually push the SAME downward, it being safer to slightly underperform than overperform in most organizations. The SAME drifts, slowly but inevitably, toward zero. After a while, upper management will take note, but by this time, it’s often too late to do anything about it, and the remedies that do exist are toxic ones that don’t work. Executives might attempt to institute stack ranking, for example, to scare people back into working. This, however, re-awakens the narcissism and political machination that the teamism was invented to tamp down.

What is the corporate value of teamism? Why do contemporary corporate executives favor it over the management-by-fear Theory-X style, or the permissive altruism of the Theory-Y school? The short answer is that Theory Z teamism is Theory X with Theory Y trappings. It allows organizations to behave in a Theory X way (rent-seeking, throwing loyal employees overboard for any cause) while encouraging the worker to focus locally, on people in the same boat, with whom one is inclined to empathize. The longer answer is that teamism is better at exploiting cognitive dissonance as well as guilt. Teamism is aggressive counter-narcissism, its purpose being to inculcate people with the belief that they aren’t too good for their shitty jobs. If they sit on teams, and they see similarly talented people underutilized on low-quality work, it strengthens the executive case. Abstractly, most white-collar workers think (or, I would argue, know) that they’re too good for what they’re asked to do. When they go into an office and see others suffering just as much as they are, it’s much harder to hold that view, because (although I disagree with this reasoning) it equates the thought, “I am too good for this job”, with “I am better than him, that guy doing an equivalently crappy job”. Some people think that way about those they sit next to on a daily basis, but most people don’t like thinking that way. Hence, cognitive dissonance sets in. If people become used to the sight of highly qualified people in humiliating, subordinate roles doing menial work, they’re likely to accept that situation for themselves.

The business view

What, pray tell, do executives gain from this? After all, isn’t it a business loss to underemploy people? On paper, it might be. Potential revenue (opportunity cost) is squandered when highly-qualified people are assigned to low-quality work. However, the profit-maximizing organization is a fictitious person. It doesn’t really exist, insofar as it cannot implement its will. For that, it relies on executives, individuals who’d rather hold a high degree of control in a malfunctioning organization than risk that control to improve it. The executive doesn’t give a damn about the organization’s profitability or long-term health. He only cares about the effect of those variables on his career, and he realizes that his needs are best served by keeping the people around him loyal. Knowing this, he often benefits more by hoarding control than by anything else. A false scarcity in the allocation of important or desirable work is a powerful tool. Giving some of that power up (say, by implementing open allocation) might make the organization better and more successful, but it would also make the organization less easily controllable.

There is, at root, a fundamental conflict of interest between “the business” (and its desire to maximize profit, revenue, or subjective health) and the executives who actually control it. It is best for the business that all employees have the chance to contribute as usefully as their talents allow, but it’s best for executives to keep important work assigned only to proven loyalists who, even should they outperform their executive patrons, will never challenge the position of the ones who lifted them.

False scarcity

The reality, for most white-collar workers, is that their narcissistic impulse isn’t entirely wrong. Most of them are too qualified for their positions. To understand why I can assert this as if it were an objective fact, let’s examine the nature of a subordinate organizational role. To say that someone is “too good” for a specific task is a bit offensive and not especially defensible. I have cats, I clean their litter box, and I’m not “too good” for that job because someone has to do it. From first principles, the fact that work is unpleasant and menial doesn’t mean that a talented person should be “too good” for it. (People who think otherwise are likely to be actual narcissists.) The positive definitions of a role (i.e. the things one is expected to do) don’t make a job “beneath” a talented person, so I won’t focus on unpleasant duties. Rather, I’ll focus on the negative definitions associated with a subordinate role, or the do-nots. Don’t attempt to cultivate a relationship with anyone above your manager. Don’t work on things you weren’t explicitly assigned to do. Don’t speak “off script” in the presence of important people. These prohibitions (and not specific undesired tasks) are what justify a person of even moderate talent in believing that he’s simply too good for the role.

I’d argue that most people are, factually, too good for their jobs. As I’ve said above, I don’t think anyone is too good to do an unpleasant task if it must be done. The truth is, however, that most white-collar work isn’t about performing necessary tasks or about producing anything. It’s about managing perceptions, helping one parochial warlord beat out another, and appearing subordinate enough to (a) please one’s immediate manager, and (b) present a positive image of that manager to his superiors. Most white-collar office workers have to be present for 8 (or more) hours each day not because there’s that much work (there usually isn’t) but because working fewer hours would present the image that their bosses can’t control their charges. For the white-collar worker, most of one’s “work” time is spent supporting authority through sacrifice (most visibly, of time) rather than producing anything real. Most of the stress comes not from the tasks to be performed, but from the chronic job of presenting oneself in a way that lets one acquire and maintain permission to do meaningful work, which is kept in short supply by the parasites (executives) who define and allocate it. Most of the work that is done is just there to keep up appearances in an organization that could do just fine without half its people, but (luckily for them) is constitutionally incapable of figuring out which half.

When talented people realize their real jobs aren’t to produce but to subordinate, they conclude (accurately): I’m too good for this bullshit. And they’re right, all of them. They deserve better. That’s not narcissism. It’s accurate self-perception within an institutional prison that shouldn’t exist.

If people awakened to this at once, and collectively, it could spell the end of the current corporate system. What has to happen, to prevent that, is to single out those who awaken and shame them as narcissists. Closed allocation systems work exactly to that purpose. One who puts himself out there by (usually unwisely) suggesting that he’d be more useful to the company doing something else can then be cast as acting as if he’s too good for his team and immediate manager. This makes him disliked and will have him rejected by the team (and fired) in time, intimidating those who remain. Toward the executives’ goal of maintaining control at all costs (even when it harms or may destroy the company) this is brilliant, because even the slightest assertion of individuality is penalized automatically. Theory Z teamism is perfect from a parasitic executive’s perspective. Most people are intelligent enough to distrust and dislike corporate executives, even within their own companies. Few people are stupid enough to overlook the fact that at least 80 percent of these high priests called “executives” are overpaid, pampered, worthless parasites. The result is that anti-executive sentiment (possibly leading to unionization, which would threaten management’s power and profits) would spread quickly if people weren’t inculcated into a sort of cultish, corporate religion. The brilliance of Theory Z is that it has people convinced they are working not for upper management (which they, rationally and rightly, couldn’t give a damn about) but for their immediate team.

The result is a punitive, miserable system in which even the slightest self-assertions– even normal human impulses– are treated as arrogant narcissism. (Many religions and most cults use the same dynamic; declining to believe certain improbable claims is recast as rejecting one’s community.) Few people will take that social risk, and the result is an extreme conformism.

The twist

If a sort of militant anti-individualism, presented as anti-narcissism, takes hold in an organization, extreme conformity will result. Perhaps surprisingly, most people are unaware that it has happened. “My company isn’t like that.” Sorry, but it probably is. Unless you have direct on-the-spot responsibilities to important customers, showing up at specific times, regardless of whether there is work to do, is conformity. Working only on assigned projects, or only on projects assigned to a specific subcorner (sorry, I mean “team”) of the company, is conformity. Spending 8 hours per day in a state of low-level social anxiety (the long-term health risks of which are poorly publicized) not because it produces useful work, but to uphold a power relationship, is conformity. I think, sadly, that I’m accurate in arguing that over 90 percent of American workplaces are conformist hellholes that destroy creativity and squander (abuse, even) talent. Some may think that technology companies or VC-funded startups might provide a way out but, empirically, those are some of the worst in this regard.

So what about narcissism and its purported antithesis of “being a team player”? Ultraconformist workplaces might be undesirable, but shouldn’t one agree that narcissism is a bad (and, to a business, dangerous) thing? Might it be worth it to suffer a bit of conformity if the negative effects of the true narcissist are curtailed? Don’t the people at the bottom need to learn, anyway, that they aren’t special snowflakes?

Reality intrudes. Here’s the thing about conformity: it might seem like an antidote for narcissism, but it needs to be enforced. By whom? Who wants the role of conformity’s Enforcer? Generally, such people turn out to be narcissists, those who arrogate the role of speaking for a large group (as large as they can get) because of the power it commands. Narcissists, of course, love power, and have the deepest understanding of the impulses (narcissistic and otherwise) that impel others to compete with them for it. The result is that, the more conformist an organization’s culture is, the more power that organization has already given away to true narcissists.

This gets to the heart of what I’ve taken to calling the Organizational Problem. Simply put, organizations cannot be stable, because they rely on people to keep them up, and because the power associated with that upkeep often attracts the worst kinds of people. Organizations have a justified fear and dislike of the true narcissist, because such people are truly toxic when in positions of power. What they are unable to prevent, seemingly without exception, is the toxic narcissist’s ability to gain entry into whatever suborganization (be it management or an official “culture police”, as some startups have) they rely on to spot and kick out narcissists. Psychopaths truly are the cancer cells of the human organization, the fittest ones not only able to elude the immune system, but often capable of redirecting it against healthy cells.

The case for corporate conformity is that it blocks the advancement of narcissists, who supposedly can’t thrive in a conformist environment. The (completely wrong) assumption is that, because the conformist environment denies individual expression (much less admiration), the psychopath or narcissist will be unable to function in it. The reality is that narcissists (and especially psychopaths) love conformist environments. The slightly-narcissistic normal person sees corporate conformity– the rules and expectations it imposes on people– as restraint; the psychopath sees those rules as weapons. It’s no surprise that psychopaths like weapons. (Non-psychopathic narcissists do, too, but for different reasons. In general, they prefer to wear but not use the sword.)

The Organizational Problem is so convoluted and deep that I cannot offer a general solution. I wish I could. I’ve tried to find one and, honestly, haven’t been able to come up with anything simple enough to impart in a few thousand words. I don’t think there is a “closed-form” answer. I think the best that we can do, on the ground, is to remove the obstacles that don’t work, on the grounds that they generate social complexity that will, in general, benefit the narcissist and the psychopath. The first step for us, all of us, might be to accept our basic humanity and reject the toxic conformity that seems to settle, if unopposed, in the corporate world.

What’s a mid-career software engineer actually worth? Try $779,000 per year as a lower bound.

Currently, people who either have bad intentions or a lack of knowledge are claiming that software engineer salaries are “ridiculous”. Now, I’ll readily admit that programmers are, relative to the general population, quite well paid. I’m not about to complain about the money I make; I’m doing quite well, in a time and society where many people aren’t. The software industry has many problems, but low pay for engineers (at least, for junior and mid-career engineers; senior engineers are underpaid, but that’s an issue for another time) doesn’t crack the top 5. Software engineers are underpaid relative to the massive amount of value they are capable of delivering (if given proper projects, rather than mismanaged as is typical). In comparison to the rest of society, they do quite well.

So what should a software engineer be paid? There’s a wide disparity in skill level, so it’s hard to say. I’m going to focus on a competent, mid-career engineer. This is someone with between 5 and 10 years of experience, with continual investment in skill, and probably around 1.6 on this software engineering scale. He’s not a hack or the stereotypical “5:01” programmer who stopped learning new skills at 24, but he’s not a celebrity either. He’s good and persistent and experienced, but probably not an expert. In the late 1990s, that person was just cracking into six-figure territory: $100,000 per year. No one thought that number was “ridiculous”. Adjusted for inflation, that’s $142,300 per year today. That’s probably not far off what an engineer at that level actually makes, at least in New York and the Bay Area.
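
(If you want to check that adjustment yourself, it’s a one-line calculation; the sketch below simply assumes the roughly 42 percent of cumulative CPI inflation between 1999 and 2014 that the $142,300 figure implies.)

    # Back-of-envelope inflation adjustment for the 1999 salary figure.
    # The 1.423 factor is an assumed cumulative CPI multiplier, 1999 -> 2014.
    SALARY_1999 = 100000
    CPI_FACTOR_1999_TO_2014 = 1.423

    salary_in_2014_dollars = SALARY_1999 * CPI_FACTOR_1999_TO_2014
    print("${:,.0f}".format(salary_in_2014_dollars))  # -> $142,300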

Software engineers look “ridiculous” to people who haven’t been software engineers in 20 years (or ever) and whose numbers are way out of date. If you’re a Baby Boomer whose last line of code was in 1985, you’re probably still thinking that $60,000 is a princely sum for a programmer to earn. When one factors inflation into the equation, programmer salaries are only “at record high” because inflation is an exponential process. Taking that out, they’re right about where history says they should be.

I would argue, even, that programmer salaries are low when one takes a historical perspective. The trend is flat, adjusting for inflation, but the jobs are worse. Thirty years ago, programming was an R&D job. Programmers had a lot of autonomy: the kind of autonomy that it takes if one is going to invent C or Unix or the Internet or a new neural network architecture. Programmers controlled how they worked and what they worked on, and either answered to other programmers or to well-read scientists, rather than to anti-intellectual businessmen who regard them as cost centers. Historically, companies sincerely committed to their employees’ careers and training. You didn’t have to change jobs every 2 years just to keep getting good projects and stay employable. The nature of the programming job, over the past couple of decades, has become more stressful (open-plan offices) and careers have become shorter (ageism). Job volatility (unexpected layoffs and even phony “performance-based” firings in lieu of proper layoffs, in order to skimp on severance because that’s “the startup way”) has increased. With all the negatives associated with a programming job in 2014 that just didn’t exist in the 1970s and ’80s, flat performance on the salary curve is disappointing. Finally, salaries in the Bay Area and New York have kept abreast of general inflation, but the costs of living have skyrocketed in those “star cities”, while the economies of the still-affordable second-tier cities have declined. In the 1980s and ’90s, there were more locations in which a person could have a proper career, and that kept housing prices down. In 2014, that $142,000 doesn’t even enable one to buy a house in a place where there are jobs.

All of those factors are subjective, however, so I’ll discard them. We have sufficient data to know that $142,000 for a mid-career programmer is not ridiculous. It’s a lower bound for the business value of a mid-career software engineer in 1999: we know that employers actually paid it, and they might have been willing to pay more. This alone gives us victory over the assclowns claiming that software engineer salaries are “ridiculous” right now.

Now, I’ll take it a step further and introduce Yannis’s Law: programmer productivity doubles every 6 years. Is it true? I would say that the answer is a resounding “yes”. For sure, there are plenty of mediocre programmers writing buggy, slow websites and abusing Javascript in truly awful ways. On the other hand, a good programmer who wants quality has more recourse than ever: rather than committing to commercial software, she can draw on the open-source world. There’s no evidence for a broad-based decline in programmer ability over the years. It’s also easy to claim that the software career “isn’t fun anymore” because so much time is spent gluing existing components together, and accounting for failures of legacy systems. I don’t think these gripes are new; I think tools are improving, and a 12-percent-per-year rate sounds about right. Put another way, one who programs exactly as was done in 1999 is only about 18 percent as productive as one using modern tools. And yet that programmer, only 18% as productive as his counterpart today, was worth $142,000 (2014 dollars) back then!
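
(For anyone who wants to see where that 18 percent comes from, here is the compounding spelled out; nothing is assumed beyond the 12-percent-per-year rate and the 15 years between 1999 and 2014.)

    # Yannis's Law, taken at roughly 12% productivity growth per year.
    ANNUAL_GROWTH = 0.12
    YEARS = 15  # 1999 -> 2014

    multiplier = (1 + ANNUAL_GROWTH) ** YEARS  # ~5.47x as productive
    relative_1999 = 1 / multiplier             # ~0.18, i.e. about 18%

    print("{:.2f}x, {:.0%}".format(multiplier, relative_1999))  # 5.47x, 18%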

Does this mean that we should throw old tools away (and older programmers under the bus)? Absolutely not. On the contrary, it’s the ability to stand on the shoulders of giants that makes us able to grow (as a class) at such a rate. Improved tools and accumulated knowledge deliver exponential value, but there’s a lot of knowledge that is rarely learned except over a decades-long career. Most fresh Stanford PhDs wouldn’t be able to implement a performant, scalable support vector machine from scratch, although they could recite the theory behind one. Your gray-haired badasses would be rusty on the theory but, with a quick refresh, stand a much greater chance of building it right. Moreover, the best old ideas tend to recur, and long-standing familiarity is an advantage. The most exciting new programming language right now is Clojure, a Lisp that runs on the Java Virtual Machine. Lisp, as an idea, is over 50 years old. And Clojure simply couldn’t have been designed by a 25-year-old in Palo Alto. For programmers as a class, the general trend is a 12 percent annual increase in productivity; but individuals can reliably improve at 30 percent per year or more, and sustain that over decades.

If the business value of a mid-level programmer in 1999 was $142,000 in today’s dollars, then one can argue that today, with programmers roughly 5.5 times as productive (12 percent compounded over 15 years), the true value is $779,000 per year at minimum. It might be more. For the highly competent and for more senior programmers, it certainly is higher. And here’s another thing: investors and managers and VPs of marketing didn’t create that surplus. We did. We are more than five times as productive as we were in the 1990s not because they got better at their jobs (they haven’t) but because we built the tools to make ourselves (and our successors) better at what we do. By rights, it’s ours.
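
(Putting the two numbers together, the arithmetic behind the headline figure is a single multiplication; the only assumptions are the inflation factor and the growth rate stated above.)

    # Lower bound on a mid-career engineer's business value, per the argument above.
    VALUE_1999_IN_2014_DOLLARS = 142300   # inflation-adjusted 1999 salary
    PRODUCTIVITY_MULTIPLIER = 1.12 ** 15  # ~5.47, from Yannis's Law

    value_2014 = VALUE_1999_IN_2014_DOLLARS * PRODUCTIVITY_MULTIPLIER
    print("${:,.0f} per year".format(value_2014))  # -> roughly $779,000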

Is it reasonable, or realistic, to argue that mid-career software engineers ought to be earning close to a million dollars per year? Probably not. It seems to be inevitable, and also better for society, that productivity gains are shared. We ought to meet in the middle. That we don’t capture all of the value we create is a good thing. It would be awful, for example, if sending an email cost as much as sending a letter by post or, worse yet, as much as using the 19th-century Pony Express, because the producers of progress had captured all of the value for themselves. So, although that $779,000 figure adequately represents the value of a decent mid-career engineer to the business, I wouldn’t go so far as to claim that we “deserve” to be paid that much. Most of us would be ecstatic with real equity (not that 0.05% VC-istan bullshit) and a quarter of that number– and with the autonomy to deliver that kind of value.

If you’ll ever die, don’t apply

Some day, I will die. So will everyone I know, everyone who has read this post, and everyone they know. I’m probably more than a third of the way there. I’m not especially afraid of it. Actually, I’m curious. Though I don’t subscribe to any religion or believe in anything resembling an anthropomorphic god or gods, I think it’s more likely than not that something interesting will be on the other side. (If I’m wrong, I won’t know.) All that said, I’m going to die some day, and even before that I’ll experience involuntary change (aging). It’s not an easy thing to forget, and it’s not a thing that I should forget. There’s no virtue in ignorance. And, despite not having any specifically religious faith, I’m pretty sure that what a person does in this life matters. Something in me is convinced that the ultimate reality of human existence is somewhere between the dichotomous nihilisms of materialist reductionism (existence ends fully at death, so all will be erased) and mainstream religion (only the next world matters). Perhaps that is why Buddhism (and its tendency toward middle paths on such questions) appeals to me more than the mainstream interpretations of the Abrahamic faiths (in particular, Christianity). The assertion that believing in the existence of a certain being (who left no evidence of his existence) is the key variable in separating people out for eternal bliss or agony is, in truth, far more nihilistic than most atheistic beliefs.

I’m not a nihilist; whatever this life is, it’s not to be thrown away. I conceive of myself, however, as a realist. I’m not immortal, and even though I haven’t really begun to age (I’m only 30) in any painful or even inconvenient way, I’m increasingly aware of the fact that I will die. People who are young and healthy one year are dead in the next. I’ve seen it happen too many times.

I’ve written at length about Silicon Valley’s perverse bubble culture and its obsession with youth. There isn’t a meaningful physiological difference between a 22-year-old and a 35-year-old that has any business importance. I think, however, I understand what Silicon Valley’s youth obsession is actually about: not age, but immortality. No one is immortal, but some people think they are. They haven’t learned the value and price of time yet. Nothing bad has happened to them yet. They still conceive of themselves as invulnerable.

These venture capitalists don’t just want to fund 22-year-old white males. They fund a specific kind of 22-year-old white male: someone who can trick himself into living outside of time (and, more practically, throwing his time and health away, often at a pace of 100 hours per week, for someone else’s benefit) because he hasn’t yet been reminded, by life, that he very much lives in time.

The difference between the 35-year-old and the 22-year-old is not personal health but experience. The 35-year-old has had parents, aunts, and uncles get old, get sick, and die. He’s seen college classmates (and more than an unlucky one or two) lose the cancer lottery and die at a ridiculously young age. He realizes that time brings involuntary, unexpected, and sometimes painful change, and that a year of life spent “paying dues” or enriching ingrates is a permanent loss. Most 22-year-olds haven’t learned those lessons yet, and those who have are unfundable in the current Valley.

The “job hopper” stigma and “team player” nonsense of Corporate America, after all, make sense for people who haven’t figured out yet that their time is finite: that the “technological singularity” might not happen in the next 100 years, that “old people” were once as young as them, that they’ll get old and die and that it will always seem too soon. Those who believe themselves to be immortal are appealing to the exploitative overseers. They value the present at zero, convinced of some superior and unending future (one that will never come, except for the politically sacrificial and lucky). They are (for the moment) timeless and therefore without much memory.

I’ve always feared that if humans became technologically immortal, but we did not succeed in ending scarcity, the first thing an elite would do is create a “zombie” class of people who (like us mortals) lose most or all memory every hundred years or so, lest they acquire the knowledge that would enable them to compete with the existing elite. They would continue to die (in effect) but be biologically maintained as peak-of-their-prime adults (to perform work on others’ behalf) and probably conceive of themselves as immortal. Obviously, that doesn’t exist yet; the technology for indefinite life extension and memory erasure isn’t there. But culturally, it’s already happening. The old class of “immortals” (once its members become aware of their own mortality and begin to seek genuine purpose and value in their work, rather than the enrichment of an ingrate elite) is discarded and a new one is put in place.

Silicon Valley is for the immortal. If you’ll ever die, don’t apply.

What Silicon Valley’s ageism means

Computer programming shouldn’t be ageist. After all, it’s a deep discipline with a lot to learn. Peter Norvig says it takes 10 years, but I’d consider that number a minimum for most people. Ten years of high-quality, dedicated practice to the tune of 5-6 hours per day, 250 days per year, might be enough. For most people, it’s going to take longer, because few people can work only on the interesting problems that constitute dedicated practice. The fundamentals (computer science) alone take a few thousand hours of study, and then there’s the experience of programming itself, which one must do in order to learn how to do it well. Getting code to work is easy. Making it efficient, robust, and legible is hard. Then there’s a panoply of languages, frameworks, and paradigms to learn and absorb and, for many, to reject. As an obstacle, there’s the day-to-day misery of a typical software day job, in which so much time is wasted on politics and meetings and pointless projects that an average engineer is lucky to have 5 hours per week for learning and growth. Ten years might be the ideal; I’d bet that 20 years is typical for the people who actually become great engineers and, sadly, the vast majority of professional programmers never get anywhere close.
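
(To put rough numbers on that, here is a sketch using the illustrative figures from the paragraph above, not anyone’s actual schedule: ten years at that pace is on the order of 13,000-14,000 hours, while a day job that leaves only 5 hours per week for real growth supplies only a few hundred hours a year. That gap is why the ten-year ideal stretches to twenty or more in practice.)

    # Rough arithmetic behind the "10 years ideal, 20+ years typical" claim.
    # All figures are the illustrative assumptions from the paragraph above.
    hours_per_day, days_per_year, years = 5.5, 250, 10
    dedicated_practice = hours_per_day * days_per_year * years  # 13,750 hours

    day_job_hours_per_week, weeks_per_year = 5, 48  # assumed working weeks
    day_job_hours = day_job_hours_per_week * weeks_per_year     # 240 hours/year

    print(dedicated_practice, dedicated_practice / day_job_hours)  # ~13,750 hours; ~57 years at day-job pace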

It takes a long time to become actually good at software engineering. The precocious are outliers. More typically, people seem to peak after 40, as in all the other high-skill disciplines. It seems, then, that most of the age-related decline in this field is externally enforced. Age discrimination is not an artifact of declining ability but of changing perceptions.

It doesn’t make sense, but there it is.

Age discrimination has absolutely no place in technology. Yet it exists. After age 40, engineers find it increasingly difficult to get appropriate jobs. Startups are, in theory, supposed to “trade against” the inefficiencies and moral failures of other companies but, on this issue, the venture capital (VC) funded startups are the biggest source of the problem. Youth and inexperience have become virtues, while older people who push back against dysfunction (and, as well, outright exploitation) are cited as “resistant to change”.

There’s another issue that isn’t derived from explicit ageism, but might as well be. Because our colonizers (mainstream business culture) are superficial, they’ve turned programming into a celebrity economy. A programmer has two jobs. In addition to the work itself, which is highly technical and requires continual investment and learning, there’s a full-time reputation-management workload. If a machine learning engineer works at a startup and spends most of his time in operations, he’s at risk of being branded “an ops guy”, and may struggle to get high-quality projects in his specialty from that point on. He hasn’t actually lost anything– in fact, he’s become far more valuable– but the superficial, nontechnical idiots who evaluate us will view him as “rusty” in his specialty and, at the least, exploit his lack of leverage. All because he spent 2 years doing operations, because it needed to be done!

As we get older and more specialized, the employment minefield becomes only more complicated. We are more highly paid at that point, but not by enough of a margin to offset the increasing professional difficulties. Executives cite the complexity of high-end job searches when demanding high salaries and years-long severances. Programmers who are any good face the same, but get none of those protections. I would, in fact, say that any programmer who is at all good needs a private agent, just as actors do. The reputation management component of this career, which is supposed to be about technology and work and making the world better, but is actually about appeasing the nontechnical, drooling patron class, constitutes a full-time job that requires a specialist. Either we need unions, or we need an agent model like Hollywood, or perhaps we need both. That’s another essay, though.

The hypocrisy of the technology employer

Forty years ago, smart people left finance and the mainstream corporate ladder for technology, to move into the emerging R&D-driven guild culture that computing had at the time. Companies like Hewlett-Packard were legitimately progressive in how they treated their talent, and were rewarded for it with their employees’ commitment to making great products. At that time, Silicon Valley represented, for the most technically adept people in the middle class, a genuine middle path. The middle path will require its own essay, but what I’m talking about here is a moderate alternative between the extremes of subordination and revolution. Back then, Silicon Valley was the middle path that, four decades later, it is eagerly closing.

Technology is no longer managed by “geeks” who love the work and what it can do, but by the worst kinds of business people who’ve come in to take advantage of said geeks. Upper management in the software industry is, in most cases, far more unethical and brazen than anywhere else. To them, a concentration of talented people who don’t have the inclination or cultural memory that would lead them to fight for themselves (labor unions, agent models, ruthlessness of their own) is an immense resource. Consequently, some of the most disgusting HR practices (e.g. stack ranking, blatant sexism) can be found in the technology industry.

There’s one especially damaging habit of software employers that, I think, has hurt the industry immensely. Technology employers demand specialties when vetting people for jobs. General intelligence and a proven ability to code aren’t enough; one has to have “production experience” in a wide array of technologies invented in the past five years. For all their faults, the previous regime of long-lasting corporations was not so bigoted when it came to past experience, trusting people to learn on the job as needed. The new regime has no time for training or long-term investment, because all of these companies have been built to be flipped to a greater fool. In spite of their bigoted insistence on pre-existing specialties in hiring, they refuse to respect specialties once people are hired. Individual programmers who attempt to protect their specialties (and, thus, their careers) by refusing assignment to out-of-band or inferior grunt work are quickly fired. This is fundamentally hypocritical. In hiring, software companies refuse to look twice at someone without a yellow brick road of in-specialty accomplishments of increasing scope; yet, once employees are inside and fairly captive (due to the pernicious stigma against changing jobs quickly, even with good reason) they will gladly disregard that specialty, for any reason or no reason. Usually, this is framed as a business need (“we need you to work on this”) but it’s more often political, and sometimes personal. Moving talent out of its specialty is a great way for insecure middle managers to neutralize overperformance threats. In a way, employers are like the pervert who chases far-too-young sexual partners (if “partner” is the right word here) for their innocence, simply to experience the thrill of destroying it. They want people who are unspoiled by the mediocrity and negativity of the corporate world, because they want to inflict the spoilage. The virginity of a fresh, not-yet-cynical graduate from a prestigious university is something they want all for themselves.

The depression factor

I’m not going to get personal here, but I’m bipolar so when I use words like “depression” and “hypomania” and “anxiety” I do, in fact, know what the fuck I am talking about.

A side effect of corporate capitalism, as I see it, is that it has created a silent epidemic of middle-aged depression. The going assumption in technology that mental ability declines after age 25 is not well supported, and it is in fact contrary to what most cultures believe about intelligence and age. (In truth, various aspects of cognition peak at different ages– from language acquisition at age 5 to writing ability around 65– and there’s also so much individual variation that there’s no clear “peak” age.) For general, holistic intelligence, there’s no evidence of an age-bound peak in healthy people. While this risks sounding like a “No True Scotsman” claim, what I mean is that every meaningful age-related decline in cognition can be traced to a physical health problem and not to aging itself. Cardiovascular problems, physical pain, and side effects of medication can impair cognition. I’m going to talk about the Big One, though, and that’s depression. Depression can cause cognitive decline. Most of that loss is reversible, but only if the person recovers, and, in many cases, they never do.

In this case, I’m not talking about severe depression, the kind that would have a person considering electroconvulsive therapy or on suicide watch. I’m talking about mild depression that, depending on time of diagnosis, might be considered subclinical. People experiencing it in middle age are, one presumes, liable to attribute it to getting older rather than to a real health problem. Given that middle-aged “invisibility” in youth-obsessed careers is, in truth, unjust and depressing, it seems likely that more than a few people would experience depression and fail to perceive it as a health issue. That’s one danger of depression that those who’ve never experienced it might not realize exists: when you’re depressed, you suffer from the delusion that (a) you’ve always been depressed, and (b) no other outlook or mood makes sense. Depression is delusional, and it is a genuine illness, but it’s also dangerously self-consistent.

Contrary to the stereotype, people with depression aren’t always unhappy. In fact, people with mild depression can be happy quite often. It’s just easier to make them unhappy. Things that don’t faze normal people, like traffic jams or gloomy weather or long queues at the grocery store, are more likely to bother them. For some, there’s a constant but low-grade gloom and tendency to avoid making decisions. Others might experience 23 hours and 50 minutes per day of normal mood and 10 minutes of intense, debilitating, sadness: the kind that would force them to pull over to the side of the road and cry. There isn’t a template and, just as a variety of disparate diseases (some viral, some bacterial, and some behavioral) were once called “fevers”, I feel like “depression” is a cluster of about 20 different diseases that we just don’t have the tools to separate. Some depressions come without external cause. Others are clearly induced by environmental stresses. Some depressions impair cognition and probably constitute a (temporary) 30-IQ-point loss. Others (more commonly seen in artists than in technology workers) seem to induce no intellectual impairment at all; the person is miserable, but as sharp as ever.

Corporate workers do become less sharp, on average, with age. You don’t see that effect, at least not involuntarily so, in most intellectually intense fields. A 45-year-old artist or author or chess master has his best work ahead of him. True entrepreneurs (not dipshits who raise VC based on connections) also seem to peak in their 50s and, for some, even later. Most leaders hit their prime around 60. However, it’s observable that something happens in Corporate America that makes people more bitter, more passive, and slower to act over time, and that it starts around 40. Perhaps it’s an inverse of survivor bias, with the more talented people escaping the corporate racket (by becoming consultants, or entrepreneurs) before middle age. I don’t think so, though. There are plenty of highly talented people in their forties and fifties who’ve been in private-sector programming for a long time and just seem out of gas. I don’t blame them for this. With better jobs, I think they’d recover their power surprisingly quickly. I think they have a situationally-induced case of mild depression that, while it may not be the life-threatening illness we tend to associate with major depression, takes the edge off their abilities. It doesn’t make them unemployable. It makes them slow and bitter but, unlike aging itself, it’s very easily reversible: change the context.

Most of these slowed-down, middle-aged, private-sector programmers wouldn’t qualify for major depressive disorder. They’re not suicidal, don’t have debilitating panic attacks, and can attribute their losses of ability (however incorrectly) to age. Rather, I think that most of them are mildly but chronically depressed. To an individual, this is more of a deflation than a disability; to society, the costs are enormous, just because such a large number of people are affected, and because it disproportionately affects the most experienced people at a time when, in a healthier economic environment, they’d be in their prime.

The tournament of idiots

No one comes out of university wanting to be a private-sector social climber. There’s no “Office Politics” major. People see themselves as poets, economists, mathematicians, or entrepreneurs. They want to make, build, and do things. To their chagrin, most college graduates find that between them and any real work, there’s at least a decade of political positioning, jockeying for permissions and status, and associated nonsense that’s necessary if one intends to navigate the artificial scarcity of the corporate world.

The truth is that most of the nation’s most prized and powerful institutions (private-sector companies) have lost all purpose for existing. Ideals and missions are for slogans, but the organization’s true purpose is to line the pockets of those ranking high within it. There’s also no role or use for real leadership. Corporate executives are the farthest one gets from true leaders. Most are entrenched rent-seekers. With extreme economic inequality and a culture that worships consumption, it should surprise no one that our “leadership” class is a set of self-dealing parasites at a level that hasn’t been seen in an advanced economy since pre-Revolution France.

Leadership and talent have nothing to do with getting to the top. It’s the same game of backstabbing and political positioning that has been played in kings’ courts for millennia. The difference, in the modern corporation, is that there’s a pretense of meritocracy. People, at least, have to pretend to be working and leading to advance further. The work that is most congruent with social advancement, however, isn’t the creative work that begets innovation. Instead, it’s success in superficial reliability. Before you get permission to be creative, you have to show that you can suffer, and once you’ve won the suffering contest, it’s neither necessary nor worth it to take creative risks. Companies, therefore, pick leaders by loading people with unnecessary busywork that often won’t go anywhere, and putting intense but ultimately counterproductive demands on them. They generate superficial reliability contests. One person will outlast the others, who’ll fall along the way due to unexpected health problems, family emergencies, and other varieties of attrition (for the lucky few, getting better jobs elsewhere). One of the more common failure modes by which people lose this tournament of idiots is mild depression: not enough to have them hospitalized, but enough to pull them out of contention.

The corporate worker’s depression, especially in midlife, isn’t an unexpected side effect of economic growth or displacement or some other agent that might allow Silicon Valley’s leadership to sweep it under the rug of “unintended consequences”. Rather, it’s a primary landscape feature of the senseless competition that organizations create for “leadership” (rent-seeking) positions when they’ve run out of reasons to exist. At that level of decay, there is no meaningful definition of “merit” because the organization itself has turned pointless, and the only sensible way to allocate highly-paid positions is to create a tournament of idiots, in which people psychologically abuse each other (often subtly, in the form of microaggressions) until only a few remain healthy enough to function.

Here we arrive at a word I’ve come to dread: corporate culture. Every corporation has a culture, and 99% of those are utterly abortive. Generally, the more that a company’s true believers talk about “our culture”, the more fucked-up the place actually is. See, culture is often ugly. Foot binding, infant genital mutilation (“female circumcision”), war, and animal torture are all aspects of human culture. Previous societies used supernatural appeal to defend inhumane practices, but modern corporations use “our culture” itself as a god. “Culture fit” is often cited to justify the otherwise inconsistent and, sometimes, unjustifiable. Why wasn’t the 55-year-old woman, a better coder than anyone else on the team, hired? “It wouldn’t look right.” Can’t say that! “A seasoned coder who isn’t rich would shatter the illusion that everyone good gets rich.” Less illegal, but far too honest. “She didn’t fit with the culture.” Bingo! Culture can always be used in this way, by an organization, because it’s a black box of blame, diffusing moral culpability across the group. Blaming an adverse decision on “the team” or “the culture” avoids individual risk for the blamer, while the culture itself can never be attacked as bad. Most people in most organizations actually know that the “leadership team” (career office politicians, also known as executives) of their firm is toxic and incompetent. When they aren’t around, the executives are attacked. But it’s rare that anyone ever attacks the culture, because “the culture” is everyone. To indict it is to insult the people. In this way, “the culture” is like an unassailable god.

Full circle

We’ve traveled through some dark territory. The tournament of idiots that organizations construct to select leadership roles, once they’ve ceased to have a real purpose, causes depression. The ubiquity of such office cultures has created, I argue, a silent epidemic of mild, midlife depression that has led venture capitalists (situated at its periphery, their wealth buying them some exit from the vortex of mediocrity in which they must still work, but do not live) and privileged young pseudo-entrepreneurs (terrified of what awaits them when their family connections cool and they must actually work) to conclude that a general cognitive mediocrity awaits in midlife, even though there is no evidence to support this belief, and plenty of evidence from outside of corporate purgatory to contradict it.

What does all of this say? Personally, I think that, to the extent that large groups of individuals and organizations can collectively “know” things, the contemporary corporate world devalues experience because it knows that the experience it provides is of low value. It refuses to eat its own dogfood, knowing that it’s poisoned.

For example, software is sufficiently technical and complex that great engineers are invariably experienced ones. The reverse isn’t true. Much corporate experience is of negative value, at least if one includes the emotional side effects that can lead to depression. Median-case private-sector technology work isn’t sufficiently valuable to overcome the disadvantages associated with age, which is another way of saying that the labor market considers average-case corporate experience to have negative value. I’m not sure that I disagree. Do I think it’s right to write people off because of their age? Absolutely not. Do I agree with the labor market’s assessment that most corporate work rots the brain? Well, the answer is mostly “yes”. The corporate world turns smart people, over the years, into stupid ones. If I’m right that the cause of this is midlife depression, there’s good news. Much of that “brain rot” is contextual and reversible.

How do we fix this?

Biologists and gerontologists seeking insight into longevity have studied the genetics and diet of long-living groups of people, such as the Sardinians and the people of Okinawa. Luckily for us, midlife cognitive decline isn’t a landscape feature of most technical or creative fields. (In fact, it’s probably not present in ours; it’s just perceived that way.) There are plenty of places to look for higher cognitive longevity, because few industries are as toxic as the contemporary software industry. When there is an R&D flavor to the work, and when people have basic autonomy, people tend to peak around 50, and sometimes later. Of course, there’s a lot of individual variation, and some people voluntarily slow down before that age, in order to attend to health, family, spiritual, or personal concerns. The key word there is voluntarily.

Modeling and professional athletics (in which there are physical reasons for decline) aside, a career in which people tend to peak early, or have an early peak forced upon them, is likely to be a toxic one that enervates them. Silicon Valley’s being a young man’s game (and the current incarnation of it, focused on VC-funded light tech, is exactly that) simply indicates that it’s so destructive to the players that only the hard-core psychopaths can survive in it for more than 10 years. It’s not a healthy place to spend a long period of time and develop an expertise. As discussed above, it will happily consume expertise but produces almost none of value (hence, its attraction to those with pre-existing specialties, despite failing to respect specialties once the employee is captive). This means, as we already see, that technical excellence will fall by the wayside, and positive-sum technological ambition will give way to the zero-sum personal ambitions of the major players.

We can’t fix the current system, in which the leading venture capitalists are striving for “exits” (greater fools). That economy has evolved from being a technology industry with some need for marketing into a marketing industry with some need (and that bit declining) for technology. We can’t bring it back, because its entrenched players are too comfortable with it being the way it is. While the current approach provides mediocre returns on investment, hence the underperformance of the VC asset class, the king-making and career-altering power that it affords the venture capitalist allows him to capture all kinds of benefits on the side, ranging from financial upside (“2 and 20”) to executive positions for talentless drinking buddies to, during brief bubbly episodes such as this one, “coolness”. They like it the way it is, and it won’t change. Rather than incrementally fix the current VC-funded Valley, we must replace it outright.

The first step, one might say, is to revive R&D within technology companies. That’s a step in the right direction, but it doesn’t go far enough. Technology should be R&D, full stop. To get there, we need to assert ourselves. Rather than answering to non-technical businessmen, we need to learn how to manage our own affairs. We need to begin valuing the experience on which most R&D progress actually relies, rather than shutting the seasoned and cynical out. And, as a first step in this direction, we need to stop selling each other out to nontechnical management for stupid reasons, including but especially “culture fit”.

Inverted placism: a possible future in which Silicon Valley’s a ghetto

I was having brunch with a couple of friends who are lawyers, and we were talking about desirable and undesirable places to live. Seattle (where I may be moving in early 2015) scored high on every list, and one of the attorneys said something to the effect of, “I’d love to live there, but it’s next to impossible to get a job there.” Getting a law job in Seattle is, apparently, ridiculously difficult. This surprised me, because law is even more pedigree-obsessed than VC-funded technology, and where there’s pedigree obsession, there’s placism. Placism, for law, seemed to favor New York and D.C. to the exclusion of all else. There were some attorneys making lots of money in entertainment law (or as divorce lawyers) in Los Angeles, but it wasn’t prestigious to be in a “secondary” market. That seems to be changing, with locations like Seattle and Austin– desirable places to live, no doubt, but not law hubs– becoming very selective, some more so than New York.

Ten years ago, in large-firm corporate law (“biglaw”), New York was the place where attorneys wanted to stay as long as they could. Even though the pay wasn’t substantially higher (when adjusted for cost of living, it was invariably lower), the prestige was strong and followed a person for life. The best outcome was to make partner in one’s New York firm; perhaps 1 in 10 was offered that brass ring. The next set, those who were clearly good but wouldn’t get partnerships, would move to firms in “secondary markets” and become partners out there. It was acceptable to move out to Austin or L.A. or Seattle in one’s mid-30s, if Manhattan partnership wasn’t in the cards, but few planned for it. Because law is so pedigree-obsessed, placism is pretty major, and the going assumption has long been that the best students from the top law schools will invariably end up in New York.

It seems to be changing. More attorneys are considering New York their backup choice, not wanting to put up with the long-hours culture and high rents. It’s no longer considered unusual for top talent to favor other locations, and some of those smaller markets are developing a reputation for being much more selective than New York, the old first choice.

Does anyone care to guess how this might apply to technology?

Silicon Valley isn’t stable

Balaji Srinivasan gave a talk at Y Combinator’s Startup School entitled “Silicon Valley’s Ultimate Exit”. In it, he decried the four traditional urban centers of the United States: New York, Boston, Los Angeles, and Washington, DC. He named that stretch “the Paper Belt”, a 21st-century analogue of the “Rust Belt”. See, all of those cities are apparently run by dinosaurs. Boston is the academic capital, but MOOCs are rendering in-person education obsolete. D.C. is apparently no longer relevant, under the theory that the decline of nation states (which will occur over the next 200 or so years) might as well be concluded to have already happened. New York? All that media stuff’s being replaced by the Internets. Los Angeles? Well, Youtube and iTunes and Netflix have already disrupticated Hollywood, which might as well be relegated to history’s dustbin as well (except for the fact that someone still has to make the content).

Silicon Valley’s arrogance is irritating and insulting. I’m not exactly lacking when it comes to intellectual ability, and on several occasions, I’ve interviewed for a position in the Valley (for the right job, I’ll work anywhere). On multiple occasions it has happened this way: I knock the code sample out of the park, on one occasion submitting what was considered one of the three best submissions. I nail the technical interview. I get the offer… and it’s a junior position because, whatever I accomplished up to that point, I didn’t do it in California. The effect of placism is very real in technology, and it’s strongest in the Valley.

I don’t see this elsewhere. Banks and hedge funds don’t care if you’re from a rural village in China. If you’re smart, they respect it. They have the intellectual firepower to recognize intelligence. What about the Valley? Surely, I’m not saying that the people in the Valley are dumber? Well, it’s not quite that. As individuals, I don’t think there’s a difference. There are A-level intellects everywhere, whether you’re in the middle of Nebraska or in the Valley or on Wall Street. The problem, instead, is that the Valley has a passive-aggressive consensus culture, which means you need to impress several people to get the green light. In New York, it’s typical for an influential person to say, “I like this guy, and those who don’t won’t have to deal with him, but I think he’s fucking brilliant”. In California, that doesn’t happen. This gives intellectual mediocrities (who can, likewise, be found in Valley startups and on Wall Street) a certain show-stopping power (“I don’t think he’s a team player”, “she’s not a culture fit”) that they don’t have, to the same extent, on the East Coast.

For traders and quants, pedigree isn’t all that important. It can get you in the door, but it ceases to matter after that. In the Valley, pedigree matters much more, because recognizing individual excellence challenges the “collaborative culture” and the “laid back” mentality that California is “supposed” to have; and if you can’t bring up a person’s individual firepower, you start defaulting to credentialism and prestige. Not all Stanford grads are geniuses (see: Lucas Duplan). On the East Coast, it’s socially acceptable to say, “He’s fucking stupid and I’m sure his parents bought his way in.” That’s a pretty clear “no hire”. In California, you can’t say that! Instead, in the California culture, you end up having to say something like, “Well, his problem-solving skills aren’t what I expected, and I think he’d be unhappy in a technical role, but I guess we can give him a product position and tap the Stanford network.” To me, that’s a “no hire”; to many managers, it sounds like a “yes”.

I had the reverse of this experience when (as part of my consulting practice) I was hiring an engineer for a startup. He was a 20-year-old college graduate, sharp as fuck and probably a better programmer than I was, but socially inept. I said, “he’s brilliant, but would need some mentoring on the social aspects of the work”. To be clear, I was being as honest as I could be, and my recommendation was to hire him. Unfortunately, my “he’ll need social mentoring” was taken as a passive-aggressive way of saying “no hire”, rather than a completely honest acknowledgement that a good candidate had (minor) imperfections. He wouldn’t have been hired anyway, nor would he have liked the place, so it didn’t matter in the end. Still, it shocked me that such a minor note against someone (I said “he’ll need social mentoring”, not “he’s an incorrigible fuckup”) could be blown so far out of proportion.

The point of this digression is that, because people in the Valley refuse to communicate meaningfully, and because of the consensus-driven culture, the rank-ordering of candidates that actually gets used is the one that has already been furnished. For younger candidates, that’s educational pedigree. For older ones, job titles and companies matter to some small degree, but location matters far more. Placism rules in the Valley.

There’s nothing stable about the Valley’s position, in my view. Compare it to the “Paper Belt” cities. Academic institutions have lifelong contracts (tenure) with professors and gigantic capital investments, so universities tend to stay put. (Universities that are too prolific with branch campuses, such as NYU with its Abu Dhabi campus, destroy whatever prestige they might otherwise have.) The seat of the U.S. government is, likewise, unlikely to leave D.C. except in the event of an unforeseen catastrophe. Hollywood’s geographical advantage (its proximity to a diverse array of terrain types) is still major, because the cost of traveling with a film production team is extremely high. New York? New York won’t lose finance (the exchanges are there) and, even if it did, it would still be New York.

That’s something that the Valley, with its arrogant placism, doesn’t get. Let’s say that New York’s financial industry takes a catastrophic dive. We see apartments once valued at $50 million selling for $4 million, and rents dropping to Midwestern levels. And then? Creative people will move in, rapidly, and restore life to the city. New York isn’t beholden to one industry. It will always be New Fucking York. Unless we see a recurrence of the 1970s-style general abandonment of cities by the American population (and, in my lifetime, we probably won’t), the worst-case scenario for it is that it becomes like Chicago: an also-ran city that is, in spite of its lack of a “paper belt” specialty, thriving and an excellent place to live.

New York can lose its status as the prestige center of biglaw. It could even lose Wall Street. (That would be a disaster for New York property owners, but the city itself would be resilient.) Silicon Valley, on the other hand, is fucked if the placism of venture-funded technology inverts. That just might happen, too.

Inversion of placism tends to happen when the young and creative decide that the advantages of living in the “prestigious” place are not worth the disadvantages. The rents are too high, the culture is too elitist, and upward mobility is too low. The progeny of well-connected families still end up in the prestigious place (New York biglaw, Valley technology), but the successes of the next generation head elsewhere. Sometimes they choose another location; other times, there’s a sense of diaspora for a while. The Valley could easily lose its singularity. It’s not a great place to live (it’s a strip mall) and it’s far too expensive for what it offers. In truth, everything about it is mediocre except, to some extent, the work; but 95 percent of the work is mediocre (who wants to work in operations at IUsedThisToilet.com?) and getting the other 5 percent requires an advanced degree from a top-5 CS department, or elite connections. A few good people have those and will be able to stomach the Valley, but most good people come from no-name schools (not because the no-name schools are better, but because most people come from no-name schools) and don’t know anyone important out there. In 2010, talented and naive 22-year-olds were willing to move out to the Valley and provide cheap, clueless, highly dedicated labor under the mistaken assumption that a year at a startup would have them personally introduced to Peter Thiel by the founders. Is that trickery going to work in 2016? I doubt it.

Starting about now, it’s going to become increasingly evident that the talent wants to be outside of Silicon Valley if the same quality of job is available elsewhere. In fact, being in Silicon Valley after 35 will mean one of two things: astronomical success, or dismal mediocrity, with no middle ground. Being in California, at that age, and not being part of the investor community (either as a VC, or as a founder able to raise money on a phone call) will be a mark of shame. If you’re good, you should be able to move to Seattle or Austin, no later than 30, and get the same quality of job. If you’re really good, you can get that kind of job in St. Louis or Nashville. Aside from the outlier successes ($20 million net worth, national reputation) who can make a Silicon Valley residence part of their personal brand, it’ll be the mediocrities who stick around the Valley, still trying to catch a break while ignoring the hints that have been dropped all around them.

By 2020, this will have more of a “diaspora” shape. There won’t be a new tech hub yet. You’ll see talent gravitating toward places like Seattle, Boulder, Portland, Chicago, Austin, Pittsburgh, and Minneapolis, with no clear winner. Millennials are, if not blindly optimistic, attracted to the idea of turning a second-rate city into a first-rate one. By the late 2020s, it will be clear whether (a) new hubs have emerged or (b) technology has become “post-placist”. I’m not going to try to opine on how that will play out. I don’t think anyone can predict it.

Cheap votes

What gives Silicon Valley its current grip on technology? The answer is a concept that seems to recur whenever aggregation mechanisms such as democratic elections and markets break down: cheap votes.

Electoral voting, statistically, can actually magnify the power of a small number of votes. If there are 101 voters and we model the other 100 votes as fair coin-flips, the power of the 101st vote isn’t a 1-in-101 chance of swaying the election; it’s about 1-in-13, because there’s roughly an 8 percent chance that the 100 votes split exactly 50-50. Likewise, the statistical power of a voting bloc increases as the square of its size, just as the variance of a sum of perfectly correlated, identically distributed variables grows as the square of the number of variables rather than linearly. What this means is that a small number of voters, acting as a bloc, can have immense power.
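For the skeptical, here’s a minimal sketch (in Python; the variable names and the specific bloc sizes are mine, added for illustration) that checks both claims above: the probability of an exact 50-50 split among 100 coin-flip voters, and the way a bloc’s contribution to the variance of the total scales with the square of its size.

    # Pivotal-vote check: with 100 other voters flipping fair coins, the
    # 101st vote decides the outcome exactly when those 100 split 50-50.
    from math import comb

    p_tie = comb(100, 50) / 2**100
    print(f"P(exact 50-50 split) = {p_tie:.4f}")   # ~0.0796
    print(f"i.e. about 1 in {1 / p_tie:.1f}")      # ~1 in 12.6

    # Bloc arithmetic: a bloc of k voters who always vote together adds
    # k**2 * sigma2 to the variance of the total, while k independent
    # voters add only k * sigma2 -- the bloc's weight grows as k squared.
    sigma2 = 0.25   # variance of a single fair-coin vote: p * (1 - p) with p = 0.5
    for k in (10, 100, 1000):
        print(f"k={k:5d}  independent: {k * sigma2:9.1f}   as a bloc: {k**2 * sigma2:12.1f}")

Running it gives a tie probability of about 0.0796, which is the “about 1-in-13” figure, and shows a bloc of 1,000 like-minded voters carrying 1,000 times the statistical weight of 1,000 independent ones.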

Another issue is that many voters don’t really give a damn. Low voter turnout is cited as a negative, but I think it’s a good thing; disinterested people shouldn’t vote, because all they’ll do is add noise. The ugly side effect of this is that societies generate a pool of cheap votes. Ethical reservations aside, there are plenty of people who care so little about electoral politics that (absent a secret ballot) they’d sell their vote for $100. How much is a vote worth? To the individual, less than $100. But to many entrenched interests, a bloc of 500,000 votes (enough to sway a national election) is worth far more than the $50 million it would cost to buy them.

When you allow vote-buying, power shifts to those who can bundle cheap votes together. That’s obviously a very bad thing for society. Such people tend, historically, to be deeply associated with society’s criminal elements, and corruption ensues. This is one of the major reasons why the secret ballot is so important. Anonymity and privacy in voting are sacred rights, for sure, but we also want to kill the secondary market for cheap votes. There’s no real harm in someone selling his vote to his grandma for $100, but if we allow vote-buying to take place, we give power to unelected, vicious people who exploit the statistics of elections to subvert democracy.

Markets are the voting systems of capital allocation and business formation, designed as a principled plutocracy rather than a democracy. Of course, just as in democracies, there are a lot of cheap votes to go around. Plenty of middle-class people want to park their savings “somewhere” and watch their numbers go up at a reasonable annual rate, but have no interest in dictating how the sausage is made. They don’t know what the best thing to do with their $500,000 life savings is and, to their credit, they admit as much. So they put that money in bonds or index funds and forget about it. Some of that money ends up in high-risk, high-yield (in theory) venture capital funds.

VCs are the cheap vote packagers of a certain 21st-century question: how do we build out the next stage of capitalism, which requires engagement and autonomy within the labor pool itself to a degree that disadvantages giant organizations? The era of large corporations is ending. It’s not like these companies will disappear overnight, or even in 50 years, but we’re seeing a return to small-scale, niche-player capitalism in which a few small companies manage to have outlier success and (if they want it) can become large ones. VC is the process of taking cheap votes (passive capital) and attempting to influence the formation processes of the nation’s most innovative (again, at least in theory) small businesses.

Abstractly, your typical doctor in St. Louis would rather have more small businesses in the Midwest (his children need jobs, and they may not want to move to Mountain View) than in California, and might prefer that his capital be deployed locally. But he has a full-time job and is smart enough to know that he’s not ready to manage that investment actively. So, he parks his money in an “investment vehicle” that redirects the funds to a bunch of careerists in California who care far more about the prestige of association with news-making businesses (hence the focus on gigantic exits) than about the success of their portfolios. His returns on the investment are mediocre and, on top of that, his locale is starved of passive capital, which has all been swept away into the bipolar vortex of Sand Hill Road.

Passive investors don’t care enough to pull their funding. In fact, it’s rarely individual investors whose capital ends up directly in venture capital. Because of protections (which may not be well-structured, but that’s another debate) that prevent middle-class individuals from investing directly in high-risk vehicles, it’s actually money from large pension funds and university endowments (adding another layer of indirection) that tends to end up in VC coffers. With all this indirection, it’s not surprising that passive investors tacitly accept the current arrangement, which congests Northern California while starving the rest of the country. But is this arrangement stable? I think not. I think that, while it takes time, people eventually wake up. When they do, San Francisco may still possess its urban charm, but the Valley itself is properly screwed.