Sorry, Quora, but you just did what guilty people do.

I’ve obviously been paying a lot of attention (and “lurking”, although my interest in posting there is pretty much gone for good) to Quora– the site that banned me because I jokingly challenged investor Paul Graham to a rap duel– and the mess that followed. I may be a controversial figure for my 2012 exposure of stack-ranking at Google, or for many of the lies I’ve exposed inside the startup game, or maybe just because some people hate static typing, but I was a model contributor to Quora by any definition. I don’t have to rally people. Quora’s users and employees are already pissed off on my behalf. If anything, I’m trying to hold them back.

What has disappointed me about Quora’s conduct is not the ban itself; the dominant theory, after all, is that it was subjected to undue pressure by investors and had no choice. What disappoints me is that this spot-on post by “James Crann” (a declared pseudonym) was yanked from the site this morning. I’m glad that I was able to catch it; it was gone before 9:00 am. The post was inoffensive and reasonable. It did not even assert that one explanation of “QuoraGate” was the correct one. There was no reason to remove it from the site, unless something needed to be covered up.

Forgive any errors, as I’ve had to hand-type this text out from a screenshot.

Why was Michael O. Church banned from Quora?

James Crann (ed. note: this was a declared pseudonym).

I have no idea, but it’s almost certainly not the official explanation.

One possibility is the “investor-level extortion” theory that Michael has put forward. See his blog post: What the September 4, 2015 Quora disaster (#QuoraGate) tells us about VC-funded tech’s future. Another is that someone at Quora felt that Michael was getting “too big” and had to be taken down. Many forums ban posters who seem to be “breaking away” with a powerful following, and Michael is one of the most-followed non-celebrity posters. Or it could have been some other private grudge. Marc Bodnick and Michael Church always seemed to respect each other, but they had very different political views. It does seem odd that a site would take such action against a popular user, and there are a number of possible answers. I’m not as quick as Michael is to jump to a specific one of them, because I saw the Y Combinator feud as entertainment more than as a threat to anything. Honestly, I thought it possible that the YCs were in on the joke and using it for free publicity.

Quora could be having strings pulled on it, or it could be covering up for an overzealous admin who just triggered a land mine. To me, stupidity is as feasible an explanation for this as investor extortion.

As I mentioned in a comment on Ryan Chew’s answer, there are several reasons why the “sock puppet” explanation is almost certainly untrue. For one thing, Michael Church has 8,590 followers and anything he posts gets at least 10 legitimate upvotes, sometimes hundreds. I have a hard time believing that Michael Church controls scores of sock puppets, all with rich histories and many tied to real people; his success on Quora is due entirely to the quality of his answers.

Second, sock puppeting wouldn’t be very effective on Quora because it has a PageRank-like system wherein the socks’ votes would (rightfully) be assigned a low level of credibility. (ed. note: I believe this to be correct, though I have no inside knowledge). For answer placement, who is doing the voting matters more than the raw number of up- or downvotes (and that makes sense).
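(ed. note: to make the mechanism concrete, here is a minimal sketch, in Python, of what credibility-weighted voting might look like. It is entirely hypothetical; the function, weights, and account names are invented for illustration, and I have no inside knowledge of Quora’s actual system.)

```python
# Hypothetical sketch of credibility-weighted voting, in the spirit of
# PageRank. Nothing here reflects Quora's actual system; the weights
# and account names are invented for illustration.

def weighted_score(votes, credibility):
    """Sum votes weighted by each voter's credibility.

    votes: dict of voter id -> +1 (upvote) or -1 (downvote)
    credibility: dict of voter id -> weight in [0, 1]
    """
    return sum(direction * credibility.get(voter, 0.0)
               for voter, direction in votes.items())

# Twenty fresh sock puppets upvote; three established accounts downvote.
socks = {f"sock{i}": +1 for i in range(20)}
real = {"alice": -1, "bob": -1, "carol": -1}

credibility = {v: 0.01 for v in socks}       # new, unproven accounts
credibility.update({v: 0.9 for v in real})   # long-standing accounts

# Raw count says +17; the weighted score says -2.5. The socks barely register.
print(weighted_score({**socks, **real}, credibility))
```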

I also don’t buy that Michael [has used] sock puppets to troll. He doesn’t seem to need the cover of anonymity to voice controversial opinions. He wants what he is saying tied to his real name. And while I’ve only met him a few times (ed. note: I don’t know who this person is, and I’m not going to share my guesses), he doesn’t seem to have a lot of free time, and I can’t imagine that he has the time or interest necessary to run a sock puppet army.

Finally, it’s inconceivable that Quora would violate user privacy just because it suspects that someone who doesn’t need sock puppets has an alternate account. Quora may suspect that Michael has more than one account (because it’s the Internet, and most people do) but they’re obviously using the sock-puppetry charge as a “We know what’s best for you” evasive answer. Something else is going on. It could be a VC putting pressure on Quora, or it could be an incompetent admin.

Disclaimer: I am not using my real name, because I fear a ban on my real-name account for speaking the truth about this.

Here are the photos that establish that this post (now not merely “collapsed” but actually covered up) did exist.

[Screenshots of the post, captured September 7, 2015 at 5:46 and 5:47 am.]

That post was removed around 7:00 am Pacific time. If there is a silver lining to all this, it’s that some poor bastard had the job of doing a corporation’s cover-up work at 7 am on a holiday. I would bet that he is a lot more upset with Quora than I am.

To my supporters at Quora

A dominant topic among employees of Quora, over this weekend, has been whether they’re going to stay with the company in light of its decision on September 4 to ban my account. Quora’s management seems to be concerned about the threat of attrition in the wake of this. So, I feel compelled to comment.

I’ve been in technology for 10 years, so I’m going to say a few things. First, if you’re seriously considering leaving your job on my behalf, you probably shouldn’t move based on that alone. A ban against an unobjectionable and popular user is a problem, and bad-faith use of administrative privileges is a big deal, but you have to look out for your own career. Don’t do anything rash. This isn’t as big a deal as it seems right now.

It goes without saying that you shouldn’t leave one job without lining up the next one. If you’re presently employed, your offers seem to come in about 20 percent better than what you get if you’re unemployed. There’s also the job-hopper stigma to worry about: one short job is acceptable, but two or three in a row start to hurt you. You probably also shouldn’t tell your boss, if you choose to leave Quora, that you’re doing it because of “9/4”. He’s not going to want to hear it. When a company faces a sudden morale crisis of this magnitude, the last thing that a manager wants is a “you too?”

Personally, I appreciate the support. However, I’ve also “been there” and the tech world can be extremely vindictive. No matter how incensed you are about Quora’s decision to ban me, leave on good terms if you choose to leave, and strongly consider not leaving Quora if it’s just over this. Just trust me on this one, okay?

Now, opinions in general seem to be split down the middle on why my account was banned. Half of the people I talk to seem to believe the investor-level extortion hypothesis, and the other half seem to think it’s something more mundane, like an admin settling a score or just a bureaucratic mistake. One person implicated a specific Quora administrator who intends to apply to Y Combinator in the next cycle. (No one believes the “official” explanation involving sock puppets.)

If it turns out that the decision to ban me came from inside Quora, then the company itself is ethically suspect, and in that case, by all means, leave it. It’s too early to make such a claim, however.

On the other hand, if Quora was subjected to investor-level threats, then I implore you to understand, at the very least, that Quora had no choice in the matter. Investors’ threats are a big deal, and it’s far, far better for one user to be banned from a website than for 115 people’s jobs to be put at risk. It could be that “9/4” was the less horrible of two options. We still have to wait and see, but don’t do anything rash on my behalf. I’m doing just fine.

Thank you all for your continuing support.

What the September 4, 2015 Quora disaster (#QuoraGate) tells us about VC-funded tech’s future

For those who don’t know the back story: under some kind of investor-level pressure, I was banned from Quora on September 4, 2015. No valid reason was given. I was a “Top Writer”, and Quora frequently placed my answers in venues including Fortune, Time, BBC, Forbes, and The Huffington Post. Some people (including me) believe that the ban is connected to the feud that Dan Gackle and Paul Buchheit started with me last month, involving direct pressure either from Y Combinator (an investor in Quora) or people purporting to act on Y Combinator’s behalf.

I’m really sick of tech beefs, and I assume that my readers are, too. I’m not going to talk about the feud. These things are just so fucking stupid, I can’t stand to be in them anymore.

I’m going to talk about what Quora’s recent action means for everyone else, because this isn’t about just me.

Of course, I could still be wrong in my model of what happened. If Quora reverses the ban and attributes it to an embarrassing bureaucratic or technical error, I’ll accept that resolution, and consider it likely that they are being truthful in the assertion that it was a glitch. It isn’t too late to undo the harm. I don’t fault Quora, at least not until I know more, for its role in this.

So, everything I’m going to write assumes that my sources of information are correct. We’ll know which explanation is right by Tuesday or Wednesday– I don’t expect anything to change over the weekend– based on whether or not Quora un-bans my account. Given the morale problem that my banning has triggered internally at Quora– I’ve gotten to know several Quorans personally– it seems reasonable that Quora will reverse the ban in any scenario except for an existential threat to the company in the form of investor-level extortion.

I find no fault in Quora, at least not now. When faced with the decision between banning one user and having to fire 115 people because of a fundraising problem, the choice is obviously the former. So I have nothing negative to say about Quora. This is about the venture capitalists who chose to extort. Why did they sink so low?

I’ve been in technology for long enough to know that there are some sleazy people in it, but I’ve never seen sleaze hit the product level. Many technology companies stack rank their employees, and investors screwing founders is as old a game as empire, but tech companies don’t often use intentional, bad-faith product failures to punish users. Google and Facebook, for example, don’t allow their employees to stalk their ex-significant-others, and will fire someone who tries to do so. That was the world we were used to: one where products weren’t deliberately made to fail in order to punish users, because companies and investors knew that users’ faith in the product was more valuable than settling some silly feud.

Here’s what September 4, 2015 means (or, at least, seems to mean). It means that VCs will threaten hundreds of jobs to settle a minor score in a feud that the other side (to be honest) wasn’t even taking entirely seriously. (I found it hard to believe that “Don’t Be Evil” Paul Buchheit would condescend to feud with me, so to me it was more a fun joke that I wanted to see how long I could keep going than a serious beef.) It means that, due to the importance of social proof and “signaling” in Silicon Valley, a single influential investor can pressure a company into bad-faith uses of its product. If you piss anyone off, no matter how silly your slight is, you can’t trust anything that the Valley has built. If you challenge an overfed three-digit millionaire to a stupid rap battle, you might face a defamatory user ban on an unrelated Q&A forum.

What happened last Friday, itself, doesn’t matter. I’ll find other uses of my time. Far more interesting is what this says about Silicon Valley and the fundamental brittleness of everything that it has built. If investors are willing to forcibly compromise the ethical integrity of a billion-dollar company over a goofy feud, then what happens if the stakes are higher?

Brittany Smith

It’s November 8, 2021, five months to the day after 25-year-old Brittany Smith, a former associate at a venture capital firm, was awarded $6.3 million at the end of a lawsuit against her billionaire ex-boss, Tom Smyrr, for sexual harassment. It was an obvious, open-and-shut case, even with Mr. Smyrr’s expensive legal team. In February 2020, on a trip to New York– a one-party-consent state for audio recording of conversation– he threatened to fire her, and ruin her reputation within Silicon Valley, if she didn’t perform a sexual favor. She said “no”– and her phone, recording everything, preserved the proof. She lost her job, and couldn’t get another one, because Tom Smyrr had slandered her throughout the industry, so severely that even her ex-boss’s enemies wouldn’t hire her. The past summer’s victory in the courtroom was the first step toward clearing her name, and it seems to have worked. Today she had her first interview! It went well, she thinks.

It’s 6:45 pm. It’s dark, rainy, and cold outside. Brittany’s exhausted from the interview and wants to get back to her hotel as quickly as she can. Not knowing where to find a cab in this strange city, she uses the new ride-hailing app, Vyper. Three minutes later, the driver arrives: an unsmiling man, with dark shades. She asks herself: is it even legal to wear sunglasses and drive at night? Eh, whatever, she thinks. She just wants to get to the hotel and go to sleep.

It’s 7:09. Brittany notices that she’s crossing a bridge, and it’s one she hasn’t seen before. Where am I? She checks her location on Loqate Maps. For twenty minutes, the driver’s been going the wrong way! Her heart starts pounding. This isn’t right. “Excuse me, sir,” she says. “This isn’t the way to my hotel.” Must be an honest mistake, she thinks. Or is he going to rob me? she wonders. “I’m sorry,” the man says. “It’s that damn Loqate bug. It’s sending me on a bad route. I’ll get you out of here.”

The Loqate bug? That was fixed 15 months ago! Why is this man running outdated software? No, she says to herself, don’t judge. Not everyone keeps current with software updates. “I just started driving for Vyper this week,” he says. She calms down a little. He’s an older man, with a gentle and intelligent aura about him. Ten minutes of conversation with the man leaves her feeling relaxed. Okay, not a robber, not a creep, just a new driver. She’s miffed about a 15-minute ride taking half an hour (and counting), so the driver assures her that the ride won’t be charged.

It’s 7:26. Brittany looks out the window. She’s in a deserted, industrial, unattractive part of town, with dilapidated warehouses on both sides. This guy is terrible at route planning. Whatever, she thinks, he said he’s not charging me so there’s no meter to worry about.

She hears a “ping!” on her phone; her boyfriend shared a news article. She reads it. She’s too tired to find it funny and quickly finds herself (almost by force of habit) thumbing through her backlog of TechPress posts. July… not much worth reading. August… nothing. September… same-old stuff. October…

October 3, 2021: Billionaire Tom Smyrr invests $320 million in Vyper at $1.1 billion valuation.

I’m in a Vyper, Brittany realizes. Unease rises. That pressure behind her head, that throbbing in her neck, that sudden full-body sweating… are all new sensations to her. The driver is a hit man! Her eyes hit the speedometer. 18 mph. Seatbelt off, she pulls the door handle. Locked. Okay, that could be a safety measure. Most cars auto-lock at 10 mph, she reminds herself. Maybe this is what a panic attack feels like. She’s shaking, crying, banging on the door. Or maybe he’s going to kill me! Full-on panic. She screams, “Stop the car! Stop the fucking car! Now!”

(To be continued?)

We are in new, weird, scary territory. I don’t like it.

Nonzero shit-fan interaction coefficient

I’ve been involved in a few high-profile tech feuds, and the not-surprising conclusion that I’m coming to is that they’re a waste of time.

Dan Gackle and Paul Buchheit, both associated with Y Combinator, chose to start a beef with me last month. Dan G., moderator of Hacker News, banned my account from Hacker News while intentionally taking one of my comments way out of context, then deleting that comment in a bad-faith attempt to represent his out-of-context interpretation as “official”. Paul Buchheit continued the feud by lobbing defamatory accusations at me on Quora. I don’t like to start fights, but I’ll end them on my own terms.

Under pressure that seems to have come directly from Y Combinator, Quora banned my account shortly before 1:00 pm. I was a model contributor, a three-year Top Writer, never given any warning about conduct on the site nor even the slightest indication that I was in anything other than good standing. The ban came out of the blue. Jeff Meyerson and I worked together on the February 1, 2015 Quoracast, and Alecia Li Morgan worked with me on publishing several of my answers; both were great people to collaborate with.

To be clear, I don’t harbor any ill will toward Quora, and I certainly don’t hold any toward its employees, many of whom I’ve worked with in the past, and who seem to be exemplary citizens. Quora participated in a Y Combinator round (and probably regrets it now, since the company seems to have lost critical autonomy) and is thus, to some degree, connected to a rat’s nest of bad intentions that it can’t possibly control. I don’t fault Quora or anyone there for it. I assume good faith in the company itself and its people, and believe the incoming information indicating that the extortion laid upon them by external forces was so extreme as to leave them no other option.

The story coming to me (from multiple sources, as of this morning) is:

  • a source inside Quora has told me that some Quora employees are aware of the ban and disagree (some strongly) with the decision. The consensus among Quorans (even including management) who know the situation is that I shouldn’t have been banned. I thank them for their continuing support. There are many Quora employees of whom I think very highly, and I don’t hold this against them in any way.
  • all evidence indicates that Quora was pressured to ban me by people associated with (and possibly part of) Y Combinator, retaliating because an anonymous contributor to Quora leaked the fact that Paul Graham’s animus toward me is largely based on this December 2013 blog post. Y Combinator seems to be acting under the assumption that the “leaker” is me, which can’t possibly be valid, because whoever did leak that fact clearly knows Paul Graham personally, and I don’t.

There is, I must note, a small chance that I am wrong. I don’t expect much to change over the Labor Day weekend but if, by Tuesday or Wednesday, my Quora account is un-banned, we’ll be able to chalk this up to an embarrassing technical or bureaucratic mistake and forget about the whole thing. If my account remains banned, the only sensible explanation will be the one that confirms the rumors of external pressure placed on Quora: an abuse of power (and, quite probably, outright extortion) by some entity, presumably at the investor level, with power over Quora. This won’t prove that Y Combinator itself is responsible, but it will strongly support that claim, especially in light of this enormously stupid tech beef they’ve decided to have with me.

As for Paul Buchheit and Dan Gackle, you guys need to man the fuck up, apologize for your atrocious behavior, and let us end this stupid feud. It’s obnoxious, and it’s a waste of my time. I’m sick of your shit. Thanks in advance.

Engineers as clerks? How programmers failed to get the status they deserve.

One of the most interesting essays I’ve read recently is “I Would Prefer Not To”: The Origins of the White Collar Worker, which describes the institution and existence of the 19th-century office clerk. Bartleby, the passively resistant clerk whose “I would prefer not to” leads to ruin and failure– in contrast to the prototypical clerk who performs the work eagerly, invested in the belief in graduation to a better job– has become one of the most famous American literary characters of all time. It wasn’t a great life to be a clerk: work spaces were cramped and hot (as now in technology, but without air conditioning) and the work was extremely dull. However, for many wishing to enter the American bourgeoisie, it provided social mobility due to its proximity to wealthier people: the emerging class of industrialists, bureaucrats and traders who’d come to be known as businesspeople.

I’m going to clip two paragraphs from this excellent essay, in order to explain certain historical forces that still dominate American business. Emphasis is mine:

Tailer’s worries over his position were common in a clerking world where the distance between junior clerk and partner was seen as both enormous and easily surmountable. No other profession was so status conscious and anxiety-driven and yet also so straightforward seeming. No matter how dull their work might be at any given moment, there was little doubt that clerks saw themselves, and were seen by their bosses, as apprentice managers—businessmen in training. Few people thought they would languish as clerks, in the way that it became proverbial to imagine people spending their lives in a cubicle, or how for decades becoming a secretary was the highest position a woman office worker could aspire to. Part of the prestige of clerking lay in the vagueness of the job description. The nature of the dry goods business meant that clerks often spent time in the stores where their goods were sold, acting as salesmen and having to be personable to customers. In other words, the duties of clerks were vast enough to allow them to be tasked with anything, which meant that so much of their work depended upon so many unmeasurable factors besides a clerk’s productivity: his attitude, good manners, even his suitability as a future husband for the boss’s daughter. A good clerk besieged his bosses’ emotions the way he did customers—flattering them to the point of obsequiousness, until the bosses were assured that they had a good man on their hands. These personal abilities were part of the skill set of a clerk—something we know today as office politics—and though they couldn’t be notched on a résumé, they were the secret of the supposed illustriousness of business life. The work might dehumanize you, but whatever part of you that remained human was your key to moving up in the job.

This was also the reason clerks felt superior to manual laborers. Young men entering a factory job had no illusions about running the factory, which is why a few of them began to join the nascent American labor movement. But clerks were different from people who “worked with their hands,” and they knew it—a consciousness that Tailer registers when he declares the “awkward and clumsy work” of a porter unworthy of him. Young men who wanted to get into business knew they had to clerk, and they also knew that clerks could and often did eventually become partners in their firms. “Time alone will suffice to place him in the same situation as those his illustrious predecessors now hold!” Tailer wrote in one entry, loftily referring to himself in the third person. But though patience was the signal virtue of clerking—to write on, as Bartleby did, “silently, palely, mechanically”—impatience was its most signal marker. From the shop floor, the top of the Pittsburgh steel mill looked far off indeed. But in the six-person office, it was right next to you, in the demystified person of the fat and mutton-chopped figure asleep at the rolltop desk, ringed with faint wisps of cigar smoke.

Clerking degraded over time, as companies became larger and business became more oligarchical: the probability of advancement into the sort of role that one had actually joined the company for declined, because the number of people willing to be clerks rose faster than the number of wealthy business people the economy could support. Clerking, of the traditional variety, also became skippable for people of upper-class descent, thus losing its prestige. In the beginning of the 19th century, virtually everyone who wanted to become a businessman (and, in that time, it was mostly men) went through a clerking phase, but in the late Gilded Age, the scions of robber barons didn’t have to clerk in order to become full-fledged businessmen, which led the more ambitious and intelligent of the era to determine, increasingly, that clerking was dishonorable because, clearly, some people were “good enough” to skip that phase. (Doesn’t this sound like the “everyone who’s worth a damn starts a company (trigger warning: link contains syphilitic idiocy from Paul Graham)” attitude in Silicon Valley today? Yes, much that seems new is the old, repeated.) Then business schools were formed, the best of these able to skip someone past the clerking phase into The Business proper, typically when that person was under 30. Clerking lost its former prestige, and with declining odds of progressing into the business ranks– the outcome that had made the passage worth it– the institution seemed to die out.

Did it, really, though? Obviously, the job title of “clerk” doesn’t exist in any way that has the 19th century meaning, but I would say that clerking, culturally speaking, is still alive. Look at a typical Fortune 500 corporation. Most of the people in the business are (like a 19th-century clerk) in mostly-evaluative positions designed to lead into The Business, but they’re not called clerks, because there’s more specialization to that phase of the business career than there was in 1853. They’re accountants or executive assistants or marketing staff, with more prestige than “non-career” workers but strictly less prestige than executives (unlike physicians or professors, who live on a completely different scale). The assumption (not to call it valid) in each of these departments is that an X who’s any good is going to become a manager of X’s within 4-5 years and an executive (i.e. paid and treated like an actual business person) within 8 to 10 years. You might start as a process engineer or a salesperson, but if you don’t become a part of The Business proper within a certain amount of time, you’ve been marked as “non-career”. You weren’t invited to join the company; you just worked there.

Clerking evolved from an apprenticeship into a tournament at which most would fail, and the post-1870-ish specialization of the clerkship phase meant that like must compete against like, as is true even today, for the limited supply of positions in The Business proper. Accountants competed with other accountants for the limited leadership positions, and marketing analysts went against other marketing analysts, and so on for each field. There was one group that realized, very quickly, that they were getting screwed by this: lawyers. Law is, of all the specialties that a business requires, perhaps the most cognitively demanding one that existed, at least in substantial numbers, in the late 19th century. It tended (like computer programming, today) to draw in the hyper-cerebral types insistent on tackling the big intellectual challenges. Or, to put it more bluntly, lawyers were a very smart pool, and it was tough for a lawyer to distinguish himself with a level of intelligence that would have been dominant, had time and opportunity brought him into a different pool, but might be only average among attorneys.

Lawyers realized that they were getting burned by “like competes against like” in the tournament for the opportunity to become actual partners in the business (i.e. executives and owners). The good news, for them as a tribe, was that they knew how to control and work the law, that being their job. They professionalized. They formed the American Bar Association (in the U.S.) and made it standard that, in corporate bureaucracies, lawyers report only into other attorneys. In-house corporate attorneys report to other attorneys, up to the General Counsel, who reports to the board (not the CEO). Law firms cannot be owned by non-lawyers. Accreditation also ensures (at least, in theory) a basic credibility for all members of the profession: one who loses a client is still a lawyer. The core concept here is that, while an attorney is a provider of services, attorneys are not supposed to be business subordinates (clerks). They have the right and obligation to follow ethical mandates that supersede managerial authority. This requires that the profession back them if they lose their jobs; the value of a limited-supply accreditation is that a person who is fired for exercising that (mandatory) ethical independence remains marketable and (in theory, at least) can resume his or her career, more or less, uninterrupted. Without that type of assurance, that level of ethical and professional independence is quite obviously impossible.

It may have made sense for most people in the corporate sphere to accept the (increasingly small) chance of ascent into The Business as enough of a reward to offset the negatives of being business subordinates. Perhaps only 5 to 10 percent of people in most departments were “Business quality”– I don’t know the answer to that– but lawyers knew that a higher percentage of them were, and that they wouldn’t get a fair shake in a like-competes-against-like system. In essence, they seceded from the existing, inadequate career track and formed a labor cartel (which is what professionalization is) in order to have a somewhat better one. They began to gain independent power and prestige, and managed to create their own firms in which terms for attorneys were (at least, for a time) better than what the standard corporate track offered them.

Clerks in Silicon Valley

The engineer-driven maker’s culture of Silicon Valley is dead. VC-funded startups are, in most ways, large corporations. They’re young businesses but they aren’t small, and their notion of agility is misleading. They’re more like missiles, designed to hit one target or explode “harmlessly” (to investors), than they are like fighter jets. Since VCs encourage founders to take only one kind of risk (two sources of risk is considered too many) and developing a single new product is supposed to be that source of risk, the result is that most of these companies have very traditional organizational structures. “Flat” organization is more often a signifier of an undocumented and unstable hierarchy than of hierarchy’s constitutional nonexistence. It often doesn’t mean open allocation so much as an environment of undefined leadership in which half the people think they are the de facto leaders, and in which you have to watch your back for randos trying to manage you. The result is that most VC-funded startups feel very much like Fortune 500 companies, except with shitty open-plan offices and different perks (better breakfast, worse health benefits).

The clerkship model has lived on in most of business. Accountants aspire to be CFOs, HR people aspire to be VPs of HR or COOs, and it’s a reasonable assumption that everyone in almost every department who’s any good is planning to become part of The Business proper (not necessarily in that company, with some going to other firms or starting their own) some day. The result is that the darker side of the clerkship model– total subordination to The Business– is tolerable. In-house accountants and marketing analysts (unlike software engineers) don’t object to the idea that they should be subordinate to business people, because they expect to be business people within a decade. A ladder (if not the internal ladder, an external ladder elsewhere) will be extended to the smart ones, and the not-smart ones aren’t thinking far enough ahead to object.

That same assumption, while held about software engineers by executives, doesn’t actually hold up. It’s not reasonable to assume that every smart programmer will be invited to join The Business after ten years, because there aren’t enough spots. Moreover, as with law, a programmer barely knows the field at four years of experience. If everyone who was any good at software became a manager after four years, and an executive after eight, then there’d be no one left in the field to write high-quality software. (This might explain why there is so little high-quality software out there.) The world of computing actually needs genuinely talented and smart people not to move into management after five years, because someone has to write code that isn’t horrendous.

Executives don’t know what to make of this. What is wrong with these people, that most of them don’t want to become executives themselves? Do we have some bizarre constitutional antipathy toward the concept of making money? (Of course not.) The truth of the matter is that engineers don’t not want to become executives. We like money and prestige and status as much as anyone else. We just know the odds, and we won’t work as hard as someone less talented will for a corporate lottery ticket. Like competes against like, and we’re in the smartest and toughest pool by far in most organizations. In most fields of business, an IQ of (say) 130 would make one a pre-eminent protégé from the outset, and a 140 would stand out enough to have executives (assuming they could recognize intelligence at that level; some can and some can’t) tripping over each other to be that person’s mentor. In software, a 130 might still be somewhat above average, but it’s not special, because we’re just a more competitive pool. If you walked into the Goldman Sachs analyst program with a 130+ IQ and not-horrible social skills, you’d stand out enough that in your first week, your MD would be asking you whether you wanted your work assignments over the next year to lead to direct promotion or Harvard Business School or buy-side placement, and plan your next 12 months accordingly. (You’d still have to deal with punishing hours; no one escapes that.) In programming? 130 is enough to handle almost all of the work, but it doesn’t make you a stand-out.
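The pool effect here is just arithmetic. As a rough illustration, here is a minimal sketch assuming the standard IQ scale (mean 100, SD 15); the “selected pool” distribution is entirely invented and doesn’t describe any real population of programmers or bankers:

```python
# Rough illustration only. Assumes the standard IQ scale (mean 100,
# SD 15); the "selected pool" distribution is entirely invented.
from statistics import NormalDist

general = NormalDist(mu=100, sigma=15)
pool = NormalDist(mu=120, sigma=12)  # hypothetical self-selected field

for iq in (130, 140):
    print(f"IQ {iq}: below it fall {general.cdf(iq):.1%} of the general "
          f"population, but only {pool.cdf(iq):.1%} of the hypothetical pool")
# IQ 130: below it fall 97.7% of the general population, but only 79.8% of the hypothetical pool
# IQ 140: below it fall 99.6% of the general population, but only 95.2% of the hypothetical pool
```

With these made-up numbers, a 130 sits at the 98th percentile of the general population but around the 80th percentile of the pool: somewhat above average, not a stand-out.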

Programmers, to tell the truth, are a bad group to land with, professionally speaking, for two reasons. At raw intelligence, we’re the highest sub-discipline of business that there is (excluding, perhaps, computer hardware engineers and research scientists such as biochemists in drug companies). We’re smart, which means that people who would be pre-eminent protégés and direct promotes anywhere else are just average to average-plus among us. Even in the 140s and 150s and up, we’re reminded daily of flaws in our logic by this nasty-tempered subordinate called the computer, which “would prefer not to” correct our spelling errors (and that’s actually a good thing). On the other hand, in social skills, we’re one of the worst. We don’t look out for each other or protect the group, and we don’t have the organizational skills to operate collectively or even guard ourselves against divide-and-conquer tactics (e.g. stack ranking and Scrum) from The Business. In peer-based performance reviews, actual business people are wise enough not to sell each other out without a reason, and give each other glowing reviews. We give “honest” reviews, because we’re a bunch of social imbeciles who don’t realize that a collective willingness to sell each other to management while getting nothing in return is devastating to us (all of us, even the best performers) as a group. So, when you’re a programmer, the skills that are competitive in the field (i.e. it can hurt you to have superior colleagues) are amped to 11, while the skills that are cooperative (i.e. having superior colleagues is to your benefit, because you back each other) are thin on the ground. It’s our lack of organizational ability and collective self-respect that keeps our status low. Some think that, in my writings on programming in the business world, I’m railing against “evil businessmen”; I’m not. As people, they aren’t any more evil or greedy than anyone else. Our failure to achieve the status we deserve, as technologists, is on us. If we don’t demand higher status, we won’t get it. Business people aren’t evil, but they didn’t get where they are by being generous, either.

Business people do note our disinterest, relative to other professional specialties, in working on the projects that the business values most. This is an artifact of our accurate appraisal of our odds of rising to a level where we actually care about a specific company’s performance. Let’s say that 5 percent of white-collar people actually get into “Exec” or “The Business” after 8 years. That’s a 95 percent chance of wasting eight years of one’s life doing grunt work on a promise that was never delivered. The potential rewards are considerable, and we (as programmers and technologists) like money as much as anyone else, but we know that the odds are poor. That small chance of being promoted into “The Business”, which we see as a bureaucratic machine that mostly prevents people from getting work done, isn’t enough to motivate us. So, we’ll favor the project that enriches our future career prospects (and, if we recognize a ceiling where we are, our ambitions become external) over the one that benefits The Business.
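As a back-of-envelope sketch of that wager: all payoff values below are arbitrary illustration units, and only the 5-percent figure comes from the hypothetical in the paragraph above.

```python
# Back-of-envelope expected value of the clerk track. All payoff values
# are arbitrary illustration units; only the 5-percent figure comes from
# the hypothetical above.

P_ASCEND = 0.05        # chance of reaching "The Business" after ~8 years
PAYOFF_ASCEND = 10.0   # career value if you make it
PAYOFF_GRUNT = 1.0     # value of 8 years of grunt work if you don't

clerk_track = P_ASCEND * PAYOFF_ASCEND + (1 - P_ASCEND) * PAYOFF_GRUNT
skill_track = 3.0      # hypothetical value of 8 years building portable skills

print(f"clerk track: {clerk_track:.2f}  vs  skill track: {skill_track:.2f}")
# clerk track: 1.45  vs  skill track: 3.00 -- the lottery loses unless the
# ascent payoff is enormous or the odds are far better than 5 percent.
```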

Programmers have another trait that confuses executives, which is that we don’t see highly detailed work as dishonorable grunt work that one wishes to graduate out of, as soon as possible. In fact, relative to executives, there’s a complete inversion in the relation between detail-orientation and prestige. Everywhere else in business, work that is hazily defined and evaluated subjectively (i.e. it’s good work if people like you, regardless of whether it’s right) is the most prestigious, because there is the least risk in it. Executives only have to worry about being liked by other executives; workers have to be liked and get the work done right, which makes the latter position riskier and less prestigious because the presumption is that anyone with social skills and drive will get into something less exposed to fluctuations in one’s own performance (and to random human error). Programmers, on the other hand, have created their own bizarre culture where intellectually demanding and detailed work is the most prestigious. “Low level” programming sounds terrible to an executive, and “back-end” sounds like “back office” to management types, but most of the best programmers gravitate toward that kind of work, and away from the business-facing “front end”, because we find requests like “make the button a different shade of blue” to be demeaning and intellectually fruitless, whereas we find the highly detailed and cerebrally taxing challenges of “low level” computing to be much more fulfilling. Business people realize that companies are extremely complex and that making “the big decisions” requires having an army of trusted people who can digest the complexity; to them, leadership involves stepping away from the details. Programmers, on the other hand, want to zero in on the precise, careful stuff that bores most executives. One might think that this orthogonality of interests could create a symbiotic pairing of equals between businessmen and engineers, but it rarely happens that way, because the former don’t want to see the detail-seeking weirdos in the latter category as their social equals. The executives have the power to start with, and they keep it, and as a result the high-power minds in most organizations are also the most disengaged ones.

As the business sees us, we’re still clerks, and that’s a raw deal, because the number of leadership positions is small while the number of us who are intellectually capable of ascent into The Business is much higher (like, over 80 percent as opposed to the less than 10 percent who’ll actually get there) than in any other sub-field. This is exactly the problem that attorneys (also a high-IQ sort) faced: the clerking game, with like competing against like, hurt them, because they’d always have a surplus of strong people who couldn’t be given (and who would not have wanted, were there other high-power career options) management roles. They realized (as we ought to) that they were too valuable and powerful to accept the “5-10 percent of you will be selected, after an evaluative period lasting several years, for ascent into The Business; the rest of you will be viewed as leftovers and excreted over time” deal that everyone else got.

Unlike lawyers, we haven’t succeeded in creating the labor cartel that would be necessary if we wanted businesses to pay us what we’re worth in cash instead of empty promises. We have used the complete inability of those who pay us to evaluate our work to create an “art for art’s sake” culture wherein credibility among skilled engineers matters more than traditional corporate credibility. That has made the job more fun, but it hasn’t increased our pay or relative status. Also, our “art for art’s sake” culture has given us the rather negative reputation of being averse to working for The Business. That’s not even accurate! We don’t like to work for The Business as subordinates. We know the odds on that clerk game, and they aren’t good. If The Business were willing to meet us as equals, we could work together, and the orthogonality of our affinities (our attraction to detailed and difficult work, their attraction to subjective and holistic work) could be mutually beneficial; but they’re not willing to do so.

The clerk system also doesn’t work for engineers because of the massive talent inversion. Just as the lowest officer outranks the highest enlisted man, the lowest executive outranks the highest non-executive in most companies. In other words, the top programmers are still lower than the lowest executives, including those who ascended along far less competitive, non-engineering ladders. For example, at Google it is genuinely difficult to become a Director-equivalent Principal Engineer, because there are only a handful of those, whereas it’s hilariously easy (i.e., unless you fuck up severely, you’ll get it inside of a few years) to reach the Director level on the management track. It doesn’t make sense. Noting the comical talent inversion that comes with the concept of the programmer as a business subordinate, we have a hard time respecting the official order that the company tries to put out as the consensus on the value of each person. We know that it’s a thousand times harder to become an executive-equivalent “individual contributor” engineer than to become an executive, so treating us like plebeians with aggressive relative down-titling is going to leave us cold.

Unfortunately, we haven’t got the organizational skills to come up with anything appreciably different from the archaic clerking system that originally justified total subordination to The Business. It’s what we work under. There are plenty of good software engineers who don’t move up into the executive ranks (and, for all I know, that could also be true in other departments) but business executives assume that there aren’t, and consequently, “engineer” means “leftovers”. It means low autonomy (Scrum!) and equity slices well below one-tenth of what an equivalent business person would earn. We’re wise enough to that culture to apply proper cynicism in Fortune 500 companies. The work that is beneficial to our career objectives (which may or may not involve climbing that particular company’s ladder) we do well– and if we can commit the work to open source and gain external credibility, we’ll do it very well– and the rest of it we “prefer not to” do. We can often get away with that, because such an incoherent clamor of requirements comes at us that it’s impossible to do everything, and because it’s basically impossible for someone who isn’t one of us to evaluate our work for how difficult it is or how long it should take, except on some inaccurate emotional basis that, if actually enforced, will just result in good people getting fired and morale going up in smoke. “Product managers” and the like can yell at us, but they really have no idea what we’re up to. This isn’t ideal for us, nor for our employers, and it leads many companies into a culture of prevailing mediocrity (at least, in engineering) as relations between engineering and The Business decline. The point, though, is that we have a lot of latent power that we have no idea how to use. We haven’t figured out how to assert ourselves and get some sort of social equality. Or, perhaps, we prefer not to.

Ambitious software engineers don’t like this arrangement. We don’t want to give middling efforts to behemoth companies that couldn’t give a shit about us. This has traditionally led us in the direction of entrepreneurship, and companies have had to create special positions to retain engineering talent. Corporate programmers are viewed as “the leftovers” not because all the good ones are plucked into The Business, but because (at least, in theory) the ones with any talent are supposed to start companies, become independent consultants, or move into pure research or architecture roles that distance them from the ugly parts of corporate software engineering, like “make the thingy thing work this slightly different way” requests and pager duty. Contrary to stereotype, there are some excellent software engineers at Fortune 500 companies, but almost all of them find a way to a “research engineer” track before the age of 35, because churning through tickets from The Business is not a job that anyone wants (unless that person has a delusional belief that such work will lead to rapid ascent into The Business; but, honestly, your odds are better– still very low, but better– if you directly email the CEO and explicitly ask to be his protégé). Most large companies allow mainstream engineering to turn into a ghetto while putting all of the technology organization’s surplus smart people (i.e. talented people who can’t or don’t want to become executives) into an R&D group. The problem with this approach is that, while R&D relies on mainstream engineering for implementation, a growing resentment between the small, protected R&D group and the gritty, underappreciated, Scrum-ticketed “mainstream eng” leads to diminishing clout for the former. The R&D engineers are highly paid and given nicer titles, but they aren’t listened to because, as far as the embittered “left behind” programmers in mainstream engineering are concerned, they don’t do any of “the real work”. The end result is that most of these R&D engineers end up spending time on “fun projects” that never go into production, and are eventually cycled out of the company.

What this tells us is that Fortune 500 companies can, at least in some cases, recognize top software talent and its value, contrary to stereotype. They don’t necessarily get it right all of the time, at an individual level, and there will always be sharp people who remain stuck in mainstream engineering; but they do realize the need to have some A-level talent in-house, and they have the insight to know that if “work” is fending off a bukkake of Scrum tickets and user stories, A-level people will leave. That’s what an “innovation lab” or a COE (“center of excellence”) is for: to protect top talent. For the individual engineer, it’s unfortunately not terribly stable. There’s the perennial threat of a research cutback, in tough times, meaning that one is punted into mainstream engineering where the Scrumlords lurk. Usually, this happens when money is tight and management roles are being doled out (in lieu of compensation) to retain the decent programmers in mainstream engineering, which means that the “former R&D” engineers don’t land in management positions (all of those being taken by talented people in mainstream eng. that the company needs desperately to retain) or even on “green field”/new-code projects, but at the taint bottom, being asked “to help out” on legacy maintenance. Of course, if you want to turn a smart engineer into an ineffective idiot, forcing him to maintain idiotic legacy code (but without power, because the power must be doled out in lieu of compensation to key people already in mainstream engineering) is a very effective way of doing that.

Large companies don’t have a stable plan when it comes to top engineering talent. Labs and research divisions get cut so often that “research engineer” isn’t always the best long-term career path. It works if you live in that otherwise toxic cesspool called “The Bay Area”, because there are so many tech companies there, but it’s an erratic life for anyone else, because research engineering positions are less common and it can require a geographic move to get one. The “software architect” track is more stable, but can be dangerously overconnected; because the work of the architects affects so many people, there are far too many meetings and horrible lines-and-boxes drawings, and this forces the architect to delegate even the enjoyable parts of the job.

Corporate bureaucracies struggle with outliers in general, and intellectual outliers are a special breed of difficult, and intellectual outliers who aren’t eligible for rapid promotion into the executive ranks (because there aren’t enough spots, or because they prefer to write code and don’t want to be executives, or because they aren’t “a cultural fit” for the boardroom) are pretty much intractable. So… for the past fifteen years, a going assumption has been that such “intractable” high-talent people should, as if it were just that easy, start companies and become founders. So, how has that played out? Poorly. Why? Because the VCs have proven themselves to be smart at a game that very few people understand.

The venture-funded ecosystem in Silicon Valley is the first postmodern corporate organization. It chooses not to be, legally and formally, one company; instead, it’s a federation of about twenty marquee venture capital firms and the few hundred technology corporations that live and die and are bought (i.e. the founders get paid performance bonuses arranged by their VCs and people who owe favors to the VCs and work in large companies, and the startups are assimilated into those large companies) at their whim. More feudal than traditionally bureaucratic, “Silicon Valley” doesn’t have a clear president, CEO, or leader. It’s a fleet of a few hundred well-connected investors who all know each other and make decisions as a group, like an executive suite, but who work for nominally competing firms. Its main contribution to the business world is the notion of the disposable company. The core innovation of the VC-funded iteration of Silicon Valley has nothing to do with technology itself, but with the understanding that “companies” are just pieces of paper, that can be thrown away at convenience.

Cutting a division in a corporation is hard, because the company wants to retain some of the talent within that division, but that makes the controversy over the decision persist. If you cut the self-driving car project and make an AI researcher work on “user stories” and answer to a Scrumlord, you have a pissed-off, very intelligent (and, therefore, probably quite articulate) person under your roof who will make of himself a constant reminder that things used to be better. On the other hand, the clean cut (that is, firing the whole division, cutting generous severance checks, and moving on) is seen as too brutal by the masses and too expensive by HR to be justifiable. The disposable company is the solution to this problem. In a single large company, cutting a division leaves the rest of the company to question your judgment, while attuned people in other departments start to wonder what fate has in store for them. By contrast, an executive suite (VCs) running a fleet of disposable companies can just stop funding one of them, and it, because of the massive burn rate that it needed to take on to meet their demands, runs out of money and dies.

The VC-funded dynamic also allows for title inflation. Middling product managers, put in charge of their own disposable companies, can be called “CEO”, while the actual executives have the away-from-the-action-seeming title of “investor”. This allows the people with actual power and status to build up extremes of power distance that seem innocuous. In a large company, executives who deliberately ruined a middle manager’s professional reputation would be accused of harassment and bullying, sued, and possibly terminated for the publicity risk brought on the company. In the VC-funded world, a founder who runs afoul of investors is blacklisted for it, but without consequences for investors. The cynic in me suspects that the appeal of the “acqui-hire” system is that it allows a bunch of “good ol’ boys” to circumvent HR policies: you can’t not-hire someone over a protected status, but you can not-fund her.

More importantly from an engineer’s perspective, the dishonest presentation of the career structure of the VC-funded world enables a brash, young male quixotry that investors believe (not for good reasons) is the key to technical innovation. The myth is, “you could become a founder and march to your own beat“. The reality is that “founder” is just a middle management title in a job that very occasionally delivers a large performance bonus. The cleverness behind all of this is that it manages to reframe what business is, in such a way that engineers are left with a severe distaste for it. First of all, VC-funded companies have a hilariously high rate of failure: possibly 80 to 90 percent. This is presented as normal business risk, but it’s not; the actual 5-year survival rate of new businesses is around 50 to 60 percent (which isn’t very different from the 5-year survival rate of any new job, these days; I’d love to work in a position where there were even odds that it’d be worth it to keep coming into work 5 years later) and many of those “failures” performed acceptably but didn’t offset the opportunity cost for the proprietor. The VCs want founders and peasant engineers to believe that it’s the nature of business to implode in fiery wreckage, because that belief enables them to take massive risks with other people’s careers. Worse yet, VC-funded companies have a severe correlation risk, as anyone who was in technology in 2001 can attest. The rate of total business failure (and, thus, job loss, often without severance, because the money literally isn’t there) is low during comfortable times, but it spikes. When it does, the peasant engineer loses a job at the same time as many other people are losing theirs. Second of all, in order to make the founder job look “too difficult” for the typical engineering peasant, the fundraising process has been made into a six-month, soul-raping ordeal. No one would ever tolerate, for a regular middle management position, a six-month interview process in which breaches of ethics and privacy (such as back-channel reference checking) are considered normal. It’s the dressing-up of the position as being something more– an “entrepreneur” rather than the head of a disposable company, responsible for continually managing up into the investor class– that makes it salable.

This entire system obscures lines of sight, and it solidifies power for the entrenched. The best way to hold power is to convince those who don’t have it that they don’t want it, and that’s what the VCs have done. They’ve made the middle-management job– being a “founder”– so intolerable that it appeals only to the egotistical, convincing the peasants that they don’t want to rise but should accept their state of subordination. What’s more, this ruse hides the “effort thermocline” (the point at which jobs become less accountable and easier with increasing compensation and social position) by placing it not within a company but between firms: the founders live at that painful top-of-the-bottom point just below the effort thermocline, and investors get the easier life, above it. The line of sight, from an engineer’s point of view, is that you have to change jobs twice to get into the executive suite: first, you become a founder and a hustler and a champion fundraiser with little time for intellectual or technical enrichment; second, you become an “investor”, which, again, is a completely different and not-exciting-sounding job. For the programmer, the visible (again, we’re talking about lines of sight rather than actualities) path to social status and power is so fraught with peril and malice and career-wrecking personal risk (since investors can blacklist insubordinate founders, to a degree that would be enforceably illegal in any other setting) that the direct path seems not worth taking.

Yet Silicon Valley is driven by engineers who genuinely believe that they’ll become part of The Business! Otherwise, they wouldn’t throw down the 90-hour weeks. If it seems like I’m being inconsistent here, that’s not the case. I’m describing an inconsistent attitude. See, software engineers in the VC-funded world are smart enough to know that the average-case outcomes are undesirable and that the direct path to power requires selling one’s soul. Where they are misled is in being brought to believe in indirect paths to power that don’t actually exist. The engineer works 90-hour weeks because his company “is different” and because the founders promised him “introductions” to investors that will supposedly enable him to bypass the hell that plebes are put through when they try to raise money. For an analogy, most Americans rate “politicians” quite low, and yet they tend to think highly of their own politicians (which is why the same people keep getting elected). As they tend to see it: “Congress” is awful; their Senators and Representatives, however, are good guys who fight the system for them. There’s a similar dynamic in the young, white/Asian, upper-middle-class male quixotry that powers Silicon Valley. These software engineers are cynical and smart enough to realize that most “corporate executives” are worthless parasites, but they rate their own executives highly. Like the degenerate misogynist who puts a woman on a pedestal as soon as she smiles at him, they keep their own executives/founders on their good sides because those people treat them with basic, superficial decency (while negotiating them into employment contracts with 3-year non-solicits and 0.05% equity in a post-A company). Not a subtle bunch, software engineers tend not to realize that actual corporate sociopaths aren’t like the flamingly obvious “cartoon asshole” bosses in the movies.

There’s more that I could say about this, but I’m at 6.4 kilowords, and this essay has gotten long enough already. In essence, Silicon Valley has managed to maneuver a certain set of inexperienced but talented software programmers into a permanent clerk status, without them realizing what’s going on. With lines of sight obscured by disposable companies and social distractions (“the cool kids” and “30 under 30” lists) and various other machinations, software engineers have been led into accepting an arrangement (corporate clerkship, requiring total subordination but offsetting it with a small chance of selection into The Business proper) that they rejected the last time it was presented to them. Like everything else that happened in California in its golden age, Silicon Valley has been commoditized and made into a brand, and that brand has been leveraged brilliantly to make a powerful set of people (specifically, talented software engineers) ignore their own interests. Talented (if politically naive) young people, mostly software engineers, who wouldn’t be caught dead in the typical corporate arrangement (the clerkship system, which is still the management model for large companies’ technology organizations) will gladly throw down 90-hour work weeks in exchange for 0.01% of someone else’s company (“but the founders promised me investor contact, and I know that they’ll deliver because the CEO is so nice to me!”). In reality, if they’re going to work that hard, they should figure out how to organize around their own interests, and win.

Java is Magic: the Gathering (or Poker) and Haskell is Go (the game)

It may be apocryphal, but there’s a parable in the Go community (throughout this essay, “Go” means the ancient board game, never Google’s programming language) in which a strong player boasts about his victory over a well-known professional, considered one of the best in the world. He says, “Last month, I finally beat him– by two points!” His conversation partner, also a Go player, is unimpressed. She says, “I’ve also played him, and I beat him by one point.” Both acknowledge that her accomplishment is superior. The best victory is a victory with control, and control to a margin of one point is the best.

Poker, on the other hand, is a game in which ending a night $1 up is not worthy of mention, unless the stakes are measured in pennies. The noise in the game is much greater. The goal in Poker is to win a lot of money, not to come out slightly ahead. Go values an artful, subtle victory in which a decision made fifty moves back suffices to bring the one-point advantage that delivers the game. Poker encourages obliterating the opponents. Go is a philosophical debate where one side wins but both learn from the conversation. Poker is a game where the winner fairly, ethically, and legally picks the loser’s pocket.

Better yet, I could invoke Magic: the Gathering, which is an even better example of this difference in what kinds of victories are valued. Magic is a duel in which there are an enormous number of ways to humiliate your opponent: “burn decks” that enable you to do 20 points of damage (typically, a fatal sum) in one turn, “weenie decks” that overrun him with annoying creatures that prick him to death, land and hand destruction decks that deprive him of resources, and counterspell decks that put everything the opponent does at risk of failure. There are even “decking decks” that kill the opponent slowly by removing his cards from the game. (Magic has a rarely triggered losing condition: a player who is unable to draw a card, because his active deck or “library” has been exhausted, loses.) If you’re familiar with Magic, then think of Magic throughout this essay; otherwise, just understand that (like Poker) it’s a very competitive game that usually ends with one side getting obliterated.

If it sounds like I’m arguing that Go is good or civilized and that Magic and Poker are barbaric or bad, that’s not my intention; I don’t believe that comparison makes sense. The fun of brutal games is that they humiliate the loser in a way that is (usually) fundamentally harmless. The winner gets to be boastful and flashy; the loser will probably forget about it, and will certainly live to play again. Go is subtle and abstract and, to the uninitiated, impenetrable. Poker and Magic are direct and clear. Losing a large pot on a kicker, or having one’s 9/9 creature sent to an early grave by a 2-mana Terror spell, hurts in a way that even a non-player, unfamiliar with the details of the rules, can observe. People play different games for different reasons, and I certainly don’t consider myself qualified to call one set of reasons superior to any other.

Software

Ok, so let’s talk about programming. Object-oriented programming is much like Magic: there are countless optional rules and modifications available, many of them contradictory. There are far too many strategies for me to list them here and do them justice. Magic, just because its game world is so large, has inevitable failures of composition: cards that are balanced on their own but so broken in combination that one or the other must be banned by Magic‘s central authority. Almost no one alive knows “the whole game” when it comes to Magic, because there are about twenty thousand different cards, many introducing new rules that didn’t exist when the original game came out, and some pertaining to rules that exist only on cards printed in a specific window of time. People know local regions of the game space, and play in those, but the whole game is too massive to comprehend. Access to game resources is also limited: not everyone can have a Black Lotus, just as not everyone can convince the boss to pay them to learn and use a coveted, highly-compensated, but niche technology.

In Magic, people often play to obliterate their opponents. That’s not because they’re uncivilized or mean. The game is so random and uncontrollable (as opposed to Go, with perfect information) that choosing to play artfully rather than ruthlessly is volunteering to lose.

Likewise, object-oriented programmers often try to obliterate the problem being solved. They aren’t looking for the minimal sufficient solution. It’s not enough to write a 40-line script that does the job. You need to pull out the big guns: design patterns that only five people alive actually understand (and which four of those five have since decided were huge mistakes). You need to have Factories generating Factories, like Serpent Generators popping out 1/1 Serpent counters. You need to use Big Products like Spring and Hibernate and Mahout and Hadoop and Lucene regardless of whether they’re really necessary to solve the problem at hand. You need to smash code reviews with “-1; does not use synchronized” on code that will probably never be multi-threaded, and you need to build up object hierarchies that would make Lord Kefka, the God of Magic from Final Fantasy VI, proud. If your object universe isn’t “fun”, with ZombieMaster classes whose constructors immediately increment fields in all Zombies on the heap and whose finalizers decrement those same fields, then you’re not doing OOP– at least, as it is practiced in the business world– right, because you’re not using any of the “fun” stuff.

Object-oriented programmers play for the 60-point Fireballs and for complex machinery. The goal isn’t to solve the problem. It’s to annihilate it and leave a smoldering crater where that problem once stood, and to do it with such impressive complexity that future programmers can only stand in awe of the titanic brain that built such a powerful war machine, one that has become incomprehensible even to its creator.

Of course, all of this that I am slinging at OOP is directed at a culture. Is object-oriented programming innately that way? Not necessarily. In fact, I think it’s pretty clear that Alan Kay’s vision (“IQ is a lead weight”) was the opposite of that. His point was that, when complexity occurs, it should be encapsulated behind a simpler interface. That idea, now uncontroversial and realized within functional programming, was right on. Files and sockets, for example, are complex beasts in implementation, but manageable specifically because they tend to conform to simpler and well-understood interfaces: you can read without having to care whether you’re manipulating a robot arm in physical space (i.e. reading a hard drive) or pulling data out of RAM (a memory file) or taking user input from the “file” called standard input. Alan Kay was not encouraging the proliferation of complex objects; he was simply looking to build a toolset that enables people to work with complexity when it occurs. One should note that the major object-oriented victories (concepts like “file” and “server”) are no longer considered “object-oriented programming”, just as “alternative medicine” that works is recognized as just “medicine”.
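To make Kay’s point concrete, here is a minimal Haskell sketch of that kind of encapsulation: one function, written against the standard Handle interface, that neither knows nor cares what device sits behind it. (The file name example.txt is a made-up stand-in; everything else is standard System.IO.)

```haskell
import System.IO

-- Count lines from whatever the Handle is connected to: a disk file,
-- a pipe, a socket, or the "file" called standard input.
countLines :: Handle -> IO Int
countLines h = go 0
  where
    go n = do
      eof <- hIsEOF h
      if eof then return n
             else hGetLine h >> go (n + 1)

main :: IO ()
main = do
  -- Same function, two very different "devices" behind the interface.
  n1 <- countLines stdin
  putStrLn ("stdin: " ++ show n1)
  withFile "example.txt" ReadMode $ \h -> do   -- hypothetical file
    n2 <- countLines h
    putStrLn ("file: " ++ show n2)
```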

In opposition to the object-oriented enterprise fad that’s losing air, but not fast enough, we have functional programming. I’m talking about Haskell and Clojure and ML and Erlang. In them, there are two recommended design patterns: noun (immutable data) and verb (referentially transparent function), and because functions are first-class citizens, one is a subcase of the other. Generally, these languages are simple (so simple that Java programmers presume that you can’t do “real programming” in them) and light on syntax. State is not eliminated, but the language expects a person to actively manage what state exists, and to eliminate it when it’s unnecessary or counterproductive. Erlang’s main form of state is communication between actors; it’s shared-nothing concurrency. Haskell uses a simple type class (Monad) to tackle head-on the question of “What is a computational effect?”, one that most languages ignore. (The applications of Monad can be hard to tackle at first, but the type class itself is dead-boring simple, with two core methods, one of which is almost always trivial.) While the implementations may be very complex (the Haskell compiler is not a trivial piece of work) the computational model is simple, by design and intention. Lisp and Haskell are languages where, as with Go or Chess, it’s relatively easy to teach the rules while it takes time to master good play.
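Since I just called that type class dead-boring simple, here it is in essence– a sketch with primed names so it doesn’t clash with the Prelude; GHC’s real definition differs in minor details (notably the Applicative superclass):

```haskell
-- The essence of Monad. Two core methods; return' is almost always trivial.
class Monad' m where
  return' :: a -> m a                   -- wrap a plain value
  bind'   :: m a -> (a -> m b) -> m b   -- written (>>=) in the real class

-- Maybe as an effect ("this computation may fail"): return' is one word,
-- and bind' just short-circuits on Nothing.
instance Monad' Maybe where
  return' = Just
  bind' Nothing  _ = Nothing
  bind' (Just x) f = f x
```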

While the typical enterprise Java programmer looks for an excuse to obliterate a simple ETL process with a MetaModelFactory, the typical functional programmer tries to solve almost everything with “pure” (referentially transparent) functions. Of course, the actual world is stateful, and most of us are, contrary to the stereotype of functional programmers, quite mature about acknowledging that. Working with this “radioactive” stuff called “state” is our job. We’re not trying to shy away from it. We’re trying to do it right, and that means keeping it simple. The $200/hour Java engineer says, “Hey, I bet I could use this problem as an excuse to build a MetaModelVisitorSingletonFactory, bring my inheritance-hierarchy record into the double digits, and use Hibernate and Hadoop, because if I get those on my CV, I can double my rate.” The Haskell engineer thinks hard for a couple of hours, probably gets some shit during that time for not seeming to write a lot of code, but just keeps thinking… and then realizes, “that’s just a Functor”, fmaps out a solution, and the problem is solved.
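For the uninitiated, the “that’s just a Functor” moment looks something like this. (The normalize function is a made-up stand-in for whatever the business rule actually is.)

```haskell
import Data.Char (toLower)

-- A made-up business rule: canonicalize a string field.
normalize :: String -> String
normalize = map toLower

-- One polymorphic line, no visitor hierarchy: works on any Functor
-- holding Strings, whether a list, an optional field, or an IO action.
normalizeAll :: Functor f => f String -> f String
normalizeAll = fmap normalize

demo :: ([String], Maybe String)
demo = ( normalizeAll ["Alice", "BOB"]   -- ["alice","bob"]
       , normalizeAll (Just "CaRoL") )   -- Just "carol"
```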

While not every programmer lives up to this expectation at all times, functional programming values simple, elegant solutions that build on a small number of core concepts that, once learned, are useful forever. We don’t need pre-initializers and post-initializers; tuples and records and functions are enough for us. When we need big guns, we’ve got ’em. We have six-parameter hyper-general type classes (like Proxy in the pipes library) and Rank-N types and Template Haskell and even the potential for metaprogramming. (Haskell requires the program designer to decide how much dynamism to include, but a Haskell program can be as dynamic as is needed. A working Lisp can be implemented in a few hundred lines of Haskell.) We even have Data.Dynamic in case one absolutely needs dynamic typing within Haskell. If we want what object-oriented programming has to offer, we’ll build it using existential types (as is done to make Haskell’s exception types hierarchical, with SomeException encompassing all of them) and Template Haskell and be off to the races. We rarely do, because we almost never need it, and because using so much raw power usually suggests a bad design– a design that won’t compose well or, in more blunt terms, won’t play well with others.
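As a hedged sketch of the existential-types trick– this mirrors the shape of SomeException in Control.Exception, though it is not the actual GHC source, and the names here are invented for illustration:

```haskell
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)

class (Typeable e, Show e) => MyException e

-- The existential box: it can hold *any* MyException, hiding the
-- concrete type, which is what makes the "hierarchy" open.
data SomeMyException = forall e. MyException e => SomeMyException e

instance Show SomeMyException where
  show (SomeMyException e) = show e

-- Anyone can add a new case later without touching existing code.
data DiskFull = DiskFull deriving Show
instance MyException DiskFull

-- Downcasting, in the style of fromException: succeeds only when the
-- hidden type matches the one the caller asks for.
fromBox :: MyException e => SomeMyException -> Maybe e
fromBox (SomeMyException e) = cast e
```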

The difference between games and programming

Every game has rules, but games as a category have none. There’s no single principle that every game must share. There are pure-luck games and pure-skill games; there are competitive games and cooperative games (where players win or lose as a group). There are games without well-defined objective functions. There are even games where some players have objective functions and some don’t, as with 2 Rooms and a Boom’s “Drunk” role. Thus, there isn’t an element of general gameplay that I can single out and say, “That’s bad.” Sometimes, compositional failures and broken strategies are a feature, not a bug. I might not like Magic’s “mana screw” (most people consider it a design flaw) but I could also argue that the intermittency of deck performance is part of what makes the game addictive (see: variable-schedule reinforcement, and slot machines) and that it’s conceivable that the game wouldn’t have achieved a community of such size had it not featured that trait.

Programming, on the other hand, isn’t a game. Programs exist to do a job, and if they can’t do that job, or if they do that job marginally well but can never be improved because the code is incomprehensible, that’s failure.

In fact, we generally want industrial programs to be as un-game-like as possible. (That is not to say that software architects and game designers can’t learn from each other. They can, but that’s another topic for another time.) The things that make games fun make programs infuriating. Let me give an example: NP-complete problems are those where checking a solution can be done efficiently, but finding a solution, even at moderate problem size, is (probably) intractable. Yet NP-complete (and harder) problems often make great games! Generalized Go is PSPACE-hard (and EXPTIME-complete under some rulesets), meaning that it’s (probably) harder than NP-complete, so exhaustive search will most likely never be an option. Microsoft’s addictive puzzle game Minesweeper is NP-complete, and Tetris and Sudoku are likewise computationally hard. (Chess is harder to analyze in this way, because computational hardness is defined in terms of asymptotic behavior and there’s no incontrovertibly obvious way to generalize the game beyond the standard-issue 8-by-8 board.) It doesn’t have to be this way: human brains are very different from computers, so there’s no solid reason why a game’s NP-completeness (or lack thereof) would bear on its enjoyability to humans, and yet the puzzle games that are most successful tend to be the ones that computers find difficult. Games are about challenges like computational difficulty, imperfect information (network partitions), timing-related quirks (“race conditions” in computing), unpredictable agents, unexpected strategic interactions and global effects (e.g. compositional failures), and various other things that make a human social process fun, but often make a computing system dangerously unreliable. We generally want games to have traits that would be intolerable imperfections in any other field of life. Soccer is a sport in which one’s simulated fate depends on the interactions between two teams and a tiny ball. Fantasy role-playing games are about fighting creatures like dragons and beholders and liches that would cause us to shit our pants if we encountered them on the subway, because, in real life, even a Level 1 idiot with a 6-inch knife is terrifying.
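To ground the “easy to check, hard to find” distinction, consider subset sum, a classic NP-complete problem; the Haskell below is a brute-force sketch, not a serious solver. Verifying a proposed certificate takes one pass over it; finding one means, as far as anyone knows, something like trying all 2^n subsets.

```haskell
import Data.List (subsequences)

-- Checking a certificate is linear: sum the subset, compare to the target.
verify :: Int -> [Int] -> Bool
verify target subset = sum subset == target

-- Finding a certificate by brute force: exponential in the list's length.
solve :: Int -> [Int] -> Maybe [Int]
solve target xs =
  case filter (verify target) (subsequences xs) of
    (s:_) -> Just s
    []    -> Nothing
```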

When we encounter code, we often want to reason about it. While this sounds like a subjective goal, it actually has a formal definition. The bad news: reasoning about code is mathematically impossible. Or, more accurately, to answer even the simplest questions (“does it terminate?” “is this function’s value ever zero?”) about an arbitrary program in any Turing-complete language (as all modern programming languages are) is impossible in general; this is the halting problem, and Rice’s theorem extends it to every non-trivial behavioral property. We can write programs for which it is impossible to know what they do, except empirically, and that’s deeply unsatisfying. If we run a program that fails to produce a useful result for 100 years, we still cannot necessarily differentiate between a program that produces a useful result after 100.1 years and one that loops forever.
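This is not just an artifact of contrived, pathological programs. The tiny function below (the Collatz iteration) is a few lines long, and yet nobody has proved that it terminates for every positive input; that open question is the Collatz conjecture. It gives a feel for how quickly “does it terminate?” gets hard.

```haskell
-- Conjectured to terminate for every n >= 1; verified empirically for
-- enormous ranges of inputs, but unproven in general.
collatzSteps :: Integer -> Integer
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)
```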

If the bad news is that reasoning about arbitrary code is impossible, the good news is that humans don’t write arbitrary code. We write code to solve specific problems. Out of the entire space of programs that could run on a modern machine, far less than 0.000000001 percent (with many more zeros) are useful to us. Most syntactically correct programs generate random garbage, and the tiny subspace of “all code” that we actually use is much better behaved. We can create simple functions and effects that we understand quite well, compose them according to rules that are likewise well-behaved, and achieve very high reliability in our systems. That’s not how most code is actually written, especially not in the business world, which is dominated by emotional deadlines and hasty programming. It is, however, possible to write specific code that isn’t hard to reason about. Reasoning about the code we actually care about is potentially possible. Reasoning about randomly-generated, syntactically correct programs is a fool’s errand and mathematically impossible to achieve in all cases, but we’re not likely to need to do that if we’re reading small programs written with a clear intention.
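A trivial sketch of what that well-behaved subspace looks like in practice (the functions are made-up examples): because each piece is total and referentially transparent, properties of the pipeline follow from properties of the parts, which is exactly the kind of reasoning that “arbitrary code” denies us.

```haskell
import Data.Char (toUpper)

-- Strip leading and trailing spaces.
trim :: String -> String
trim = f . f where f = reverse . dropWhile (== ' ')

-- Uppercase every character; preserves length.
shout :: String -> String
shout = map toUpper

-- trim never lengthens its input and shout preserves length, so we know,
-- without running anything, that cleanLabel never lengthens its input.
cleanLabel :: String -> String
cleanLabel = shout . trim
```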

So, we have bad news (reasoning about arbitrary code is formally impossible) and good news (we don’t write “arbitrary code”) but there’s more bad news. As software evolves, and more programmers get involved, each carrying different biases about how to do things, code has a tendency to creep toward “arbitrary code”. The typical 40-year-old legacy program doesn’t have a single author, but tens or hundreds of them. This is why Edsger Dijkstra declared the goto statement to be harmful. There’s nothing mathematically or philosophically wrong with it; in fact, computers use it in machine code all the time, because that’s what branching is, from a CPU’s perspective. The issue is the dangerous compositional behavior of goto– you can drop program control into a place where it doesn’t belong and get nonsensical behavior– combined with the tendency of long-lived, multi-developer programs using goto to “spaghettify” and reach a state of incomprehensibility, reminiscent of a randomly-generated (or, worse yet, “arbitrary” under the mathematician’s definition) program. When Dijkstra came out against goto, his doing so was as controversial as anything that I might say about the enterprise version of object-oriented programming today– and yet, he’s now considered to have been right.

Comefrom 10

Where is this whole argument leading? There’s a concept in game design of “dryness”. A game that is dry is abstract, subtle, and generally avoids or limits the role of random chance; while the game may be strategically deep, it doesn’t have immediate thematic appeal. Go is a great game, and it’s also very dry. It has white stones and black stones and a board, but that’s it. No wizards, no teleportation effects, not even castling. You put a stone on the board and it sits there forever (unless the group is surrounded and it dies). Go also values control and elegance, as programmers should. We want our programs to be “dry” and boring. We want the problems that we solve to be interesting and complex, but the code itself should be so elegant as to be “obvious”, and elegant/obvious things are (in this way) “boring”. We don’t want a ZombieMaster to come into play (or onto the heap) and cause all the Zombies to have different values in otherwise immutable fields. That’s “fun” in a game, where little is at stake and injections of random chance (unless we want a very dry game like Go) are welcome. It’s not something that we want in our programs. The real world will throw complexity and unpredictability at us: nodes in our networks will fail, traffic will spike, and bugs will occur in spite of our best intentions. The goal of our programs should be to manage that, not to create more of it. The real world is so damn chaotic that programming is fun even when we use the simplest, most comprehensible, “driest” tools like immutable records and referentially transparent functions.

So, go forth and write more functions and no more SerpentGeneratorGenerators or VibratorVisitorFactory patterns.

Academia, the Prisoner’s Dilemma, and the fate of Silicon Valley

In 2015, the moral and cultural failure of American academia is viewed as a fait accompli. The job market for professors is terrible and will remain so. The academy has sold out two generations already, and shows no sign of changing course. At this point, the most prominent function of academia (as far as the social mainstream is concerned) isn’t to educate people but to sort them so the corporate world knows who to hire. For our society, this loss of academia is a catastrophe. Academia has its faults, but it’s too important for us to just let it die.

To me, the self-inflicted death of academia underscores the importance of social skills. Now, I’m one of those people who came up late in terms of social interaction. I didn’t prioritize it when I was younger. I focused more on knowledge and demonstration of intelligence than on building up my social abilities. I was a nerd, and I’m sure that many of my readers can relate to that. What I’ve learned, as an adult, is that social skills matter. (Well, duh?) If you look at the impaired state that academia has found itself in, you see how much they matter.

I’m not talking about manipulative social skills, nor about becoming popular. That stuff helps an individual in zero-sum games, but it doesn’t benefit the collective or society at large. What really matters is a certain organizational (or, to use a term I’ll define later, coordinative) subset of social skills that, sadly, isn’t valued by people like academics or software engineers, and both categories suffer for it.

Academia

How did academia melt down? And why is it reasonable to argue that academics are themselves at fault? To be clear, I don’t think that this generation of dominant academics is to blame. I’d say that academia’s original sin is the tenure system. To be fair, I understand why tenure is valuable. At heart, it’s a good idea: academics shouldn’t lose their jobs (and, in a reputation-obsessed industry, such a loss often ends their careers) because their work pulls them in a direction disfavored by shifting political winds. The problem is that tenure allowed the dominant, entrenched academics to adopt an attitude– research über alles– that hurt the young, especially in the humanities. Academic research is genuinely useful, whether we’re talking about particle physics or medieval history. It has value, and far more value than society believes it has. The problem? During the favorable climate of the Cold War, a generation of academics decided that research was the only part of the job that mattered, and that teaching was grunt work to be handed off to graduate students or minimized. Eventually, we ended up with a system that presumed that academics were mainly interested in research, and that therefore devalued teaching in the evaluation of academics, so that even the young rising academics (graduate students and pre-tenure professors) who might not share this attitude still had to act according to it, because the “real work” that determined their careers was research.

The sciences could get away with the “research über alles” attitude, because intelligent people understand that scientific research is important and worth paying for. If someone blew off Calculus II but advanced the state of nuclear physics, that was tolerated. The humanities? Well, I’d argue that the entire point of humanities departments is the transmission of culture: teaching and outreach. So, while the science departments could get away with a certain attitude toward their teaching and research and the relative importance of each– a “1000x” researcher really is worth his keep even if he’s a terrible teacher– there was no possible way for humanities departments to pull it off.

To be fair, not every academic individually disdains teaching. Many understand its importance, find it upsetting that teaching is so undervalued, and wish that it were otherwise, but they’re stuck in a system where the only thing that matters, from a career perspective, is where they can get their papers published. And this is the crime of tenure: the young who are trying to enter academia are suffering for the sins of their (tenured, safe) predecessors.

Society responded to the negative attitude taken toward teaching. The thinking was: if professors are so willing to treat teaching as commodity grunt work, maybe they’re right and it is commodity grunt work. Then, maybe we should have 300 students in a class and we should replace these solidly middle-class professorships with adjunct positions. It’s worth pointing out that adjunct teaching jobs were never intended to be career jobs for academics. The purpose of adjunct teaching positions was to allow experienced non-academic practitioners to promote their professional field and to share experience. (The low salaries reflect this. These jobs were intended for successful, wealthy professionals for whom the pay was a non-concern.) They were never intended to facilitate the creation of an academic underclass. But, with academia in such a degraded state, they’re now filled with people who intended to be career academics.

Academia’s devolution is a textbook case of a prisoner’s dilemma. The individual’s best career option is to put 100% of his focus on research, and to do the bare minimum when it comes to teaching. Yet, if every academic does that, academia becomes increasingly disliked and irrelevant, and the academic job market will be even worse for the next cohort. The health of the academy requires a society in which the decision-makers are educated and cultured (which we don’t have). People won’t continue to pay for things that seem unimportant to them, because they’ve never been taught them. So, in a world where even most Silicon Valley billionaires can’t name seven of Shakespeare’s plays and many leading politicians couldn’t even spell the playwright’s name, what should we expect other than academia’s devolution?

Academia still exists, but in an emasculated form that plays by the rules of the corporate mainstream. Combine this with the loss of vision and long-term thinking in the corporate world (the “next quarter” affliction) and we have a bad result for academia and society as a whole. Those first academics who created the “research über alles” culture doomed their young to a public that doesn’t understand their value, declining public funding, adjunct hell, and second and third post-docs. With the job market in tatters, professors became increasingly beholden to corporations and governments for grant money, and intellectual conformism increased.

I am, on a high level, on the side of the academics. There should be more jobs for them, and they should get more respect, and they’re suffering for an attitude that was copped by their privileged antecedents in a different time, with different rules. A tenured professor in the 1970s had a certain cozy life that might have left him feeling entitled to blow off his teaching duties. He could throw 200 students into an auditorium, show up 10 minutes late, make it obvious that he felt he had better things to do than to teach undergraduates… and it really didn’t matter to him that one of those students was a future state senator who’d defund his university 40 years later. In 2015, hasty teaching is more of an effect of desperation than arrogance, so I don’t hold it against the individual academic. I also believe that it is better to fix academia than to write it off. What exists that can replace it? I don’t see any alternatives. And these colleges and universities (at least, the top 100 or so most prestigious ones) aren’t going to go away– they’re too rich, and Corporate America is too stingy to train or to sort people– so we might as well make them more useful.

The need for coordinative action

Individuals cannot beat a prisoner’s dilemma. Coordination and trust are required in order to get a positive outcome. Plenty of academics would love to put more work into their teaching, and into community outreach and other activities that could increase the relevance and value assigned to their work, but they don’t feel that they’d be able to compete with those who put a 100% focus on research and publication (regardless of the quality of the research, because getting published is what matters). And they’re probably right. They’re in a broken system, and they know it, but opposing it is so damaging to the individual, and the job market is so competitive, that almost no one can do anything but take the individually beneficial action.

Academic teaching suffers from the current state of affairs, but the quality of research is impaired as well. It might have made sense, for individual benefit, for a tenured academic in the 1970s to blow off teaching. But this, as I’ve discussed, only led society to undervalue what was supposed to be taught. The state of academia has become so bad that researchers spend an ungodly amount of time begging for money. Professors spend so much time on fundraising that many of them no longer perform research themselves; they’ve become professional managers who raise money and take credit for their graduate students’ work. To be truthful, I don’t think that this dynamic is malicious on the professors’ part. It’s pretty much impossible to put yourself through the degrading task of raising money and to do creative work at the same time. It’s not that they want to step back and have graduate students do the hard work; it’s that most of them can’t do otherwise, due to external circumstances that they’d gladly be rid of.

If “professors” were a bloc that could be ascribed a meaningful will, it’s possible that this whole process wouldn’t have happened. If they’d perceived that devaluing teaching in the 1970s would lead to an imploded job market and funding climate two decades later, perhaps they wouldn’t have made the decisions that they did. Teach now, or beg later. Given that pair of choices, I’ll teach now. Who wouldn’t? In fact, I’m sure that many academics would love to put all the time and emotional energy wasted on fundraising into their teaching instead, if it would solve the money problem now instead of 30 years from now. But the tenure system allowed a senior generation of academics to run up a social debt and hand their juniors the bill, and academia’s stuck in a shitty situation that it can’t work its way out of. So what can be done about it?

Coordinative vs. manipulative social skills

It’s well understood that academics have poor social skills. By “well understood”, I don’t mean that it’s necessarily true, but that it’s the prevailing stereotype. Do academics lack social skills? In order to answer this question, I’m going to split “social skills” into three categories. (There are certainly more, and these categories aren’t necessarily mutually exclusive.) The categories are:

  • interpersonal: the ability to get along with others, be well-liked, make and keep friends. This is what most people think of when they judge another person’s “social skills”.
  • coordinative: the ability to resolve conflicts and direct a large group of people toward a shared interest.
  • manipulative: the ability to exploit others’ emotions and get them to unwittingly do one’s dirty work.

How do academics stack up in each category? I think that, in terms of interpersonal social skills, academics follow the standard trajectory of highly intelligent people: severe social difficulty when young, worst in the late teens, that resolves (mostly) in the mid- to late 20s. Why is this so common a pattern? There’s a lot that I could say about it. (For example, I suspect that the social awkwardness of highly intelligent people is more likely to be a subclinical analogue of a bipolar spectrum disorder than a subclinical variety of autism/Asperger’s.) Mainly, it’s what I’d call the 20% Time Effect (named in honor of Google). That 10 or 20 percent social deficit (whether you attribute it to altered consciousness, via a subclinical bipolar or autism-spectrum disorder, or to just having other interests) that is typical of highly intelligent people is catastrophic in adolescence but a non-issue in adulthood. A 20-year-old whose social maturity is that of a 17-year-old is a fuckup; a 40-year-old with the social maturity of a 34-year-old fits in just fine. Thus, I think that, by the time they’re on the tenure track (age 27-30+) most professors are relatively normal when it comes to interpersonal social abilities. They’re able to marry, start families, hold down jobs, and create their own social circles. While it’s possible that an individual-level lack of interpersonal ability (microstate) is the current cause of the continuing dreadful macrostate that academia is in, I doubt it.

What about manipulative social skills? Interpersonal skills probably follow a bell curve, whereas manipulative social skill seems to have a binary distribution: you have civilians, who lack it completely, and you have psychopaths, who are murderously good at turning others into shrapnel. Psychopaths exist in academia, as they do everywhere, and they are probably neither appreciably more nor less common there than in other industries. And since academia’s decline played out through external forces (politicians devaluing and defunding it, and corporations turning it toward their own coarser purposes) responding to academia’s own collective failures, rather than through internal predation, I think it’s unlikely that academia is suffering from an excess of psychopaths within its walls.

What academia is missing is coordinative social skill. It has been more than 30 years since academia decided to sell out its young, and the ivory tower has not managed to fix its horrendous situation and reverse the decline of its relevance. Academia has the talent, and it has the people, but it doesn’t have what it takes to get academics working together to fight for their cause, and to reward the outreach activities (and especially teaching) that will be necessary if academia wants to be treated as relevant, ever again.

I think I can attribute this lack of coordinative social skill to at least two sources. The first is an artifact of having poor interpersonal skills in adolescence, which is when coordinative skills are typically learned. This can be overcome, even in middle or late adulthood, but it generally requires that a person reach out of his comfort zone. Interpersonal social skills are necessary for basic survival, but coordinative social skills are only mandatory for people who want to effect change or lead others, and not everyone wants that. So, one would expect that some number of people who were bad-to-mediocre, interpersonally, in high school and college, would maintain a lasting deficit in coordinative social skill– and be perfectly fine with that.

The second is social isolation. Academia is cult-like. It’s assumed that the top 5% of undergraduate students will go on to graduate school. Except for the outlier case in which one is recruited for a high-level role at the next Facebook, smart undergraduate students are expected to go straight into graduate school. Then, to leave graduate school (which about half do, before the PhD) is seen as a mark of failure. Few students actually fail out for lack of ability (if you’re smart enough to get in, you can probably do the work) but a much larger number lose motivation and give up. Leaving after the PhD for, say, finance is also viewed as distasteful. Moreover, while it’s possible to resume a graduate program after a leave of absence, or to join one after a couple years of post-college life, those who leave the academic track at any time after the PhD are seen as damaged goods, and unhireable in the academic job market. They’ve committed a cardinal sin: they left. (“How could they?”) Those who leave academia are regarded as apostates, and people outside of academia are seen as intellectual lightweights. With an attitude like that, social isolation is to be expected. People who have started businesses and formed unions and organized communities could help academics get out of their self-created sand trap of irrelevance. The problem is that the ivory tower has such a culture of arrogance that it will never listen to such people.

Seem familiar?

Now, we focus on Silicon Valley and the VC virus that’s been infecting the software industry. If we view the future as linear, Silicon Valley seems to be headed not for irrelevance or failure but for the worst kind of success. Of course, history isn’t linear and no one can predict its future. I know what I want to happen. As for what will, and when? Some people thought I made a fool of myself when I challenged a certain bloviating, spoiled asshat to a rap duel– few people caught on to the logic of what I was doing– and I’m not going to risk making a fool of myself, again, by making predictions.

Software engineers, like academics, have a dreadful lack of coordinative social skill. Not only that, but the Silicon Valley system, as it currently exists, requires that lack. If software engineers had the collective will to fight for themselves, they’d be far better treated and would be running the place, and it would be a much better world overall, though the current VC kingmakers wouldn’t be happy. Unfortunately, the Silicon Valley elite has done a great job of dividing makers on all sorts of issues: gender, programming languages, the H-1B program, and so on… all the while, the well-connected investors and their shitty paradrop executive friends make tons of money while engineers get abused– and respond by abusing each other over bike-shed debates like code indentation. When someone with no qualifications other than going to high school with a lead investor is getting a $400k-per-year VP/Eng job and 1% of the equity, and engineers are getting 0.02%, who fucking cares about tabs versus spaces?

Is Silicon Valley headed down the same road as academia? I don’t know. The analogue of “research über alles” seems to be a strange attitude that mixes young male quixotry, open-source obsession– and I think that open-source software is a good thing, but less than 5% of software engineers will ever be paid to work on it, and not everyone without a GitHub profile is a loser– and crass commercialism couched in allusions to mythical creatures. (“Billion-dollar company” sounds bureaucratic, old, and lame; “unicorn” sounds… well, incredibly fucking immature if you ask me, but I’m not the target market.) If that culture seems at odds with itself, that’s an accurate perception. It’s intentionally muddled, self-contradictory, and needlessly divisive. The culture of Silicon Valley engineering is one created by the colonial overseers, and not by the engineers. Programmers never liked open-plan offices and still don’t like them, and “Scrum” (at least, Scrum in practice) is just a way to make micromanagement sound “youthy”.

For 1970s academia, there was no external force that tried to ruin it or (as has been done with Silicon Valley) turn it into an emasculated colonial outpost for the mainstream business elite. Academia created its own destruction, and the tenure system allowed it, by enabling the arrogance of the established (which ruined the job prospects of the next generation). It was, I would argue, purely a lack of coordinative social skill, brought on by a cult-like social isolation, that did this. Silicon Valley, though, I would argue was destroyed intentionally (and so far, the destruction is moral but not yet financial, insofar as money is still being made, just by the wrong people). We need examine only one dimension of social skill– a lack of coordinative skill– to understand academia’s decline. In Silicon Valley, there are two at play: the lack of coordinative social skill among the makers who actually build things, and the manipulative social skills deployed by psychopaths, brought in by the mainstream business culture, to keep the makers divided over minutiae and petty drama. What this means, I am just starting to figure out.

Academia is a closed system and largely wants to be so. Professors, in general, want to be isolated from the ugliness of the mainstream corporate world. Otherwise, they’d be in it, making three times as much money on half the effort. However, the character of Silicon Valley’s makers (as opposed to its colonial overseers) tends to be ex-academic. Most of us makers are people who were attracted to science and discovery and the concept of a “life of the mind”, but left the academy upon realizing its general irrelevance and decline. As ex-academics, we simultaneously have an attitude of rebellion against it and a nostalgic attraction to its better traits, including its “coziness”. What I’ve realized is that the colonial overseers of Silicon Valley are very adept at exploiting this. Take the infantilizing Google Culture, which provides ball pits and free massage (one per year) but has closed allocation and Enron-style performance reviews. Google, knowing that many of its best employees are ex-academics– I consider grad-school dropouts to be ex-academic– wants to create the cult-like, superficially cozy world that enables people to stop asking the hard questions or putting themselves outside of their comfort zones (the latter seeming to be a necessary prerequisite for developing or deploying coordinative social skills).

In contrast to academia, Silicon Valley makers don’t want to be in a closed system. Most of these engineers want to have a large impact on the world, but a corporation can easily hack them (regardless of the value of the work they’re actually doing) by simply telling them that they’re having an effect on “millions of users”. This enables them to get a lot of grunt work done by people who’d otherwise demand far more respect and compensation. This ruse is similar to a cult that tells its members that large donations will “send out positive energy waves” and cure cancer. It can be appealing (and, again, cozy) to hand one’s own moral decision-making over to an organization, but it rarely turns out well.

Fate

I’ve already said that I’m not going to try to predict the future, because while there is finitude in foolishness, it’s very hard to predict exactly when a system runs out of greater fools. I don’t think that anyone can do that reliably. What I will do is identify points of strain. First, I don’t think that the Silicon Valley model is robust or sustainable. Once its software engineers realize on a deep level just how stacked the odds are against them– that they’re not going to be CEOs inside of 3 years– it’s likely either to collapse or to be forced to evolve into something that has an entirely different class of people in charge of it.

Right now, Silicon Valley prevents engineer awakening through aggressive age discrimination. Ageism is yet another trait of software culture that comes entirely from the colonial overseers. Programmers don’t think of their elders as somehow defective. Rather, we venerate them. We love taking opportunities to learn from them. No decent programmer seriously believes that our more experienced counterparts are somehow “not with it”. Sure, they’re more expensive, but they’re also fucking worth it. Why does the investor class need such a culture of ageism to exist? It’s simple. If there were too many 50-year-old engineers kicking around the Valley– who, despite being highly talented, never became “unicorn” CEOs, either because of a lack of interest or because CEO slots are still quite rare– then the young’uns would start to realize that their own odds of becoming billionaires from their startup jobs are too slim to justify the 90-hour weeks. Age discrimination is about hiding the 50th-percentile future from the quixotic young males that Silicon Valley depends on for its grunt work.

The problem, of course, with such an ageist culture is that it tends to produce bad technology. If there aren’t senior programmers around to mentor the juniors and review the code, and if there’s a deadline culture (which is usually the case), then the result will be a brittle product, because the code quality will be so poor. Business people tend to assume that this is fixable later on, but often it’s not. First, a lot of software is totaled, by which I mean it would take more time and effort to fix it than to rewrite it from scratch. Of course, the latter option (even when it is the sensible one) is so politically hairy as to be impractical. What often happens, when a total rewrite (embarrassing to the original architects) is called for, is that the team that built the original system throws so much political firepower (justification requests, legacy requirements that the new system must obey, morale sabotage) at it that the new-system team is under even tighter deadlines and suffers from more communication failures than the original team did. The likely result is that the new system won’t be any good either. As for maintaining totaled software for as long as it lives: those become the projects that no one wants to do. Most companies toss legacy maintenance to their least successful engineers, who are rarely people with the skills to improve it. With these approaches blocked, external consultants might be hired. The problem there is that, while some of these consultants are worth ten times their hourly rate, many expensive software consultants are no good at all. Worse yet, business people are horrible at judging external consultants, while the people who have the ability to judge them (senior engineers) have a political stake and will therefore, in evaluating and selecting external code fixers, be affected by the political pressures on them. The sum result of all of this is that many technology companies built under the VC model are extremely brittle, and their “technical debt” is often impossible to repay. In fact, “technical debt” is one of the worst metaphors I’ve encountered in this field: real debt has a known interest rate, usually between 0 and 30 percent per year; technical debt has a usurious and unpredictable one.

So what are we seeing, as the mainstream business culture completes its colonization of Silicon Valley? We’ve seen makers get marginalized, we’ve seen an ageism that is especially cruel because it takes so many years to become any good at programming, and we’ve seen increasing brittleness in the products and businesses created, due to the colonizers’ willful ignorance of the threat posed by technical debt.

Where is this going? I’m not sure. I think it behooves everyone who is involved in that game, however, to have a plan should that whole mess go into a fiery collapse.

Employees at Google, Yahoo, and Amazon lose nothing if they unionize. Here’s why.

Google, Yahoo, and Amazon have one thing in common with, probably, the majority of large, ethically-challenged software companies. They use stack-ranking, also known as top-grading, also known as rank-and-yank. By top-level mandate, some pre-ordained percentage of employees must fail. A much larger contingent of employees face the stigma of being labelled below-average or average, which not only blocks promotion but makes internal mobility difficult. Stack ranking is a nasty game that executives play against their own employees, forcing them to stab each other in the back. It ought to be ended. Sadly, software engineers do not seem to have the ability to get it abolished. They largely agree that it’s toxic, but nothing’s been done about it, and nothing will be done about it so long as most software engineers remain apolitical cowards who refuse to fight for themselves.

I’ve spent years studying the question of whether it is good or bad for software engineers in the Valley to unionize. The answer is: it depends. There are different kinds of unions, and different situations call for different kinds of collective action. In general, I think the way to go is to create guilds like Hollywood’s actors’ and writers’ guilds, which avoid interfering with meritocracy (no seniority systems or compensation ceilings) but establish minimum terms of work, and provide representation and support in case of unfair treatment by management. Stack ranking, binding mandatory arbitration clauses, non-competes, and the mandatory inclusion of performance reviews in a candidate’s transfer packet for internal mobility could all be abolished if unions were brought in. So what stands to be lost? A couple hundred dollars per year in dues? Compared to the regular abuse that software engineers suffer in stack-ranked companies, that has got to be the cheapest insurance plan there is.

To make it clear, I’m not arguing that every software company should be unionized. I don’t think, for example, that a 7-person startup needs to bring in a union. Nor is it entirely about size. It’s about the relationship between the workers and management. The major objections to unionization come down to the claim that unions commoditize labor: what once could have had warm-fuzzy associations of creative exertion and love of the work becomes something that people are disallowed from doing more than 10 hours per day without overtime pay. However, once the executives have decided to commoditize the workers’ labor, what’s lost in bringing in a union? At bulk levels, labor just seems to become a commodity. Perhaps that’s a sad realization to have, and those who wish it were otherwise should consider going independent or starting their own companies. Once a company sees a worker as an atom of “headcount” instead of an individual, or as a piece of machinery to be “assigned” to a specific spot in the system, it’s time to call in the unions. Unions generally don’t decelerate the commoditization of labor; instead, they accept it as a fait accompli and try to make sure that the commoditization happens on fair terms for the workers. You want to play stack-ranking, divide-and-conquer, “tough culture” games against our engineers? Fine, but we’re mandating a 6-month minimum severance for those pushed out, retroactively striking all binding mandatory arbitration clauses in employment contracts should any wrongful termination suits occur, offering to pay the legal expenses of exiting employees, and (while we’re at it) raising salaries to a minimum of $220,000 per year. Eat it, biscuit-cutters.

If unions come roaring into Silicon Valley, we can expect a massive fight from its established malefactors. And since they can’t win in numbers (engineers outnumber them) they will try to fight culturally, claiming that unions threaten to create an adversarial climate between engineers and management. Sadly, many young engineers will be likely to fall for this line, since they tend to believe that they’re going to be management inside of 30 months. To that, I have two counterpoints. First, unions don’t necessarily create an adversarial climate; they create a negotiatory one. They give engineers a chance to fight back against bad behaviors, and they provide a way to negotiate terms that would be embarrassing for an individual to negotiate. For example, no engineer, while he’s negotiating a job offer, can talk about ripping out the binding mandatory arbitration clause (it signals, “I’m considering the possibility, however remote, that I might have to sue you”) or fight against over-broad IP assignments (“I plan on having side projects, which won’t directly compete with you, but may compete for my time, attention and affection”) or non-competes (“I haven’t ruled out the possibility of working for a competing firm”). Right now, the balance of power between employers and employees in Silicon Valley is so demonically horrible that simply insisting on one’s natural and legal rights makes a prospective employee, in HR terms, a “PITA” risk, and that will end the discussion right there. Instead, we need a collective organization that can strike these onerous employment terms for everyone.

When a company’s management plays stack-ranking games against its employees, an adversarial climate between management and labor already exists. Bringing in a union won’t create such an environment; it will only make the one that exists more fair. You absolutely want a union whenever it becomes time to say, “Look, we know that you view our labor as a commodity– we get it, we’re not special snowflakes in your eyes, and we’re fine with that– so let’s talk about setting fair terms of exchange”.

Am I claiming that all of Silicon Valley should be unionized? Perhaps an employer-independent and relatively lightweight union like Hollywood’s actors’ and writers’ guilds would be useful. With the stack-rank companies in particular, however, I think that it’s time to take the discussion even further. While I don’t support absolutely everything that people have come to associate with unions, the threat needs to be there. You want to stack-rank our engineers? Well, then we’re putting in a seniority system and making you unable to fire people without our say-so.

At Google, for example, engineers live in perennial fear of “Perf” and “the Perf Room”. (There actually is no “Perf Room”, so when a Google manager threatens to “take you into the Perf Room” or to “Perf you”, it’s strictly metaphorical. The place doesn’t actually exist, and while the terminology often gets a bit rapey– an employee assigned a sub-3.0 score is said to be “biting the pillow”– all that actually happens is that a number is inserted into a computerized form.) Perf scores, which are often hidden from the employee, follow him forever. They make internal mobility difficult, because even average scores make an engineer less desirable as a transfer candidate than a new hire– why take a 50th- or even 75th-percentile internal hire and risk angering the candidate’s current manager, when you can fill the spot with a politically unentangled external candidate? The whole process exists to deprive the employee of the right to state her own case for her capability, and to represent her performance history on her terms. And it’s the sort of abusive behavior that will never end until the executives of the stack-ranked companies are opposed with collective action. It’s time to take them, and their shitty behaviors, into the Perf Room for good.

Anger’s paradoxical value, and the closing of the Middle Path in Silicon Valley

Anger

Anger is a strange emotion. I’ve made no efforts to conceal that I have a lot of it, and toward targets so vile (those who have destroyed the culture of Silicon Valley and, by extension due to that region’s assigned status of leadership, the technology industry) that most would call it “justified”. Anger is, however, one of those emotions that humans prefer to ignore. It produces (in roughly increasing order of severity) foul language, outbursts, threats, retaliations and destroyed relationships, and frank physical violence. The fruits of anger are disliked, and not for bad reasons, because most of those byproducts are horrible. Most anger is, additionally, a passing and somewhat errant emotion; the target of the anger might not deserve violence, retaliation, or even insults. In fact, some anger is completely unjustified, so it’s best not to act on anger until we’ve had a chance to process and examine it. The bad kind of anger tends to be short-lived; had humans acted on it whenever it emerged, we wouldn’t have made it this far as a species. Still, most of us agree that much anger, especially the long-lived kind that doesn’t go away, is justified in some moral sense. To be angry, three years later, at an incompetent driver is deemed silly. To be angry over a traumatic incident or a life-altering injustice is held to be understandable.

However, is justified anger good? The answer, I would say, is paradoxical. For the individual, anger isn’t good. I’m not saying that the emotion should be ignored or “bottled in”. It should be acknowledged and allowed to pass. Holding on to it forever is, however, counterproductive. It’s stressful and unpleasant and sometimes harmful. As the Buddha said, “holding on to anger is like grasping a hot coal with the intent of throwing it at someone else; you are the one who gets burned.” Anger, held too long, is a toxic and dreadful emotion that seems to be devoid of value– to the individual. This isn’t news. So what’s the issue? Why am I interested in talking about it? Because anger is extremely useful for the human collective.

Only anger, it often seems, can muster the force that is needed to overthrow evil. Let’s be honest: the problem has its act together. We aren’t going to overthrow the global corporate elite by beaming love waves at them. No one is going to liberate the technology industry from its Damaso overlords with a message of hope and joy alone. We can probably get them to vacate without forcibly removing them, but it’s not going to happen without a threatening storm headed their way. Any solution to any social problem will involve some people getting hurt, if only because the people who run the world now are willing to hurt other people, by the millions, in order to protect their positions.

Anger is, I’m afraid, the emotion that spreads most quickly throughout a group, and sometimes the only thing that can hold it together. Of course, this can be a force for good or for evil. Many of history’s most noted ragemongers were people who did harm to the world. That fact, however, supports the argument that, if good people shy away from the job of spreading indignation and resentment, then only evil people will be doing it. For me, that’s an upsetting realization.

Whether we’re talking about “yellow journalism” or bloggers or anyone else who fights for social change, spreading anger is a major part of what they do. It’s something that I do, often consciously. When I discuss Silicon Valley’s cultural problems, the reason I mention Evan Spiegel or Lucas Duplan (for the uninitiated, they are two well-connected, rich, unlikeable and unqualified people who were made startup founders) is that they inspire resentment and hatred. Dry discussions of systemic problems don’t lead to social change; they lead to more dry debate, and that debate leads to more debate, and nothing ever gets done until someone “condescends” to talk to the public and get them pissed off. For that purpose, a Joffrey figure like Evan Spiegel is just much “catchier”. This is why founder-quality issues, personified by Duplan and Spiegel, and “Google Buses” are a better vector of attack against Sand Hill Road than the deeper technical reasons (e.g. principal-agent problems that take kilowords to explain in detail) for that ecosystem’s moral failure. It’s hard to get people riled up about investor collusion, and much easier to point to this picture of Lucas Duplan.

This current incarnation of Silicon Valley needs to be pushed aside and discarded, because it’s hurting the world. The whole ecosystem– the shitty corporate cultures with the age discrimination and open-plan fetishism, the juvenile talk about “unicorns” because it’s a cute way of covering up the reality of an industry that only cares about growth for its own sake, the insane short-term greed, the utter lack of concern for ethics, the investor collusion, and the founder-quality issues– needs to be burned to the ground so we can build something new. And I have enough talent that, while I can’t change anything on my own, I can contribute. When I (unintentionally) revealed the existence of stack-ranking at Google to the public, I damaged that company’s reputation. The degree to which I did so is probably not significant, relative to its daily swings on the stock market, but with enough people in the good fight, victory is possible.

Here’s what I don’t like. Clearly, anger is painful for the person experiencing it. As an individual, I would do better to let it pass. I can personally deal with the pain of it, but it leads me to question whether there is social value in disseminating it. And yet, without people like me spreading and multiplying this justified anger at the moral failure of Silicon Valley, no change will occur and evil will win. This is what makes anger paradoxical. For the individual, the prudent thing to do is to let it go. For society, moral justice demands that it spread and amplify. Even if we accept that collective anger can just as easily be a force for bad (and it can), we still have to confront the fact that if good people decline to spread and multiply anger against evil, then the sheer power of collective anger will be wielded only by evil. We need, as a countervailing force, for the good people to comprehend and direct the force of collective anger.

The Middle Path

Why do I detest Silicon Valley? I don’t live there, and I have better options than to take a pay cut in exchange for 0.03% of a post-A startup, so why does that cesspool matter to me at this point? In large part, it’s because the Bay Area wasn’t always a cesspool. It used to be run by lifelong engineers for engineers, and now it’s some shitty outpost of the mainstream business culture, and I find that devolution to be deplorable. The Valley used to be a haven for nerds (here, meaning people who value intellectual fulfillment more than maximizing their wealth and social status) and now it’s become a haven for MBA-culture rejects who go West to take advantage of the nerds. It’s a joke, it’s awful, and it’s very easy to get angry at it. But why? Why is it worth anger? Shouldn’t we divest ourselves, emotionally, and be content to let that cesspool implode?

I don’t care about Silicon Valley, meaning the Bay Area, but I do care about the future of the technology industry. Technology is just too important to the future of humanity for us to ignore it, or to surrender it to barbarians. The technology industry used to represent the Middle Path between the two undesirable options of (a) wholesale subordination to the existing elite and (b) violent revolt. It was founded by people who neither wanted to acquiesce to the Establishment nor to overthrow it with physical force. They just wanted to build cool things, to indulge their intellectual curiosities, and possibly to outperform an existing oligarchy and therefore refute its claims of meritocracy.

Unfortunately, Silicon Valley became a victim of its own success. It outperformed the Establishment and so the Establishment, rather than declining gracefully into lesser relevance, found a way to colonize it through the good-old-boy network of Bay Area venture capital. To be fair, the natives allowed themselves to be conquered. It wasn’t hard for the invaders to do, because software engineers have such a broken tribal identity and such a culture of foolish individualism that divide-and-conquer tactics worked easily (for a modern example that illustrates how fucked we are as a tribe, take “Agile”/Scrum, which has evolved into a system where programmers rat each other out to management for free). Programmers are, not surprisingly, prone to a bit of cerebral narcissism, and the result of this is that they lash out with more anger at unskilled programmers and bad code than against the managerial forces (lack of interest in training, deadline culture) that created the bad programmers and awful legacy code in the first place. It’s remarkably easy for a businessman to turn a group of programmers against itself, so much so that any collective action (either a labor union, or professionalization) by programmers remains a pipe dream. The result is a culture of individualism and arrogance where almost every programmer believes that most of his colleagues are mouth-breathing idiots (and, to be fair, most of them are severely undertrained). There’s a joke in Silicon Valley about “flat” software teams where every programmer considers himself to be the leader, but it’s not entirely a joke. In the typical venture-funded startup, the engineers each believe that they’ll have investor contact within 6 months and founder/CEO status inside of 3 years. (They wouldn’t throw down 90-hour weeks if it were otherwise.) By the time programmers are old enough to recognize how rarely that happens (and how even more rarely people actually get rich in this game, unless they were born with the contacts that put them on the VC side, or that get them inserted into high positions in portfolio companies, where diversification is possible), they’re judged as being too old to program in the Valley. That is too convenient for those in power to be attributed to coincidence.

Sand Hill Road needs to be taken down because it has blocked the Middle Path that used to exist in Silicon Valley, and that should exist, if not in that location, somewhere in the technology industry. The old Establishment would have its territory chipped away by technology startups (harmlessly, most often, because large corporations don’t die unless they do it to themselves), and it was content to let this happen because, so often, the territory it lost was territory that it didn’t understand well enough to care about. The new Establishment, on Sand Hill Road, is harder to outperform because, if it sees you as a threat, it will fund your competitors, ruin your reputation, and render your company unable to function.

I don’t believe that Silicon Valley’s closing of the Middle Path will be permanent, and it’s best for all of us that it not be. I am obviously not in favor of subordination to the global elite. They are the enemy, and something will have to be done about, or at least around, them in order to reverse the corruption and organizational decay that they’ve inflicted on the world. On the other hand, I view violent revolution as an absolute last resort. Violence is preferable to subordination and defeat, but it is otherwise just about the worst possible way to achieve anything. Disliking the extremes, I want the moderate approach: effective opposition to the enemies of progress, without the violence that so easily leads to chaos and the harm of innocents. So when the mainstream business elite enters a space (like technology) in which it does not belong, colonizes it, and thereby blocks the Middle Path, it’s a scary proposition. Of course I cannot predict the future, but I can perceive risks; and the closing of the Middle Path represents too much of a risk for us to allow it. If the Middle Path has closed in venture-funded technology in the Valley, it’s time to move on to something else.

Do I think that humanity is doomed, simply because a man-child oligarchy in one geographical area (“Silicon Valley”) has closed the Middle Path that once existed in its location? Of course not. Among those in the know, the VC-engorged monstrosity that now exists in the Valley has ceased to inspire, or even to lead. It seems, then, that it is time to move past it, and to figure out where to open a new Middle Path.

If getting people to do this– to recognize the importance of doing this– requires a bit of emotional appeal along a vector such as anger or resentment, I’ll be around and I know how to pull it off.

Technology is run by the wrong people

I have a confession to make: I have a strong tendency to “jump”, emotionally and intellectually, to the biggest problem that I see at a given time. I’ve tempered it with age, because it’s often counterproductive. In organizational or corporate life, solving the bigger problem, or jumping to the biggest problem that you have the ability to solve, often gets you fired. Most organizations demand that a person work on artificial small problems through a years-long evaluative period before he gets to solve the important problems, and I’ve personally never had the patience to play that game (and it is a political game; recognizing it as such has been detrimental, since I would have found it harder to resent had I not realized what it was) at all, much less to a win. The people who jump to the biggest problem are received as insubordinate and unreliable, not because they actually are unreliable, but because they reliably do something that those without vision tend both not to understand, and to dislike. There are too many negative things (whether there is truth or value in them, or not) that can be said, in the corporate theater, about a person who immediately jumps to the biggest problem– she only wants to work on the fun stuff, she’s over-focused and bad at multi-tasking, she’s pushy and obsessive, she wants the boss’s boss’s job and isn’t good at hiding it– and it’s only a matter of time before many of them are actually said.

Organizations need people like this, if they wish to survive, and they know this; but they also don’t believe that they need very many of them. Worse yet, corporate consistency mandates that the people formally trusted (i.e. those who negotiated for explicitly-declared trust in the form of job titles) be the ones who are allowed to do that sort of work. The rest, should they self-promote to a more important task than what they’ve been assigned, are considered to be breaking rank and will usually be fired. People dislike “fixers”, especially when their work is what’s being fixed. It’s probably no surprise, then, that modern organizations, over time, become full of problems that most people can see but no one has the courage to fix.

Let’s take this impulse– attack the biggest problem or, better yet, find an even bigger one– and discuss the technology industry. Let’s jump away from debates about tools and get to the big problems. What is the biggest problem with it? Tabs versus spaces? Python versus Ruby? East Coast versus West versus Midwest? Hardly. Don’t get me wrong: I enjoy debating the merits and drawbacks of various programming languages. I may not like the language Spoo as much as my favored tools, but I’d never suggest that the people promoting Spoo are anything but intelligent people with the best intentions. We may disagree, but in good faith. Outside of security, discussion of bad-faith players and their activity is rare. It’s almost taboo to acknowledge that they exist. In fact, Hacker News now formally censors “negativity”, which includes the assertion, or even the suggestion, that there are many bad actors in the technology world, especially in Silicon Valley and even more especially at the top. But there are. There is a level of power, in Silicon Valley, at which malevolent players become more common than good people, and it’s people at that level of power who call the most important shots. If we ignore this, we’re missing the fucking point of everything.

There is room for this programming language and that one. That is a matter of taste and opinion, and I have a stance (static typing as much as possible), but there are people of equal or superior intellectual and moral quality who disagree with me. There is room for functional programming as well as imperative programming. Where there is no nuance (unless one is a syphilitic psychopath) is on this statement: technology, in general, is run by the wrong people. While this claim (“wrong people”) is technically subjective in the same way that color is technically subjective, we can treat it as a working fact, just as the subjectivity of color does not excuse a person running a red light under the argument that he perceived it as green. Worse, the technology industry is run by bad people, and by bad, I don’t mean that they are merely bad at their jobs; I mean that they are unethical, culturally malignant, and belong in jail.

Why is this? And what does it mean? Before answering those questions, it’s important to understand what kind of bad people have managed to push themselves into the top ranks of the technology industry.

Sadly, most of the people who comprise the (rising, and justified) anti-technology contingent don’t make a distinction between me and the actual Bad Guys. To them, the $140k/year engineers and the $400k/year VP/NTWTFKs (Non-Technical Who-The-Fuck-Knows) getting handed sinecures in other people’s companies by their friends on Sand Hill Road are the same crowd. They perceive classism and injustice, and they’re right, but they’re oblivious to the gap between the upper-working-class engineers who create technological value (but make few decisions) and the actually-upper-class pedigree-mongers who capture said value (and make most of the decisions, often badly) and who are at risk of running society into the ground. (If you think this is an exaggeration, look at house prices in the Bay Area. If these fuckers can’t figure out how to solve that problem, then who in the hell can trust them to run anything bigger than techie cantrips?) Why do the anti-technology protestors fail to recognize their true enemies, and therefore lose an opportunity to forge an alliance with the true technologists whose interests have also been trampled by the software industry’s corporate elite? Because we, meaning the engineers and true technologists, have let them.

As I see it, the economy of the Bay Area (and, increasingly, the U.S.) has three “estates”. In the First Estate are the Sand Hill Road business people. They don’t give a damn about technology for its own sake, and they’re an offshoot of the mainstream business elite. After failing in private equity or proving themselves not to be smart enough to do statistical arbitrage, they’re sent West to manage nerds, and while they’re poor in comparison to the hedge-fund crowd, they’re paid immensely by normal-people (or even Second Estate) standards. As in the acronym “FILTH” (Failed In London, Try Hong Kong), they are colonial overseers who weren’t good enough for leadership positions in the colonizing culture (the mainstream business/MBA culture), so they were sent to California to wave their dicks in the air while blustering about “unicorns“. In the Second Estate are the technologists and engineers who actually write the code and build the products; their annual incomes tend to top out around $200,000 to $300,000– not bad at all, but not enough to buy a house in the Bay Area– and becoming a founder (due to lack of “pedigree”, which is a code word for the massive class discrepancy between them and the VCs they need to pitch) is pretty much out of the question. In the Third Estate are the people, of average means, who feel disenfranchised as they are priced out of the Bay Area. They (understandably) can’t quite empathize with Second-Estate complaints about the cost of housing and pathetic equity slices, because they actually live on “normal people” (non-programmer, no graduate degree) salaries. As class tensions have built in San Francisco, the First Estate has been exceptionally adept at diverting Third-Estate animosity toward the Second, hence the “Google Bus” controversies. This prevents the Second and Third Estates from realizing that their common enemy is the First Estate, and thereby getting together and doing something about their common problem.

This echoes a common problem in technology companies. If a tech-company CEO in France or Germany tried to institute engineer stack ranking, an effigy would be burned on his own front lawn, his vehicle would be vandalized if not destroyed, and the right thing would happen (i.e., he’d revert the decision) the next day. An admirable trait that the European proletariat has, and that the American one lacks, is an immunity to divide-and-conquer tactics. The actual enemies of the people of San Francisco are the billionaires who believe in stack ranking and the NIMBYs, not 26-year-old schlubs who spend 3 hours per day on a Google bus. Likewise, when software engineers bludgeon each other over Ruby versus Java, they’re missing the greater point. The enemy isn’t “other languages”. It’s the idiot executive who (not understanding technology himself, and taking bad advice from a young sociopath who is good at pretending to understand software) instituted a top-down one-language policy that was never needed in the first place.

Who are the right people to run technology, and why are the current people in charge wrong for the job? Answering the first question is relatively easy. What is technology? It’s the application of acquired knowledge to solve problems. What problems should we be solving? What are the really big problems? Fundamentally, I think that the greatest evil is scarcity. From the time of Gilgamesh to the mid-20th century, human life was dominated by famine, war, slavery, murder, rape and torture. Contrary to myths about “noble savages”, pre-industrial men faced about a 0.5%-per-year chance of death in violent conflict. Aberrations aside, most of the horrible traits that we attribute to “human nature” are actually attributable to human nature under scarcity. What do we know about human nature without scarcity? Honestly, very little. Even the lives of the rich, in 2015, are still dominated by the existence of scarcity (and the need to protect an existence in which it is absent). We don’t have a good idea of what “human nature” is when human life is no longer dominated either by scarcity or by the counter-measures (work, sociological ascent) taken to avoid it.
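To put that 0.5%-per-year figure in perspective, here’s a back-of-envelope calculation of my own (an illustration under the stated assumptions, not a sourced statistic): a small annual risk compounds into a large lifetime one, assuming a constant rate and independence across years.

```python
# Back-of-envelope illustration (assumptions: a constant 0.5% annual rate of
# violent death, independent across years) of how per-year risk compounds.
annual_risk = 0.005

for years in (20, 40, 60):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"Over {years} years: {cumulative:.1%} cumulative chance of violent death")

# Over 20 years: 9.5%
# Over 40 years: 18.2%
# Over 60 years: 26.0%
```

In other words, at that rate, violence claims something like one person in four or five over a full adult lifetime– a level of danger that industrial societies, for all their flaws, have left far behind.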

The goal of a technologist is to make everyone rich. Obviously, that won’t happen overnight, and it has to be done in the right way. It’s better to do it with clean energy sources and lab-grown meat than with petroleum and animal death. The earth can’t afford to have everyone eating like Americans and flying thousands of miles per year until certain technological problems are solved (and I, honestly, believe that they can be solved, and aren’t terribly difficult). We have a lot of work to do, and most of us aren’t doing the right work, and it’s hard to blame the individual programmer, because there are so few jobs that enable a person to work on fundamental problems. Let’s, however, admit to a fundamental enemy: scarcity. Some might say that death is a fundamental enemy, especially in the Singularitarian crowd. I strongly disagree. Death is an unknown– I look forward to “the other side”, and if I am wrong and there is nothing on the other side, then I will not exist to be bothered by the fact– but I see no reason to despise it. Death will happen to me– even a technological singularity can only postpone it for a few billion years– and that is not a bad thing. Scarcity, on the other hand, is pretty fucking awful– far more deserving of “primal enemy” status than death. If scarcity in human life should continue indefinitely, I don’t want technological life extension. Eighty years of a mostly-charmed life in a mostly-shitty world, I can tolerate. Two hundred? Fuck that shit. If we’re not going to make major progress on scarcity in the next fifty years, I’ll be fucking glad to be making my natural exit.

Technologists (and, at this point, I’m speaking more about a mentality and ideology than a profession, because quite a large number of programmers are anti-intellectual fuckheads just as bad as the colonial officers who employ them) are humanity’s last resort in the battle against scarcity. Scarcity has been the norm, along with the moral corrosion that comes with it, for most of human history, and if we don’t kill it soon, we’ll destroy ourselves. We learned this in the first half of the 20th century. Actual scarcity was on the wane even then, because the Industrial Revolution worked; but old, tribalistic ideas– ideas from a time when scarcity was the rule– caused a series of horrendous wars and the deployment of one of the most destructive weapons ever conceived. We ought to strive to break out of such nonsense. There will always be inequalities of social status, but we ought to aim for a world in which being “poor” means being on a two-week waiting list to go to the Moon.

Who are the right people to run technology? Positive-sum players. People who want to make everyone prosperous, and to do so while reducing or eliminating environmental degradation. I hope that this is clear. There are many major moral issues in technology around privacy, safety and security, and our citizenship in the greater world. I don’t mean to make light of those. Those are important and complicated issues, and I won’t claim that I always have the right answer. Still, I think that those are ancillary to the main issue, which is that technology is not run by positive-sum players. Instead, it’s run by people who hoard social access, damage others’ careers even when there is little to gain, and play political games against each other and against the world.

To be clear, I don’t wish to identify as a capitalist or a socialist, or even as a liberal or conservative. The enemy is scarcity. We’ve seen that pure capitalism and pure socialism are undesirable and ineffective at eliminating it; but if it were otherwise, I’d welcome whichever solution did so. It’s important to remember that scarcity itself is our adversary, and not some collection of ideas called an “ideology” and manufactured into an “other”. One doesn’t necessarily need to be a liberal or a leftist in order to qualify as a technologist. This is about something different from the next election. This is about humanity and its long-term goals.

All of that said, there are people in society who prosper by creating scarcity. They keep powerful social organizations and groups closed, they deliberately concentrate power, and they excel at playing zero-sum games. And here’s the problem: while such people are possibly rarer than good-faith positive-sum players, they’re the ones who excel at organizational politics. They shift blame, take credit, and when they get into positions of power, they create artificial scarcity. Why? Because scarcity rarely galvanizes the have-nots against the haves; much more often, it creates chaos and distrust and divides the have-nots against each other, or (as in the case of San Francisco’s pointless conflict between the Second and Third Estates) pits the have-a-littles against the have-nothings.

Artificial scarcity is, in truth, all over the place in corporate life. Why do some people “get” good projects and creative freedom while others don’t? Why are many people (regardless of performance, and despite the well-documented benefits of taking time off) limited to two or three weeks of vacation per year? Why is stack ranking, which has the effect of making decent standing in the organization a limited resource, considered morally acceptable? Why do people put emotional investment into silly status currencies like control over other people’s time? It’s easy to write these questions off as “complex” and decline to answer them, but I think that the answer’s simple. Right now, in 2015, the people who are most feared, and therefore most powerful, in organizational life are those who can create and manipulate the machinery of scarcity. Some of that scarcity is intrinsic. It is not an artifact of evil that salary pools and creative freedom must fall under some limit; it is the way things are. However, an alarming quantity of that scarcity is not. How often does missing a “deadline” have absolutely no real negative consequence on anything– other than annoyance to a man-struating executive who deserves full blame for inventing an unrealistic timeframe in his own mind? Very often. How many corporations would suffer any ill effect if their stack-ranking machinery were abolished? Zero, and many would see immediate cultural improvements. Artificial scarcity is all over the place because there is power to be won by creating it; and, in the corporate world, those who acquire the most power are those who learn how to navigate environments of artificial scarcity– and who often generate more of it, because it solidifies their power once gained.

Who runs the technology industry? Venture capitalists. Even though many technology companies are not venture-funded, the VC-funded companies and their bosses (the VCs) set the culture, and they fund the companies that set salaries. Most of them, as I’ve discussed, are people who failed in the colonizing culture (the mainstream MBA/business world) and went West to boss nerds around. Having failed in the existing “Establishment” culture, they (somewhat unintentionally) create a new one that amplifies its worst traits, much in the way that people who are ejected from an established and (in relative terms) cool-headed criminal organization will often found a more violent one. They took the relationship-driven anti-meritocracy for which the Harvard-MBA world is notorious, and made a world (Sand Hill Road) that’s even more oligarchical, juvenile, and chauvinistic than the MBA culture it splintered off from. Worse than being zero-sum players, these are zero-sum players whose rejection by the MBA culture (not all of whose people are zero-sum players; there are some genuine good-faith positive-sum players in the business world) was often due to their lack of vision. And hence, we end up with stack ranking. Stack ranking wouldn’t exist except for the fact that many technology companies are run by “leftover” CEOs and VCs who couldn’t get leadership jobs anywhere else. And because of the long-standing climate of terrible leadership in this industry, we end up with Snapchat and Clinkle but scant funding for clean energy. We end up with a world in which most software engineers work on stupid products that don’t matter.

In 2015, we live in a time of broad-based and pervasive organizational decline. While Silicon Valley champions all that is “startup”, another way to perceive the accelerated birth-and-death cycle of organizations is that they’ve become shorter-lived and more disposable in general. Perhaps our society is reaching an organizational Hayflick limit (the cap on how many times a line of cells can divide before it senesces). Perhaps the “macro-age” of our current mode of life is senescent and, therefore, the organizations that we are able to form undergo rapid “micro” aging. There is individual gain, for a few, to be had in this period of organizational decay. A world in which organizations (whether young startups or old corporate pillars) are dying at such a high rate is one in which rapid ascent is more possible, especially for those who already possess inherited connections (because, while organizations themselves are much more volatile and short-lived, the people in charge don’t change very often) and can therefore position themselves as “serial entrepreneurs” or “visionary innovators” in Silicon Valley. What is missed, far too often, is that this fetishized “startup bloom” is not so much an artifact of good-faith outperformance of the Establishment as an opportunistic reaction to a society’s increasing inability to form and maintain organizations that are worth caring about. Wall Street and Silicon Valley both saw that mainstream Corporate America was becoming inhospitable to people with serious talent. Wall Street decided to beat it on compensation; Silicon Valley amped up the delusional rhetoric about “changing the world”, the exploitation of young, male quixotry, and the willingness to use false promises (executive in 3 years! investor contact!) to scout talent. That’s where we are now. The soul of our industry is not a driving hatred of scarcity, but the impulse to exploit the quixotry of young talent. If we can’t change that, then we shouldn’t be trusted to “change the world”, because our changes will be mostly negative.

Technology must escape its colonial overseers and bring genuine technologists into leading roles. It cannot happen fast enough. In order to do so, it’s going to have to dump Sand Hill Road and the Silicon Valley economy in general. I don’t know what will replace it, but what’s in place right now is so clearly not working that nothing is lost by throwing it out wholesale.