Malfragmentation

Paul Graham has been saying a lot of dumb things of late, and since he's rich, those things get taken more seriously than they deserve. I've decided that his recent essay on "Refragmentation" is worth some kind of comment. He documents, quite accurately, some notable historical changes that have worked to his personal benefit, and then poses reasons for those changes that are self-serving and bizarre.

What is "refragmentation"? As far as I can tell, it's the unwind of the organizational high era in the mid-20th century. In 1950, organizations were strong and respected. Large corporations were beloved, the U.S. government was held in high regard– it had just beaten the Nazis, after all– and people could expect lifelong employment at their companies. There was, in some way, a sense of national unity (unless you were black, or gay, or a woman who wanted more than the cookie-cutter housewife life) that some people miss in the present day. Economic inequality was low, and so was social inequality. Top students from public schools in Michigan actually could go to Harvard without getting recommendations from U.S. Senators. There are some things to miss about this era, and there's quite a lot not to miss. I'd rather be in this time, but how that period is viewed may be somewhat irrelevant, because there's no hope of going back to it.

In 2016, organizations are viewed as feeble, narrow-minded, and corrupt. We don’t really trust schools to teach, or our politicians to serve our interests rather than their own, and we certainly don’t trust our employers to look out for our economic interests. Unions and the middle-class jobs they’ve created have been on the wane for decades, and most of our Democrats are right-wingers compared to Eisenhower and even Nixon. In a time of organizational malaise, people burrow into small cultural islands that are mostly expressions of discontent. When I was a teenager, we had the “goths” and the “skaters” and the “nerds” and the “emo kids”. In adulthood, we have apocalyptic religious movements (the percentage of Americans who believe the End Times will occur in their lives is shockingly high) and anti-vaccine crusaders and wing-nuts from the left and the right. There’s good and bad in this. To the good, we’ve abandoned the notion that conformity is a virtue. To the bad, we’ve lost all sense of unity and all hope in our ability to solve collective problems, like healthcare. Rather than build a strong national system like Britain’s NHS, we’ve kept this hodge-podge of muck alive and, with an individual mandate to buy the lousy product on offer, made it worse and far more expensive.

Is this a refragmentation? Culturally, it appears that we’ve experienced one, and culturally, it’s arguably a good thing. Who wants to listen to the same 40 pop songs over and over? I don’t. The self-serve, find-your-own-tribe culture of the 2010s is certainly an improvement over the centralized one of, say, 1950s television.

There are benefits to "fragmentation", which is why we see strength in systems that enable it. The United States began as a federalist country with a weak national government, most powers left to the states, in order to allow experimentation and local sensibility rather than central mandate. There are also use cases that mandate unity and coordination. Ultimately, some compromise will be found. Case in point: time zones. Before the railroads were built, time was a local affair, with major cities defining "12:00" as solar noon and smaller cities using the time of the nearest metropolis. This was, needless to say, a mess. It meant that 11:00 in New York would be 10:37 in Pittsburgh and 11:12 in Boston, and who wants to keep track of all that? Time zones allowed each location to keep a locally relevant time, one usually within 30 minutes of what would be accurate, while imposing enough conformity to prevent the total chaos that would exist under the most fragmented possible policy toward time.

What I see in the corporate world, on the other hand, is malfragmentation. By this, I mean that there is a system that preserves the negatives of the old organizational high era, while losing its benefits.

This “worst of both worlds” dynamic shouldn’t be surprising, when one considers what post-1980 corporate capitalism really is. It’s neither capitalism nor socialism, but a hybrid that gives the benefits of both systems to a well-connected elite– a social elite often called “the 1 percent”, but arguably even smaller– and the negatives of each to the rest. Take air travel, for one example of this hybridization: we get Soviet quality of service and reliability, but capitalism’s price volatility and nickel-and-diming. Corporate life is much the same. Every corporation has a politburo called “management” whose job is to extract as much work as possible (“from each, according to his ability”) from the workers while paying as little as possible (“to each, according to his need”). Internally, these companies run themselves like command economies, with centrally-planned headcount allowances and top-down initiatives. Yet, the workers (unlike executives) are left completely exposed to the vicissitudes of the market, being laid off (or, in many of these sleazy tech companies that refuse to admit to a layoff, fired “for performance”) as soon as the organization judges it to be convenient. The global elite has managed to give itself capitalism’s upside and socialism’s protection against failure (with their connections, they will always be protected from their own incompetence) while leaving the rest of society with the detriments of the two systems. This chimeric merging of two ideologies or cultural movements is something that they’re good at. They’ve done it before. And so it is with malfragmentation.

Under malfragmentation, the working people are constantly divided. Sometimes the divisions are based on age or social class or (in the lower social classes) race, and sometimes they're based on internal organizational factors, like the decision that one set of workers is "the star team" and that the rest are underperformers. To be blunt about it, workers are often at fault for their own fragmentation, since they'll create many of the separations on their own, without external help. Let's use programming, since it's what I'm most familiar with. You have open-plan jockeys and "brogrammers" who want to drive out the "old fogeys" who insist on doing things properly, you have flamewars on Twitter about whether technology is or isn't hostile toward minorities, you have lifelong learners complaining about philistines who don't have side projects and stopped learning at 22 and the philistines whining about the lifelong learners attending too many conferences, you have Java programmers bashing Rubyists and vice versa, and so on. All of these tiresome battles distract us from fighting our real enemy: the colonizers who decided, at some point, that programmers should be treated as business subordinates. I'm not a fan of Java (on its technical merits) or brogrammers (ick), but let's put that stuff aside and focus on the war that actually matters.

The fragmentation within programming culture suits the needs of our colonizers, because it prevents us from banding together to overthrow them. Ultimately, we don't need the executive types; we could do their jobs easily, and better than they do, and they're not smart enough to do ours. Yet, with all of our cultural divisions and bike-shedding conflicts, we end up pulling each other down. Rather than face the enemy head-on, we cling to designations that make us feel superior ("I'm an $XLANG programmer, unlike those stupid $YLANG-ists") and tacitly assert that the rest of us deserve to be lowly business subordinates. In the Philippines, this is given the name "crab mentality", referring to the tendency of crabs trapped in a bucket to be unable to escape because, as soon as one seems to be getting out, the others pull it back in. It's absurd, self-defeating, and keeps us mired in mediocrity.

So what makes this malfragmentation rather than simply fragmentation? Our class enemies aren't divided. They're quite united. They share notes, constantly, whether about wages or about which individuals to put on "union risk" blacklists that can make employment in Silicon Valley very difficult. Venture capitalists have created such a strong culture of co-funding, social proof, and note-sharing ("let's make a list of the senior boys with the cutest butts!") that each entrepreneur gets only one real shot at making his entree into the founder class. I've experienced much of this nonsense first-hand, such as when an investor in Quora (almost certainly associated with Y Combinator) threatened that Quora would become "unfundable" unless it banned me immediately from the site. (This was in retaliation for my tongue-in-cheek invitation of Paul Graham to a rap battle.) The bad guys work together, and fragmentation is for the proles. That's how malfragmentation works.

You see the malfragmentatory tendency within organizations, indicating that it might be some natural pattern of human behavior. The "protect our own" impulse is very strong, and many groups in authority prefer to "handle matters internally" (which sometimes means "not at all"). The mid-2010s protests against police brutality have been sparked, in large part, by a public that is fed up with police departments that seem willing to protect their worst officers. Corporate management is similar, both within and between companies. A negative reference from a manager is often fatal to one's job candidacy, not because what was said in that reference is believed to be true (a rational person knows that it's usually not) but because a person who scuffled with a manager is likely to be viewed negatively by other managers, even in different companies. Managers protect their own, and programmers are the opposite– almost too eager to rat each other out to management over tabs-versus-spaces nonsense– and that's why programmers end up on the bottom.

Elites coalesce, and the proles fragment, and when this matter is brought up, skeptics accuse the person making this sort of statement of harboring a "Conspiracy Theory". Now, here's the thing about conspiracy. It exists. No, there's no Illuminati and there's no "room where they all meet". That's a fantasy. If there were a "room where they all meet", one could plant a bomb in said room and liberate humanity from its shadowy overlords and icy manipulators… and that's obviously not the case. The upper-case-C "Conspiracy" doesn't exist, while lower-case-c conspiracies form and dissolve all the time. Of course, the people forming these don't think of themselves as "conspirators", because most of them don't have any sense of right or wrong in the first place; to them, they're just trading favors and working together. What we call "abuses of power", they just call "power". Although they are individually far too selfish to pull off the grand Conspiracies of folklore, and there's plenty of in-fighting within any elite, they're more than capable of working together to put down the proles when circumstances call for it.

It's easy to understand why elites coalesce: they have something to defend, and their episodes of cooperation don't require absurd loyalty to a "Conspiracy" when mere selfishness (the desire to stay within, or get further into, an in-crowd) suffices. Why, on the other hand, are proles driven toward fragmentation? Do their overseers deliberately encourage it? To some degree, that happens, but that sort of influence doesn't seem to be needed. They'll fragment on their own. This seems to happen because of a culture whose individualism is born of a sort of social defeatism. We've given up on making the collective lot better, so we've accepted the low status of the worker as a permanent affair, and we fragment ourselves out of a desire for differentiation. We might accept that programmers in some other language "deserve" to be Scrum-drones on 0.05%, because we're $XLANG programmers and so much smarter. We've given up on the idea that programming might deserve to be a genuine profession where Scrum-drones don't even get in.

I’ve written before about how Paul Graham is bad for the world. The irony of his piece on “Refragmentation” is that he’s an ultimate source of malfragmentation. He has coalesced the startup world’s founder class, making it far easier for those included in the Y Combinator in-crowd to share social resources (such as contacts into the investor class) and how-to advice on pulling off the unethical business practices for which VC-funded startups are so well known. He decries the old establishment because he wasn’t a part of it, while proposing that the even-worse proto-establishment that has emerged in Silicon Valley is somehow superior because, on paper, it “looks” distributed and “fragmented”. There is an appearance of competition between sub-sectors of the elite that keeps the worker class from figuring out what’s really going on.

This ties in, ultimately, to something much larger than Paul Graham: the Silicon Valley brand. There's a bill of pretty rotten goods being sold, in order to exploit the middle-class myth that achieving wealth requires starting one's own business. Of course, achieving extreme wealth almost certainly does require that, but (a) most people would like to strive for reasonable comfort, first, and worry about wealth later, and (b) very few people (meaning, less than 1 percent) who start businesses achieve that status. Thus, a "tech founder" career is sold to people who don't know any better. In the old two-class Silicon Valley (investors vs. everyone else), the founders were often the marks; but in the post-2000 three-class Valley (investors, founders, workers), the game is being played against the workers, with the founders' assistance. Founders, even if they fail, are permitted to achieve moderate wealth (through acqui-hires and executive sinecures at their investors' portfolio companies) as a "performance" bonus if they keep up the ruse; they are, in the post-modern meta-company of Silicon Valley, its middle managers. It's the employees who are being conned, being told that they're 2-4 years from entree into the founder class when, realistically, they're about as likely to become "founder material" as they are to win the Powerball. Not only will writing great code never get a programmer introduced to investors, but it will encourage the founders to make sure such an introduction never happens, lest they lose him to another company, or to his own.

The malfragmented Silicon Valley has its worker class laboring under the illusion that they’re working for independent, emerging small businesses when, in fact, they’re working for one of the worst big companies– the Sand Hill Road investor class and the puppet-leader founder class– to have come along in quite a long time. It’s one that carries the negatives of old-style corporate oligopoly, but abandons the positives (such as the employer-side obligations of the old “social contract”). It’s unclear to me whether this ruse is sustainable, and I hope that it isn’t.

Corporate ageism is collective depression

This article crossed my transom recently. It’s about the difficulties that older (here, over 50) women face in finding work. Older men don’t have it easy, either, and in the startup world, it’s common for the ageism to start much earlier. Influential goofball Paul Graham famously said that 38 is “too old” to start a company, despite ample evidence to the contrary.

I'm 32, so I'm not "old" yet, by most definitions, but it's time to speak bluntly about the age prejudice, especially in software. It's stupid. To the extent that the ageism is intentional, it's malevolent. The benefit that Silicon Valley's powerbrokers gain from age discrimination is two-fold: the younger are easier to take advantage of, and the artificial time pressure put on these people by the ageist culture makes them doubly so. For the rest of us, it's a raw deal. We know that people don't lose the ability to program– and, much more importantly, to write good code– as they age. The ageism doesn't come from technologists; it comes from our colonizing culture. It's time to kill it. The first step is to recognize corporate ageism for what it is, and to understand why it exists.

I had a phase of my life, like many people of talent, where I spent far too much time studying “IQ” and intelligence testing. There’s a whole wad of information I could put here, but the relevant topic is age, and the truth is that no one really knows when humans peak. Barring dementia and related health problems, which I’ll get back to, the argument can be made for a “peak age” as early as 20 or as late as 70. That’s not so much because “intelligence” is subjective (it’s more objective than people want to admit) but because the curve in healthy individuals is just very flat in adulthood, meaning that measurement will be dominated by random “noise”. Of course, some individuals peak early and some peak late; in the arts and mathematics, there are those who did their best work before 25 and others who did their best work in old age, but the overall average shows a rather flat profile for intellectual capability in adulthood.

If that's the case, why is there ageism in Corporate America? If intellectual ability isn't supposed to decline, and experience only increases, shouldn't older workers be the most desired ones? This analysis discounts one factor, which I find to be a common but under-acknowledged player: depression. And depression certainly can (if temporarily) impede creativity and reduce observable intelligence.

Midlife depression (possibly subclinical) seems to be a natural byproduct of the corporate game. The winners are exhausted and (excluding the born psychopaths, who might be immune to this effect) disgusted by the moral compromise required to gain their victories. The losers are demoralized and bitter. This is utterly predictable, because the harem-queen game, as played for millennia, is largely driven by the objective of making one’s opponents too depressed to continue in the competition. Even in the 21st century, when there’s no rational reason why office work should be stressful (it doesn’t improve the quality of the work) we see human nature driving toward this effect. The end result is that corporate midlifers tend, as a group, to be bitter, burned-out, defensive and miserable.

This isn’t immutable or natural. I can find absolutely no evidence of a natural reason why midlife, viewed positively in other cultures, would have such a high rate of burnout and depression. Yet, that such a thing exists, I would argue, is observably true. Most people over 40 in tech, excluding those in executive roles and those in elite programmer positions (which are more like R&D jobs, and usually entail VP/Director-equivalent titles) are miserable to be there. Does this merit not hiring such people? I doubt it. People can change radically according to context, and negative experiences seem more likely to strengthen people in the long run than to deplete them (even if the short-term effect is otherwise). Having dealt with mood disorders myself, I don’t stigmatize negative moods or view people as inferior for being unhappy sometimes. (In fact, the American social requirement to be, or to seem, constantly happy is one that I find utterly obnoxious. Fuck that shit in its eye.) I’d rather hire a 45-year-old who’s been burned out and miserable, and gotten through it, than a happy, wealthy 22-year-old who’s never felt adversity… but, alas, I’m not most people.

Corporate America is sometimes decried for "worshipping youth". I don't agree. Well-connected, rich kids get a different experience, but an average 22-year-old is not going to be tapping the benefits of fresh-faced youth. Instead, he's going to be assigned the lowliest of the grunt work and rarely given the benefit of the doubt. Ageism hurts the young and the old, and arguably has the flavor (if not direct influence) of a divide-and-conquer dynamic encouraged by the corporate owners– to keep the workers slugging each other over something as meaningless as when one was born, instead of working together to overthrow the owning class. Corporate America despises youth and age in different ways. Excluding the progeny of the well-connected "1 percent", who get shunted into protege roles, the young are distrusted and their motives are constantly questioned. Pushing back against an unreasonable request is taken as an expression of "entitlement", and a young worker who arrives late is assumed to have been drinking the night before, rather than having an acceptable, adult reason (kids, commute, elder care, illness) for the lateness. If one of the two groups is the more despised in the corporate world, it's clearly the young.

The ageism of the corporate world toward older workers, instead, is more an acknowledgement of what that world does to people. It burns them out, leading to midlife depression (again, often subclinical) being common enough that even highly talented older workers struggle to overcome the stigma associated with their age. The corporate world knows that 20 years of residence within it causes depression and (almost certainly temporary) cognitive decline. While this decline would probably be completely and quickly reversible by improving the context (that is, by investing in a better culture), that is a change that very few companies are willing to make.

The corporate world has decided to view “too much” experience negatively. That doesn’t apply only to chronological age. It can also apply to “too many jobs” or “having seen too much” or having failed before. Why is that? Why do negative and even average experiences (that might, in a less fucked-up culture, be viewed as a source of wisdom) carry a stigma? I can’t answer that for sure, but I think that a major part of it is that we, as a culture, aren’t merely depressed individually (for some people) in midlife. It’s deeper. We’re depressed about midlife, and about aging, and therefore about the future. We’re depressed because we’ve accepted a system that inflicts needless depression and anxiety on people, and that probably wouldn’t be any less economically productive without those side effects. We’re depressed because our extremist individualism leaves us seeing a future of near-term demise and our nihilism leaves us convinced (despite scant evidence either way) that there can be nothing after one’s own physical death. This leads us to tolerate a corporate miasma that depletes people, purposelessly, because we view emotional and intellectual decline in midlife as “normal”, when it very much isn’t.

Amid the shuffling stupidity of private-sector bureaucracy, there are flashes of insight. While I find corporate ageism morally reprehensible on many levels, there is a small degree of correctness in it. Corporate residency harms and depletes people, often delivering no benefit to company or person, because that is the nature of humans when locked in a certain type of competition for resources. Ageism is the corporate system's rejection of the experiences it provides, and therefore an acknowledgment by Corporate America that it is parasitic and detrimental. Entities with pride tend to value (and sometimes over-value, but that's another debate) the experiences that they've produced for people, and the corporate world's tendency toward the opposite amounts to an admission that it has no such pride. One could argue that it, itself, lives under the fog of status anxiety, nihilism, and depression that it creates for those who live within it.

Insights into why a contravariant type can’t be a Haskell Functor

When learning Haskell's core types, type classes, and concepts, one often finds counterexamples useful for understanding what these abstractions "really are". Perhaps one of the most well-understood type class hierarchies is Functor-Applicative-Monad. We encounter Functor in the context of things that are "mappable". It's a producer, and using fmap we can "post-transform" all of the things that it produces in a way that preserves any underlying structure.

We quickly become familiar with a number of useful functors, like Maybe, [], (->) r, and IO. Functors seem to be everywhere. So, then, one asks: what might be a parameterized type that's not a Functor? The go-to example is something that's contravariant in its type parameter, like a -> Int, viewed as a type-level function of the parameter a. There doesn't seem to be any useful way to make it a Functor in a. That said, seemingly "useless" type-level artifacts are often quite useful, like the Const Functor, defined as Const r a ~= r (the a is a phantom type, and the functions being fmapped are ignored). So usefulness in this world isn't always intuitive. Still, it remains true that something like a -> Int is not a Functor. Why?
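(For reference, here is a minimal sketch of that Const type; it mirrors Data.Functor.Const. Since the a is a phantom, there is nothing for the mapped function to touch, so fmap discards it.)

-- A functor that just carries an r; the a is a phantom type.
newtype Const r a = Const { getConst :: r }

instance Functor (Const r) where
  -- The mapped function is ignored; the stored r passes through untouched.
  fmap _ (Const x) = Const x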

Let's work with the type a -> Bool, just because it's the simplest example of what goes wrong. Is there a way to make a Functor instance for that? To me, it's not intuitive that that thing isn't a Functor. It's "almost" a set (stepping around, for a moment, debates about what a set is), and Set, meaning the actual collection type in Data.Set, is "almost" a Functor. It's not one, because a Functor demands that the mapping be structure-preserving, which a set's notion of mapping is not (for example, if you map the squaring function over the set {2, -2}, you get a smaller set, {4}). There are many ways to get at the point that Set is not a Functor, most relying on the fact that Set a's methods almost invariably require Eq a and Ord a. But what about the type-agnostic "(<-) Bool" (my notation) above? It places no such constraints on a.
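(As an aside on the Set point, here's a quick sketch using Data.Set. Note that Set.map itself demands Ord b, which is exactly the kind of constraint a lawful fmap isn't allowed to impose.)

import qualified Data.Set as Set

-- Mapping a non-injective function over a set collapses elements,
-- so the container's "shape" is not preserved.
squares :: Set.Set Int
squares = Set.map (\x -> x * x) (Set.fromList [2, -2])
-- fromList [4]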

To answer this, let’s try to create the method, fmap, on which Functor relies.


-- pretend Haskell supports (<-) b a as a syntax for (->) a b = a -> b

instance Functor ((<-) Bool) where
  -- fmap :: (a -> b) -> f a -> f b
  -- in this case, (a -> b) -> (a -> Bool) -> (b -> Bool)
  fmap f x = ??

The type signature that we're demanding of our fmap is an interesting one: (a -> b) -> (a -> Bool) -> (b -> Bool). Notice that such a function doesn't have any a to work with. One might not even exist: the Functor needs to be valid over empty types (e.g. Void; note that the type [Void] is not empty but has exactly one inhabitant, []). In typical Functor cases, the a = Void case is handled soundly. For example, consider the list case: [] :: [Void] maps to [] :: [b] for any b, and any non-empty list has a's to work with. In our case, fmap's type signature gives us two things that consume a's (the a -> b and the a -> Bool) but no a to feed them. This means that we have to ignore both arguments; we can't make productive use of them.

instance Functor ((<-) Bool) where
  -- fmap :: (a -> b) -> f a -> f b
  -- in this case, (a -> b) -> (a -> Bool) -> (b -> Bool)
  fmap _ _ = (?? :: b -> Bool)

This rules out any useful Functors. In fact, our need for an expression that inhabits b -> Bool for any b limits us to two non-pathological possibilities: the constant functions! Since we don't have any b to work with, either, we have to commit to one of those. Without loss of generality, I'm going to use the definition fmap _ _ = const True. With a few tweaks to what I've done, Haskell will actually allow such a "Functor" instance to be created. But will it be right? It turns out that it won't be. To show this, we consult the laws for the Functor type class. In fact, we only need the simplest one (identity), fmap id === id, to refute it. That law is violated by the constant "Functor" above:

fmap id (const True) = const True -- OK!
fmap id (const False) = const True -- whoops!

So it turns out that (<-) Bool (my notation) has no lawful Functor instance at all (the only properly polymorphic candidates, the constant functions, fail the type class's most basic law), and the more complicated cases fail similarly.
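For completeness, here is a small, runnable sketch. Since (<-) Bool isn't real syntax, I'll use a hypothetical newtype Pred as a stand-in: the bogus constant "Functor" compiles but breaks the identity law, while the mapping this type actually supports runs backwards over the input (this is the contramap of Data.Functor.Contravariant, written out by hand here to stay self-contained).

-- Stand-in for my "(<-) Bool" notation: a predicate on a.
newtype Pred a = Pred { runPred :: a -> Bool }

-- The only polymorphic candidates are constant predicates. GHC accepts this...
instance Functor Pred where
  fmap _ _ = Pred (const True)

-- ...but the identity law (fmap id === id) fails:
--   runPred (fmap id (Pred (const False))) () == True   -- should be False

-- What a -> Bool does support is mapping over its input, i.e. contravariance.
class Contravariant' f where
  contramap :: (b -> a) -> f a -> f b

instance Contravariant' Pred where
  contramap g (Pred p) = Pred (p . g)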

Sorry Quora, but you just did what guilty people do.

I've obviously been paying a lot of attention (and "lurking", although my interest in posting there is pretty much gone for good) to Quora– the site that banned me because I jokingly challenged investor Paul Graham to a rap duel– and the mess that followed. I may be a controversial figure for my 2012 exposure of stack-ranking at Google, or for many of the lies I've exposed inside the startup game, or maybe just because some people hate static typing, but I was a model contributor to Quora by any definition. I don't have to rally people. Quora's users and employees are already pissed-off on my behalf. If anything, I'm trying to hold them back.

What has disappointed me about Quora’s conduct is not the ban itself. The dominant theory seems to be that it was subjected to undue pressure by investors and had no choice. However, this spot-on post by “James Crann” (a declared pseudonym) was, unfortunately, yanked from the site this morning. I’m glad that I was able to catch it. It was gone before 9:00 am. The post was inoffensive and reasonable. It did not even assert that one explanation of “QuoraGate” was the correct one. There was no reason to remove it from the site, unless something needed to be covered up.

Forgive any errors, as I’ve had to hand-type this text out from a screenshot.

Why was Michael O. Church banned from Quora?

James Crann (ed. note: this was a declared pseudonym).

I have no idea, but it’s almost certainly not the official explanation.

One possibility is the “investor-level extortion” theory that Michael has put forward. See his blog post: What the September 4, 2015 Quora disaster (#QuoraGate) tells us about VC-funded tech’s future. Another is that someone at Quora felt that Michael was getting “too big” and had to be taken down. Many forums ban posters who seem to be “breaking away” with a powerful following, and Michael is one of the most-followed non-celebrity posters. Or it could have been some other private grudge. Marc Bodnick and Michael Church always seemed to respect each other, but they had very different political views. It does seem odd that a site would take such action against a popular user, and there are a number of possible answers. I’m not as quick as Michael is to jump to a specific one of them, because I saw the Y Combinator feud as entertainment more than as a threat to anything. Honestly, I thought it possible that the YCs were in on the joke and using it for free publicity.

Quora could be having strings pulled on it, or it could be covering up for an overzealous admin who just triggered a land mine. To me, stupidity is as feasible an explanation for this as investor extortion.

As I mentioned in a comment on Ryan Chew's answer, there are several reasons why the "sock puppet" explanation is almost certainly untrue. For one thing, Michael Church has 8,590 followers, and anything he posts gets at least 10 legitimate upvotes, sometimes hundreds. I have a hard time believing that these are scores of sock puppets (accounts with rich histories, many tied to real people) rather than real users, or that his success on Quora is due to anything but the quality of his answers.

Second, sock puppeting wouldn’t be very effective on Quora because it has a PageRank-like system wherein the socks’ votes would (rightfully) be assigned a low level of credibility. (ed. note: I believe this to be correct, though I have no inside knowledge). For answer placement, who is doing the voting matters more than the raw number of up- or downvotes (and that makes sense).

I also don’t buy that Michael [has used] sock puppets to troll. He doesn’t seem to need the cover of anonymity to voice controversial opinions. He wants what he is saying tied to his real name. And while I’ve only met him a few times (ed. note: I don’t know who this person is, and I’m not going to share my guesses) he doesn’t seem to have a lot of free time and I can’t imagine that he has the time or interest necessary to run a sock puppet army.

Finally, it’s inconceivable that Quora would violate user privacy just because it suspects someone who doesn’t need sock puppets has an alternate account. Quora may suspect that Michael has more than one account (because it’s the Internet, and most people do) but they’re obviously using sock puppetry as a “We know what’s best for you” evasive answer. Something else is going on. It could be a VC putting pressure on Quora, or it could be an incompetent admin.

Disclaimer: I am not using my real name, because I fear a ban on my real-name account for speaking the truth about this.

Here are the photos that establish that this post (now not merely “collapsed” but actually covered up) did exist.

[Screenshots: "Screen Shot 2015-09-07 at 5.46.55 AM" and "Screen Shot 2015-09-07 at 5.47.18 AM"]

That post was removed around 7:00 am Pacific time. If there is a silver lining to all this, it’s that some poor bastard had the job of doing a corporation’s cover-up work at 7 am on a holiday. I would bet that he is a lot more upset with Quora than I am.

To my supporters at Quora

A dominant topic among employees of Quora, over this weekend, has been whether they’re going to stay with the company in light of its decision on September 4 to ban my account. Quora’s management seems to be concerned about the threat of attrition in the wake of this. So, I feel compelled to comment.

I’ve been in technology for 10 years, so I’m going to say a few things. First, if you’re seriously considering leaving your job on my behalf, you probably shouldn’t move based on that, alone. A user ban against an unobjectionable and popular user is a problem, and bad-faith use of administrative privileges is a big deal, but you have to look out for your own career. Don’t do anything rash. This isn’t as big of a deal as it seems, right now.

Obviously, it goes without saying that you shouldn’t leave one job without lining up the next one. If you’re presently employed, your offers seem to be about 20 percent better than what you get if you’re unemployed. There’s also the job hopper stigma to worry about: one short job is acceptable, but two or three in a row starts to hurt you. You probably also shouldn’t tell your boss, if you choose to leave Quora, that you’re doing it because of “9/4”. He’s not going to want to hear it. When a company faces a sudden morale crisis of this magnitude, the last thing that a manager wants is a “you too?”

Personally, I appreciate the support. However, I’ve also “been there” and the tech world can be extremely vindictive. No matter how incensed you are about Quora’s decision to ban me, leave on good terms if you choose to leave, and strongly consider not leaving Quora if it’s just over this. Just trust me on this one, okay?

Now, opinions in general seem to be split down the middle on why my account was banned. Half of the people I talk to seem to believe the investor-level extortion hypothesis, and the other half seem to think it's something more mundane, like an admin settling a score or just a bureaucratic mistake. One person implicated a specific Quora administrator who intends to apply to Y Combinator in the next cycle. (No one believes the "official" explanation involving sock puppets.)

If it turns out that the decision to ban me came from inside Quora, then the company itself is ethically suspect and, by all means, leave it if that is the case. It's too early to make such a claim, however.

On the other hand, if Quora was subjected to investor-level threats, then I implore you to understand, at the very least, that Quora had no choice in the matter. Investors' threats are a big deal, and it's far, far better for one user to be banned from a website than for 115 people's jobs to be put at risk. It could be that "9/4" was the less horrible of two options. We still have to wait and see, but don't do anything rash on my behalf. I'm doing just fine.

Thank you all for your continuing support.

What the September 4, 2015 Quora disaster (#QuoraGate) tells us about VC-funded tech’s future

For those who don't know the back story: under some kind of investor-level pressure, I was banned from Quora on September 4, 2015. No valid reason was given. I was a "Top Writer", and Quora frequently had my answers republished in venues including Fortune, Time, BBC, Forbes, and The Huffington Post. Some people (including me) believe that the ban is connected to the feud that Dan Gackle and Paul Buchheit started with me last month, involving direct pressure either from Y Combinator (an investor in Quora) or people purporting to act on Y Combinator's behalf.

I’m really sick of tech beefs, and I assume that my readers are, too. I’m not going to talk about the feud. These things are just so fucking stupid, I can’t stand to be in them anymore.

I’m going to talk about what Quora’s recent action means for everyone else, because this isn’t about just me.

Of course, I could still be wrong in my model of what happened. If Quora reverses the ban and attributes it to an embarrassing bureaucratic or technical error, I’ll accept that resolution, and consider it likely that they are being truthful in the assertion that it was a glitch. It isn’t too late to undo the harm. I don’t fault Quora, at least not until I know more, for its role in this.

So, everything I’m going to write assumes that my sources of information are correct. We’ll know which explanation is right by Tuesday or Wednesday– I don’t expect anything to change over the weekend– based on whether or not Quora un-bans my account. Given the morale problem that my banning has triggered internally at Quora– I’ve gotten to know several Quorans personally– it seems reasonable that Quora will reverse the ban in any scenario except for an existential threat to the company in the form of investor-level extortion.

I find no fault in Quora, at least not now. When faced with the decision between banning one user and having to fire 115 people because of a fundraising problem, the choice is obviously the former. So I have nothing negative to say about Quora. This is about the venture capitalists who chose to extort. Why did they sink so low?

I've been in technology for long enough to know that there are some sleazy people in it, but I've never seen sleaze hit the product level. For example, many technology companies stack rank their employees, and investors screwing founders is as old a game as empire, but tech companies don't often use intentional, bad-faith product failures to punish users. For example, Google and Facebook don't allow their employees to stalk their ex-significant-others, and will fire someone who tries to do so. That was the world we were used to: one where products weren't intentionally made to fail in order to punish users, because companies and investors knew that users' faith in the product was more valuable than settling some silly feud.

Here’s what September 4, 2015 means (or, at least, seems to mean). It means that VCs will threaten hundreds of jobs to settle a minor score in a feud that the other side (to be honest) wasn’t even taking entirely seriously. (I found it hard to believe that “Don’t Be Evil” Paul Buchheit would condescend to feud with me, so it was more of a fun joke, that I wanted to see if I could keep going, than a serious beef.) It means that, due to the importance of social proof and “signaling” in Silicon Valley, a single influential investor can pressure a company into bad-faith uses of its product. If you piss anyone off, no matter how silly your slight is, you can’t trust anything that the Valley has built. If you challenge an overfed three-digit millionaire to a stupid rap battle, you might face a defamatory user ban on an unrelated Q&A forum.

What happened last Friday, itself, doesn’t matter. I’ll find other uses of my time. Far more interesting is what this says about Silicon Valley and the fundamental brittleness of everything that it has built. If investors are willing to forcibly compromise the ethical integrity of a billion-dollar company over a goofy feud, then what happens if the stakes are higher?

Brittany Smith

It’s November 8, 2021, five months to the day after 25-year-old Brittany Smith, a former associate at a venture capital firm, was awarded $6.3 million at the end of a lawsuit against her billionaire ex-boss, Tom Smyrr, for sexual harassment. It was an obvious, open-and-shut case, even with Mr. Smyrr’s expensive legal team. In February 2020, on a trip to New York– a one-party-consent state for audio recording of conversation– he threatened to fire her, and ruin her reputation within Silicon Valley, if she didn’t perform a sexual favor. She said “no”– and her phone, recording everything, saved her career. She lost her job, and couldn’t get another one, because Tom Smyrr had slandered her throughout the industry, so severely that even her ex-boss’s enemies wouldn’t hire her. The past summer’s victory in the courtroom was the first step toward clearing her name, and it seems to have worked. Today she had her first interview! It went well, she thinks.

It’s 6:45 pm. It’s dark, rainy, and cold outside. Brittany’s exhausted from the interview and wants to get back to her hotel as quickly as she can. Not knowing where to find a cab in this strange city, she uses the new ride-hailing app, Vyper. Three minutes later, the driver arrives: an unsmiling man, with dark shades. She asks herself: is it even legal to wear sunglasses and drive at night? Eh, whatever, she thinks. She just wants to get to the hotel and go to sleep.

It's 7:09. Brittany notices that she's crossing a bridge, and it's one she hasn't seen before. Where am I? She checks her location on Loqate Maps. For twenty minutes, the driver's been going the wrong way! Her heart starts pounding. This isn't right. "Excuse me, sir," she says. "This isn't the way to my hotel." Must be an honest mistake, she thinks. Or is he going to rob me? she wonders. "I'm sorry," the man says. "It's that damn Loqate bug. It's sending me on a bad route. I'll get you out of here."

The Loqate bug? That was fixed 15 months ago! Why is this man running outdated software? No, she says to herself, don't judge. Not everyone keeps current with software updates. "I just started driving for Vyper this week," he says. She calms down a little. He's an older man, with a gentle and intelligent aura about him. Ten minutes of conversation with the man leaves her feeling relaxed. Okay, not a robber, not a creep, just a new driver. She's absolutely miffed about a 15-minute ride taking half an hour (and counting), so the driver assures her that the ride won't be charged.

It’s 7:26. Brittany looks out the window. She’s in a deserted, industrial, unattractive part of town, with dilapidated warehouses on both sides. This guy is terrible at route planning. Whatever, she thinks, he said he’s not charging me so there’s no meter to worry about.

She hears a “ping!” on her phone; her boyfriend shared a news article. She reads it. She’s too tired to find it funny and quickly finds herself (almost by force of habit) thumbing through her backlog of TechPress posts. July… not much worth reading. August… nothing. September… same-old stuff. October…

October 3, 2021: Billionaire Tom Smyrr invests $320 million in Vyper at $1.1 billion valuation.

I'm in a Vyper, Brittany realizes. Unease rises. That pressure behind her head, that throbbing in her neck, that sudden full-body sweating… are all new sensations to her. The driver is a hit man! Her eyes hit the speedometer. 18 mph. Seatbelt off, she pulls the door handle. Locked. OK, that could be a safety measure. Most cars auto-lock at 10 mph, she reminds herself. Maybe this is what a panic attack feels like. She's shaking, crying, banging on the door. Or maybe he's going to kill me! Full-on panic. She screams, "Stop the car! Stop the fucking car! Now!"

(To be continued?)

We are in new, weird, scary territory. I don’t like it.

Nonzero shit-fan interaction coefficient

I’ve been involved in a few high-profile tech feuds, and the not-surprising conclusion that I’m coming to is that they’re a waste of time.

Dan Gackle and Paul Buchheit, both associated with Y Combinator, chose to start a beef with me last month. Dan G., moderator of Hacker News, banned my account from Hacker News while intentionally taking one of my comments way out of context, then deleting that comment in a bad-faith attempt to represent his out-of-context interpretation as “official”. Paul Buchheit continued the feud by lobbing defamatory accusations at me on Quora. I don’t like to start fights, but I’ll end them on my own terms.

Under pressure that seems to have come directly from Y Combinator, Quora banned my account shortly before 1:00 pm. I was a model contributor, a three-year Top Writer, and never given any warning about conduct on the site or even the slightest inclination that I was in anything other than good standing. The ban came out of the blue. Jeff Meyerson and I worked together on the February 1, 2015 Quoracast, and Alecia Li Morgan worked with me on publishing several of my answers, and they were both great people to collaborate with.

To make it clear, I don't harbor any ill will toward Quora, and I certainly don't hold any toward its employees, many of whom I've worked with in the past, and who seem to be exemplary citizens. Quora took Y Combinator investment (and probably regrets it now, since the company seems to have lost critical autonomy) and is thus, to some degree, connected to a rat's nest of bad intentions that it can't possibly control. I don't fault Quora or anyone there for it. I assume good faith in the company itself and its people, and believe the incoming information indicating that the extortion laid upon them by external forces was so extreme as to leave them no other option.

The story coming to me (from multiple sources, as of this morning) is:

  • a source inside Quora has told me that some Quora employees are aware of the ban, and disagree (some strongly) with the decision. The consensus among Quorans (even including management) who know the situation is that I shouldn't have been banned. I thank them for their continuing support. There are many Quora employees of whom I think very highly, and I don't hold this against them in any way.
  • all evidence indicates that Quora was pressured to ban me by people associated with (and possibly part of) Y Combinator, retaliating because an anonymous contributor to Quora leaked the fact that Paul Graham’s animus toward me is largely based on this December 2013 blog post. Y Combinator seems to be acting under the assumption that the “leaker” is me, which can’t possibly be valid, because whoever did leak that fact clearly knows Paul Graham personally, and I don’t.

There is, I must note, a small chance that I am wrong. I don't expect much to change over the Labor Day weekend but if, by Tuesday or Wednesday, my Quora account is un-banned, we'll be able to chalk this up to an embarrassing technical or bureaucratic mistake and forget about the whole thing. If my account remains banned, the only sensible explanation will be the one that confirms the rumors of external pressure placed on Quora. An abuse of power (and, quite probably, outright extortion) by some (presumably investor-level) entity with power over Quora will be, literally, the only possible explanation (having ruled out the "bureaucratic mistake" explanation). This won't prove that Y Combinator itself is responsible, but it will strongly suggest such a claim, especially in light of this enormously stupid tech beef they've decided to have with me.

As for Paul Buchheit and Dan Gackle, you guys need to man the fuck up, apologize for your atrocious behavior, and let us end this stupid feud. It’s obnoxious, and it’s a waste of my time. I’m sick of your shit. Thanks in advance.

Engineers as clerks? How programmers failed to get the status they deserve.

One of the most interesting essays I've read recently is "I Would Prefer Not To": The Origins of the White Collar Worker, which describes the institution and existence of the 19th-century office clerk. Bartleby, the passively resistant clerk whose "I would prefer not to" leads to ruin and failure– in contrast to the prototypical clerk who performs the work eagerly, invested in the belief in graduation to a better job– has become one of the most famous American literary characters of all time. It wasn't a great life to be a clerk: work spaces were cramped and hot (as now in technology, but without air conditioning) and the work was extremely dull. However, for many wishing to enter the American bourgeoisie, it provided social mobility due to its proximity to wealthier people: the emerging class of industrialists, bureaucrats and traders who'd come to be known as businesspeople.

I’m going to clip two paragraphs from this excellent essay, in order to explain certain historical forces that still dominate American business. Emphasis is mine:

Tailer’s worries over his position were common in a clerking world where the distance between junior clerk and partner was seen as both enormous and easily surmountable. No other profession was so status conscious and anxiety-driven and yet also so straightforward seeming. No matter how dull their work might be at any given moment, there was little doubt that clerks saw themselves, and were seen by their bosses, as apprentice managers—businessmen in training. Few people thought they would languish as clerks, in the way that it became proverbial to imagine people spending their lives in a cubicle, or how for decades becoming a secretary was the highest position a woman office worker could aspire to. Part of the prestige of clerking lay in the vagueness of the job description. The nature of the dry goods business meant that clerks often spent time in the stores where their goods were sold, acting as salesmen and having to be personable to customers. In other words, the duties of clerks were vast enough to allow them to be tasked with anything, which meant that so much of their work depended upon so many unmeasurable factors besides a clerk’s productivity: his attitude, good manners, even his suitability as a future husband for the boss’s daughter. A good clerk besieged his bosses’ emotions the way he did customers—flattering them to the point of obsequiousness, until the bosses were assured that they had a good man on their hands. These personal abilities were part of the skill set of a clerk—something we know today as office politics—and though they couldn’t be notched on a résumé, they were the secret of the supposed illustriousness of business life. The work might dehumanize you, but whatever part of you that remained human was your key to moving up in the job.

This was also the reason clerks felt superior to manual laborers. Young men entering a factory job had no illusions about running the factory, which is why a few of them began to join the nascent American labor movement. But clerks were different from people who “worked with their hands,” and they knew it—a consciousness that Tailer registers when he declares the “awkward and clumsy work” of a porter unworthy of him. Young men who wanted to get into business knew they had to clerk, and they also knew that clerks could and often did eventually become partners in their firms. “Time alone will suffice to place him in the same situation as those his illustrious predecessors now hold!” Tailer wrote in one entry, loftily referring to himself in the third person. But though patience was the signal virtue of clerking—to write on, as Bartleby did, “silently, palely, mechanically”—impatience was its most signal marker. From the shop floor, the top of the Pittsburgh steel mill looked far off indeed. But in the six-person office, it was right next to you, in the demystified person of the fat and mutton-chopped figure asleep at the rolltop desk, ringed with faint wisps of cigar smoke.

Clerking degraded over time, as companies became larger and business became more oligarchical: the probability of advancing into the sort of role that one had actually joined the company for declined, because the number of people willing to be clerks rose faster than the number of wealthy business people the economy could support. Clerking, of the traditional variety, also became skippable for people of upper-class descent, thus losing its prestige. In the beginning of the 19th century, virtually everyone who wanted to become a businessman (and, in that time, it was mostly men) went through a clerking phase, but in the late Gilded Age, the scions of robber barons didn't have to clerk in order to become full-fledged businessmen, which led the more ambitious and intelligent of the era to conclude, increasingly, that clerking was dishonorable because, clearly, some people were "good enough" to skip that phase. (Doesn't this sound like the "everyone who's worth a damn starts a company (trigger warning: link contains syphilitic idiocy from Paul Graham)" attitude in Silicon Valley today? Yes, much that seems new is the old, repeated.) Then business schools were formed, the best of them able to vault someone past the clerking phase into The Business proper, typically while that person was under 30. Clerking lost its former prestige and, with the declining odds of progressing into the business ranks (the outcome that had made the passage worth it), seemed to die out.

Did it, really, though? Obviously, the job title of “clerk” doesn’t exist in any way that has the 19th century meaning, but I would say that clerking, culturally speaking, is still alive. Look at a typical Fortune 500 corporation. Most of the people in the business are (like a 19th-century clerk) in mostly-evaluative positions designed to lead into The Business, but they’re not called clerks, because there’s more specialization to that phase of the business career than there was in 1853. They’re accountants or executive assistants or marketing staff, with more prestige than “non-career” workers but strictly less prestige than executives (unlike physicians or professors, who live on a completely different scale). The assumption (not to call it valid) in each of these departments is that an X who’s any good is going to become a manager of X’s within 4-5 years and an executive (i.e. paid and treated like an actual business person) within 8 to 10 years. You might start as a process engineer or a salesperson, but if you don’t become a part of The Business proper within a certain amount of time, you’ve been marked as “non-career”. You weren’t invited to join the company; you just worked there.

Clerking evolved from an apprenticeship to a tournament at which most would fail, and the post-1870-ish specialization of the clerkship phase meant like must compete against like, as is true even today, for the limited supply of positions in The Business proper. Accountants competed with other accountants for the limited leadership positions, and marketing analysts went against other marketing analysts, and so on for each field. There was one group that realized, very quickly, that they were getting screwed by this: lawyers. Law is, of all the specialties that a business requires, perhaps the most cognitively demanding one that existed, at least with substantial numbers, in the late 19th century. It tended (like computer programming, today) to draw in the hyper-cerebral types insistent on tackling the big intellectual challenges. Or, to put it more bluntly, they were a very smart pool and it was tough for a lawyer to distinguish himself with a level of intelligence that would be dominant had time and opportunity brought him into a different pool, but might only be average among attorneys.

Lawyers realized that they were getting burned by “like competes against like” in the tournament for the opportunity to become actual partners in the business (i.e. executives and owners). The good news, for them as a tribe, was that they knew how to control and work the law, that being their job. They professionalized. They formed the American Bar Association (in the U.S.) and made it standard that, in corporate bureaucracies, lawyers report only into other attorneys. In-house corporate attorneys report to other attorneys, up to the General Counsel, who reports to the board (not the CEO). Law firms cannot be owned by non-lawyers. Accreditation also ensures (at least, in theory) a basic credibility for all members of the profession: one who loses a client is still a lawyer. The core concept here is that, while an attorney is a provider of services, attorneys are not supposed to be business subordinates (clerks). They have the right and obligation to follow ethical mandates that supersede managerial authority. This requires that the profession back them if they lose their jobs; the value of a limited-supply accreditation is that a person who is fired for exercising that (mandatory) ethical independence remains marketable and (in theory, at least) can resume his or her career, more or less, uninterrupted. Without that type of assurance, that level of ethical and professional independence is quite obviously impossible.

It may have made sense for most people in the corporate sphere to accept the (increasingly small) chance of ascent into The Business as enough of a reward to offset the negatives of being business subordinates. Perhaps only 5 to 10 percent of people in most departments were “Business quality”– I don’t know the answer to that– but lawyers knew that a higher percentage of them were, and that they wouldn’t get a fair shake in a like-competes-against-like system. In essence, they seceded from the existing, inadequate career track and formed a labor cartel (which is what professionalization is) in order to have a somewhat better one. They began to gain independent power and prestige, and managed to create their own firms in which terms for attorneys were (at least, for a time) better than what the standard corporate track offered them.

Clerks in Silicon Valley

The engineer-driven maker’s culture of Silicon Valley is dead. VC-funded startups are, in most ways, large corporations. They’re young businesses but they aren’t small, and their notion of agility is misleading. They’re more like missiles, designed to hit one target or explode “harmlessly” (to investors), than they are like fighter jets. Since VCs encourage founders only to take one kind of risk (two sources of risk is considered to be too many) and developing a single new product is supposed to be that source of risk, the result is that most of these companies have very traditional organizational structures. “Flat” organization is more often a signifier of an undocumented and unstable hierarchy than of hierarchy’s constitutional nonexistence. It often doesn’t mean open allocation so much as an environment of undefined leadership in which half the people think they are the de facto leaders, and in which you have to watch your back for randos trying to manage you. The result of this is that most VC-funded startups feel very much like Fortune 500 companies, except with shitty open-plan offices and different perks (better breakfast, worse health benefits).

The clerkship model has lived on in most of business. Accountants aspire to be CFOs, HR people aspire to be VPs of HR or COOs, and it’s a reasonable assumption that everyone in almost every department who’s any good is planning to become part of Business proper (not necessarily in that company, with some going to other firms or starting their own) some day. The result of this is that the darker side of the clerkship model– total subordination to The Business– is tolerable. In-house accountants and marketing analysts (unlike software engineers) don’t object to the idea that they should subordinate to business people, because they expect to be business people within a decade. A ladder (if not the internal ladder, an external ladder elsewhere) will be extended to the smart ones, and the not-smart ones aren’t thinking far enough ahead to object.

That same assumption, while held over software engineers by executives, doesn’t actually hold up. It’s not reasonable to assume that every smart programmer will be invited to join The Business after ten years because there aren’t enough spots. Moreover, as with law, a programmer barely knows the field at four years of experience. If everyone who was any good at software became a manager after four years, and an executive after eight, then there’d be no one left in the field to write high-quality software. (This might explain why there is so little high-quality software out there.) The world of computing actually needs for people who are genuinely talented and smart not to move into management after five years, because someone has to write code that isn’t horrendous.

Executives don’t know what to make of this. What is wrong with these people, that most of them don’t want to become executives themselves? Do we have some bizarre constitutional antipathy toward the concept of making money? (Of course not.) The truth of the matter is that engineers don’t not want to become executives. We like money and prestige and status as much as anyone else. We just know the odds, and we won’t work as hard as someone less talented will for a corporate lottery ticket. Like competes against like, and we’re in the smartest and toughest pool by far in most organizations. In most fields of business, an IQ of (say) 130 would make one a pre-eminent protégé from the outset, and a 140 would stand out enough to have executives (assuming they could recognize intelligence at that level; some can and some can’t) tripping over each other to be a person’s mentor. In software, a 130 might still be somewhat above average, but it’s not special, because we’re just a more competitive pool. If you walked into the Goldman Sachs analyst program with a 130+ IQ and not-horrible social skills, you’d stand out enough that in your first week, your MD would be asking you whether you wanted your work assignments over the next year to lead to direct promotion or Harvard Business School or buy-side placement, and plan your next 12 months accordingly. (You’d still have to deal with punishing hours; no one escapes that.) In programming? 130 is enough to handle almost all of the work, but it doesn’t make you a stand-out.

Programmers, to tell the truth, are a bad group to land with, professionally speaking, for two reasons. In raw intelligence, we’re the highest sub-discipline of business that there is (excluding, perhaps, computer hardware engineers and research scientists such as biochemists in drug companies). We’re smart, which means that people who would be pre-eminent protégés and direct promotes anywhere else are just average to average-plus among us. Even in the 140s and 150s and up, we’re reminded daily of flaws in our logic by this nasty-tempered subordinate called the computer that “would prefer not to” correct our spelling errors (and that’s actually a good thing). On the other hand, in social skills, we’re one of the worst. We don’t look out for each other or protect the group, and we don’t have the organizational skills to operate collectively or even guard ourselves against divide-and-conquer tactics (e.g. stack ranking and Scrum) from The Business. In peer-based performance reviews, actual business people are wise enough not to sell each other out without a reason, and give each other glowing reviews. We give “honest” reviews, because we’re a bunch of social imbeciles who don’t realize that a collective willingness to sell each other to management while getting nothing in return is devastating to us (all of us, even the best performers) as a group. So, when you’re a programmer, the skills that are competitive in the field (i.e. it can hurt you to have superior colleagues) are amped to 11 while the skills that are cooperative (i.e. having superior colleagues is to your benefit, because you back each other) are thin on the ground. It’s our lack of organizational ability and collective self-respect that keeps our status low. Some think that, in my writings on programming in the business world, I’m railing against “evil businessmen”, and I’m not. As people, they aren’t any more evil or greedy than anyone else. Our failure to achieve the status we deserve, as technologists, is on us. If we don’t demand higher status, we won’t get it. Business people aren’t evil, but they didn’t get where they are by being generous, either.

Business people do note our disinterest, relative to other professional specialties, in working on the projects that the business values most. This is an artifact of our accurate appraisal of our odds of rising to a level where we actually care about a specific company’s performance. Let’s say that 5 percent of white-collar people actually get into “Exec” or “The Business” after 8 years. That’s a 95 percent chance of wasting eight years of one’s life doing grunt work on a promise that was never delivered. The potential rewards are considerable, and we (as programmers and technologists) like money as much as anyone else, but we know that the odds are poor. That small chance of being promoted into “The Business”, which we see as a bureaucratic machine that mostly prevents people from getting work done, isn’t enough to motivate us. So, we’ll favor the project that enriches our future career prospects (and, if we recognize a ceiling where we are, our ambitions become external) over the one that benefits The Business.

Programmers have another trait that confuses executives, which is that we don’t see highly detailed work as dishonorable grunt work that one wishes to graduate out of, as soon as possible. In fact, relative to executives, there’s a complete inversion in the relation between detail-orientation and prestige. Everywhere else in business, work that is hazily defined and evaluated subjectively (i.e. it’s good work if people like you, regardless of whether it’s right) is the most prestigious, because there is the least risk in it. Executives only have to worry about being liked by other executives; workers have to be liked and get the work done right, which makes the latter position riskier and less prestigious because the presumption is that anyone with social skills and drive will get into something less exposed to fluctuations in one’s own performance (and to random human error). Programmers, on the other hand, have created their own bizarre culture where intellectually demanding and detailed work is the most prestigious. “Low level” programming sounds terrible to an executive, and “back-end” sounds like “back office” to management types, but most of the best programmers gravitate toward that kind of work, and away from the business-facing “front end”, because we find requests like “make the button a different shade of blue” to be demeaning and intellectually fruitless, whereas we find the highly detailed and cerebrally taxing challenges of “low level” computing to be much more fulfilling. Business people realize that companies are extremely complex and that making “the big decisions” requires having an army of trusted people who can digest the complexity; to them, leadership involves stepping away from the details. Programmers, on the other hand, want to zero in on the precise, careful stuff that bores most executives. One might think that this orthogonality of interests could create a symbiotic pairing of equals between businessmen and engineers, but it rarely happens that way, because the former don’t want to see the detail-seeking weirdos in the latter category as their social equals. The executives have the power to start, and they keep it, and as a result the high-power minds in most organizations are also the most disengaged ones.

As the business sees us, we’re still clerks, and that’s a raw deal, because the number of leadership positions is small while the number of us who are intellectually capable of ascent into The Business is much higher (like, over 80 percent as opposed to the less than 10 percent who’ll actually get there) than in any other sub-field. This is exactly the problem that attorneys (also a high-IQ sort) faced: the clerking game, with like competing against like, hurt them, because they’d always have a surplus of strong people who couldn’t be given (and who would not have wanted, were there other high-power career options) management roles. They realized (as we ought to) that they were too valuable and powerful to accept the “5-10 percent of you will be selected, after an evaluative period lasting several years, for ascent into The Business; the rest of you will be viewed as leftovers and excreted over time” deal that everyone else got.

Unlike lawyers, we haven’t succeeded in creating the labor cartel that would be necessary if we wanted businesses to pay us what we’re worth in cash instead of empty promises. We have used the complete inability of those who pay us to evaluate our work to create an “art for art’s sake” culture wherein credibility among skilled engineers matters more than traditional corporate credibility. That has made the job more fun, but it hasn’t increased our pay or relative status. Also, our “art for art’s sake” culture has given us the rather negative reputation of being averse to working for The Business. That’s not even accurate! We don’t like to work for The Business as a subordinate. We know the odds on that clerk game, and they aren’t good. If The Business were willing to meet us as equals, we could work together and the orthogonality of our affinities (our attraction to detailed and difficult work, their attraction to subjective and holistic work) could be mutually beneficial; but they’re not willing to do so.

The clerk system also doesn’t work for engineers because of the massive talent inversion. Just as the lowest officer outranks the highest enlisted man, the lowest executive outranks the highest non-executive in most companies. In other words, the top programmers are still lower than the lowest executives, including those who ascended along far less competitive, non-engineering ladders. For example, at Google it is genuinely difficult to become a Director-equivalent Principal Engineer, because there are only a handful of those, whereas it’s hilariously easy (i.e., unless you fuck up severely, you’ll get it inside of a few years) to reach the Director level on the management track. It doesn’t make sense. Noting the comical talent inversion that comes with the concept of the programmer as a business subordinate, we have a hard time respecting the official order that the company tries to put out as the consensus on the value of each person. We know that it’s a thousand times harder to become an executive-equivalent “individual contributor” engineer than to become an executive, so treating us like plebeians with aggressive relative down-titling is going to leave us cold.

Unfortunately, we haven’t got the organizational skills to come up with anything appreciably different from the archaic clerking system that originally justified total subordination to The Business. It’s what we work under. There are plenty of good software engineers who don’t move up into the executive ranks (and, for all I know, that could also be true in other departments) but business executives assume that there aren’t, and consequently, “engineer” means “leftovers”. It means low autonomy (Scrum!) and equity slices well below one-tenth of what an equivalent business person would earn. We’re wise enough to that culture to apply proper cynicism in Fortune 500 companies. The work that is beneficial to our career objectives (which may or may not involve climbing that particular company’s ladder) we do well– and if we will be able to commit the work to open-source and gain an external credibility, we’ll do it very well– and the rest of it we “prefer not to”, and we can often get away with it because such an incoherent clamor of requirements comes at us that it’s impossible to do everything, and because it’s basically impossible for someone who isn’t one of us to evaluate our work for how difficult it is or how long it should take except on some inaccurate emotional basis that, if actually enforced, will just result in good people getting fired and morale going up in smoke. “Product managers” and the like can yell at us, but they really have no idea what we’re up to. This isn’t ideal for us, nor for our employers, and it leads many companies into a culture of prevailing mediocrity (at least, in engineering) as relations between engineering and The Business decline. The point, though, is that we have a lot of latent power that we have no idea how to use. We haven’t figured out how to assert ourselves and get some sort of social equality. Or, perhaps, we prefer not to.

Ambitious software engineers don’t like this arrangement. We don’t want to give middling efforts to behemoth companies that couldn’t give a shit about us. This has traditionally led us in the direction of entrepreneurship, and companies have had to create special positions to retain engineering talent. Corporate programmers are viewed as “the leftovers” not because all the good ones are plucked into The Business, but because (at least, in theory) the ones with any talent are supposed to start companies, become independent consultants, or move into pure research or architecture roles that distance us from the ugly parts of corporate software engineering like “make the thingy thing work this slightly different way” requests and pager duty. Contrary to stereotype, there are some excellent software engineers at Fortune 500 companies, but almost all of them find a way to a “research engineer” track before the age of 35, because churning through tickets from The Business is not a job that anyone wants (unless that person has a delusional belief that such work will lead to rapid ascent into The Business; but, honestly, your odds are better– still very low, but better– if you directly email the CEO and explicitly ask to be his protégé.) Most large companies allow mainstream engineering to turn into a ghetto while putting all of the technology organization’s surplus smart people (i.e. talented people who can’t or don’t want to become executives) into an R&D group. The problem with this approach is that, while R&D relies on mainstream engineering for implementation, a growing resentment between the small, protected R&D group and the gritty, underappreciated, Scrum-ticketed “mainstream eng” leads to diminishing clout for the former. The R&D engineers are highly paid and given nicer titles, but they aren’t listened to because, as far as the embittered “left behind” programmers in mainstream engineering are concerned, they don’t do any of “the real work”. The end result is that most of these R&D engineers end up spending time on “fun projects” that never go into production, and are eventually cycled out of the company.

What this tells us is that Fortune 500 companies can, at least in some cases, recognize top software talent and its value, contrary to stereotype. They don’t necessarily get it right, all of the time, at an individual level, and there will always be sharp people who remain stuck in mainstream engineering; but they do realize the need to have some A-level talent in-house, and they have the insight to know that if “work” is fending off a bukkake of Scrum tickets and user stories, A-level people will leave. That’s what an “innovation lab” or a COE (“center of excellence”) is for: to protect top talent. For the individual engineer, it’s unfortunately not terribly stable. There’s the perennial threat of a research cutback, in tough times, meaning that one is punted into mainstream engineering where the Scrumlords lurk. Usually, this happens when money is tight and management roles are being doled out (in lieu of compensation) to retain the decent programmers in mainstream engineering, which means that the “former R&D” engineers don’t even land in management positions (all of those being taken by talented people in mainstream eng. that the company needs desperately to retain) or even on “green field”/new-code projects, but at the taint bottom, being asked “to help out” on legacy maintenance. Of course, if you want to turn a smart engineer into an ineffective idiot, forcing him to maintain idiotic legacy code (but without power, because the power must be doled out in lieu of compensation to key people already in mainstream engineering) is a very effective way of doing that.

Large companies don’t have a stable plan when it comes to top engineering talent. Labs and research divisions get cut so often that “research engineer” isn’t always the best long-term career path. It works if you live in that otherwise toxic cesspool called “The Bay Area”, because there are so many tech companies there, but it’s an erratic life for anyone else because research engineering positions are less common and it can require a geographic move to get one. The “software architect” track is more stable, but can be dangerously overconnected; because the work of the architects affects so many people, there are far too many meetings and horrible lines-and-boxes drawings, and this forces the architect to delegate even the enjoyable parts of the job.

Corporate bureaucracies struggle with outliers in general, and intellectual outliers are a special breed of difficult, and intellectual outliers who aren’t eligible for rapid promotion into the executive ranks (because there aren’t enough spots, or because they prefer to write code and don’t want to be executives, or because they aren’t “a cultural fit” for the boardroom) are pretty much intractable. So… for the past fifteen years, a going assumption has been that such “intractable” high-talent people should, as if it were just that easy, just start companies and become founders. So, how has that played out? Poorly. Why? Because the VCs have proven themselves to be smart at a game that very few people understand.

The venture-funded ecosystem in Silicon Valley is the first postmodern corporate organization. It chooses not to be, legally and formally, one company; instead, it’s a federation of about twenty marquee venture capital firms and the few hundred technology corporations that live and die and are bought (i.e. the founders get paid performance bonuses arranged by their VCs and people who owe favors to the VCs and work in large companies, and the startups are assimilated into those large companies) at their whim. More feudal than traditionally bureaucratic, “Silicon Valley” doesn’t have a clear president, CEO, or leader. It’s a fleet of a few hundred well-connected investors who all know each other and make decisions as a group, like an executive suite, but who work for nominally competing firms. Its main contribution to the business world is the notion of the disposable company. The core innovation of the VC-funded iteration of Silicon Valley has nothing to do with technology itself, but with the understanding that “companies” are just pieces of paper, that can be thrown away at convenience.

Cutting a division in a corporation is hard, because the company wants to retain some of the talent that’s within that division, but that makes the controversy over the decision persist. If you cut the self-driving car project and make an AI researcher work on “user stories” and answer to a Scrumlord, you have a pissed-off, very intelligent (and, therefore, probably quite articulate) person under your roof who will make of himself a constant reminder that things used to be better. On the other hand, the clean cut (that is, firing the whole division, cutting generous severance checks, and moving on) is seen as too brutal by the masses and too expensive by HR to be justifiable. The disposable company is the solution to this problem. In a single large company, cutting a division leaves the rest of the company to question your judgment, while attuned people in other departments start to wonder what fate has in store for them. On the other hand, an executive suite (VCs) running a fleet of disposable companies can just stop funding one of them and it, because of the massive burn rate that it needed to take on to meet your demands, runs out of money and dies.

The VC-funded dynamic also allows for title inflation. Middling product managers, put in charge of their own disposable companies, can be called “CEO”, while the actual executives have the away-from-the-action-seeming title of “investor”. This allows the people with actual power and status to build up extremes of power distance that seem innocuous. In a large company, executives who deliberately ruined a middle manager’s professional reputation would be accused of harassment and bullying, sued, and possibly terminated for the publicity risk brought on the company. In the VC-funded world, a founder who runs afoul of investors is blacklisted for it, but without consequences for investors. The cynic in me suspects that the appeal of the “acqui-hire” system is that it allows a bunch of “good ol’ boys” to circumvent HR policies: you can’t not-hire someone over a protected status, but you can not-fund her.

More importantly from an engineer’s perspective, the dishonest presentation of the career structure of the VC-funded world enables a brash, young male quixotry that investors believe (not for good reasons) is the key to technical innovation. The myth is, “you could become a founder and march to your own beat“. The reality is that “founder” is just a middle management title in a job that very occasionally delivers a large performance bonus. The cleverness behind all of this is that it manages to reframe what business is, in such a way that engineers are left with a severe distaste for it. First of all, VC-funded companies have a hilariously high rate of failure: possibly 80 to 90 percent. This is presented as normal business risk, but it’s not; the actual 5-year survival rate of new businesses is around 50 to 60 percent (which isn’t very different from the 5-year survival rate of any new job, these days; I’d love to work in a position where there were even odds that it’d be worth it to keep coming into work 5 years later) and many of those “failures” performed acceptably but didn’t offset the opportunity cost for the proprietor. The VCs want founders and peasant engineers to believe that it’s the nature of business to implode in fiery wreckage because it enables them to take massive risks with other peoples’ careers. Worse yet, VC-funded companies have a severe correlation risk, as anyone who was in technology in 2001 can attest. The rate of total business failure (and, thus, job loss, often without severance because the money literally isn’t there) is low during comfortable times, but spikes. When it does, the peasant engineer loses a job at the same time as many other people are losing their jobs. Second of all, in order to make the founder job look “too difficult” for the typical engineering peasant, the fundraising process has been made into a six-month, soul-raping ordeal. No one would ever tolerate a six-month interview process where breaches of ethics and privacy (such as back-channel reference checking) are considered normal, for a regular middle management position. It’s the dressing-up of the position as being something more– an “entrepreneur” rather than a head of a disposable company, responsible for continually managing up into the investor class– that makes it salable.

This entire system obscures lines of sight and it solidifies power for the entrenched. The best way to hold power is to convince those who don’t have it that they don’t want it, and that’s what the VCs have done. They’ve made the middle-management job– being a “founder”– so intolerable that it appeals only to the egotistical, convincing the peasants that they don’t want to rise but should accept their state of subordination. What’s more, this ruse hides the “effort thermocline” (the point at which jobs become less accountable and easier with increasing compensation and social position) by placing it not within a company but between firms: the founders live at that painful top-of-the-bottom point just below the effort thermocline, and investors get the easier life, above it. The line-of-sight, from an engineer’s point of view, is that you have to change jobs twice to get into the executive suite: first, you become a founder and a hustler and a champion fundraiser with little time for intellectual or technical enrichment; second you become an “investor” which, again, is a completely different and not-exciting-sounding job. For the programmer, the visible (again, we’re talking about lines of sight rather than actualities) path to social status and power is so fraught with peril and malice and career-wrecking personal risk (since investors can black-list insubordinate founders, to a degree that would be enforceably illegal in any other setting) that the direct path seems not worth taking.

Yet Silicon Valley is driven by engineers who genuinely believe that they’ll become part of The Business! Otherwise, they wouldn’t throw down the 90-hour weeks. If it seems like I’m being inconsistent, here, that’s not the case. I’m ascribing an inconsistent attitude. See, software engineers in the VC-funded world are smart enough to know that the average-case outcomes are undesirable and that the direct path to power requires selling one’s soul. Where they are misled is that they’re brought to believe in indirect paths to power that don’t actually exist. The engineer works 90-hour weeks because his company “is different” and because the founders promised him “introductions” to investors that will supposedly enable him to bypass the hell that plebes are put through when they try to raise money. Most Americans, for an analogy, rate “politicians” quite lowly, and yet they tend to think highly of their politicians (which is why the same people keep getting elected). As they tend to see it: “Congress” is awful; their Senators and Representatives, however, are good guys who fight the system for them. There’s a similar dynamic in the young, white/Asian, upper-middle-class male quixotry that powers Silicon Valley. These software engineers are cynical and smart enough to realize that most “corporate executives” are worthless parasites, but they rate their executives highly. Like the degenerate misogynist who puts a woman on a pedestal as soon as she smiles at him, they put their own executives/founders on their good sides because those people treat them with basic, superficial decency (while negotiating them into employment contracts with 3-year non-solicits and 0.05% equity in a post-A company). Not a subtle bunch, software engineers tend not to realize that actual corporate sociopaths aren’t like the flamingly obvious “cartoon asshole” bosses in the movies.

There’s more that I could say about this. I’m at 6.4 kilowords and I don’t know how to end this essay, but end it I should, because it’s gotten long enough already. In essence, Silicon Valley has managed to confuse a certain set of inexperienced but talented software programmers into a permanent clerk status, without them realizing what’s going on. With lines of sight obscured by disposable companies and social distractions (“the cool kids” and “30 under 30” lists) and various other machinations, software engineers have been led into accepting an arrangement (corporate clerkship, requiring total subordination but offsetting it by a small chance of selection into The Business proper) that they rejected, the last time it was presented to them. Like everything else that happened in California in its golden age, Silicon Valley has been commoditized and made into a brand, and it has been leveraged brilliantly to make a powerful set of people (specifically, talented software engineers) ignore their own interests. Talented (if politically naive) young people, mostly software engineers, who wouldn’t be caught dead in the typical corporate arrangement (the clerkship system, which is still the management model for large companies’ technology organizations) will gladly throw down 90-hour work weeks in exchange for 0.01% of someone else’s company (“but the founders promised me investor contact, and I know that they’ll deliver because the CEO is so nice to me!”) In reality, if they’re going to work that hard, they should figure out how to organize around their own interests, and win.

Java is Magic: the Gathering (or Poker) and Haskell is Go (the game)

It may be apocryphal, but there’s a parable in the Go (for this essay, I will never refer to Google’s programming language, so I’m talking about the ancient board game) community in which a strong player boasts about his victory over a well-known professional player, considered one of the best in the world. He says, “last month I finally beat him– by two points!” His conversation partner, also a Go player, is unimpressed. She says, “I’ve also played him, and I beat him by one point.” Both acknowledge that her accomplishment is superior. The best victory is a victory with control, and control to a margin of one point is the best.

Poker, on the other hand, is a game in which ending a night $1 up is not worthy of mention, unless the betting increment is measured in pennies. The noise in the game is much greater. The goal in Poker is to win a lot of money, not to come out slightly ahead. Go values an artful, subtle victory in which a decision made fifty moves back suffices to bring the one-point advantage that delivers a game. Poker encourages obliterating the opponents. Go is a philosophical debate where one side wins but both learn from the conversation. Poker is a game where the winner fairly, ethically, and legally picks the loser’s pocket.

Better yet, I could invoke Magic: the Gathering, which is an even better example for this difference in what kinds of victories are valued. Magic is a duel in which there are an enormous number of ways to humiliate your opponent: “burn decks” that enable you to do 20 points of damage (typically, a fatal sum) in one turn, “weenie decks” that overrun him with annoying creatures that prick him to death, land and hand destruction decks that deprive him of resources, and counterspell decks that put everything the opponent does at risk of failure. There are even “decking decks” that kill the opponent slowly by removing his cards from the game. (A rarely-triggered losing condition in Magic is that a player unable to draw a card, because his active deck or “library” has been exhausted, loses.) If you’re familiar with Magic, then think of Magic throughout this essay; otherwise, just understand that (like Poker) it’s a very competitive game that usually ends with one side getting obliterated.

If it sounds like I’m making an argument that Go is good or civilized and that Magic or Poker are barbaric or bad, then that’s not my intention, because I don’t believe that comparison makes sense, nor am I implying that those games are bad. The fun of brutal games is that they humiliate the loser in a way that is (usually) fundamentally harmless. The winner gets to be boastful and flashy; the loser will probably forget about it, and certainly live to play again. Go is subtle and abstract and, to the uninitiated, impenetrable. Poker and Magic are direct and clear. Losing a large pot on a kicker, or having one’s 9/9 creature sent to an early grave with a 2-mana-cost Terror spell, hurts in a way that even a non-player, unfamiliar with the details of the rules, can observe. People play different games for different reasons, and I certainly don’t consider myself qualified to call one set of reasons superior to any other.

Software

Ok, so let’s talk about programming. Object-oriented programming is much like Magic: there are so many optional rules and modifications available, many of them contradictory. There are far too many strategies for me to list them here and do them justice. Magic, just because its game world is so large, has inevitable failures of composition: cards that are balanced on their own but so broken in combination that one or the other must be banned by Magic’s central authority. Almost no one alive knows “the whole game” when it comes to Magic, because there are about twenty thousand different cards, many introducing new rules that didn’t exist when the original game came out, and some pertaining to rules that exist only on cards written in a specific window of time. People know local regions of the game space, and play in those, but the whole game is too massive to comprehend. Access to game resources is also limited: not everyone can have a Black Lotus, just as not everyone can convince the boss to pay them to learn and use a coveted and highly-compensated but niche technology.

In Magic, people often play to obliterate their opponents. That’s not because they’re uncivilized or mean. The game is so random and uncontrollable (as opposed to Go, with perfect information) that choosing to play artfully rather than ruthlessly is volunteering to lose.

Likewise, object-oriented programmers often try to obliterate the problem being solved. They aren’t looking for the minimal sufficient solution. It’s not enough to write a 40-line script that does the job. You need to pull out the big guns: design patterns that only five people alive actually understand (and four of those five have since decided that they were huge mistakes). You need to have Factories generating Factories, like Serpent Generators popping out 1/1 Serpent counters. You need to use Big Products like Spring and Hibernate and Mahout and Hadoop and Lucene regardless of whether they’re really necessary to solve the problem at hand. You need to smash code reviews with “-1; does not use synchronized” on code that will probably never be multi-threaded, and you need to build up object hierarchies that would make Lord Kefka, the God of Magic from Final Fantasy VI, proud. If your object universe isn’t “fun”, with ZombieMaster classes that immediately increment values of fields in all Zombies in the heap in their constructors and decrement those same fields in their finalizers, then you’re not doing OOP– at least, as it is practiced in the business world– right, because you’re not using any of the “fun” stuff.

Object-oriented programmers play for the 60-point Fireballs and for complex machinery. The goal isn’t to solve the problem. It’s to annihilate it and leave a smoldering crater where that problem once stood, and to do it with such impressive complexity that future programmers can only stand in awe of the titanic brain that built such a powerful war machine, one that has become incomprehensible even to its creator.

Of course, all of this that I am slinging at OOP is directed at a culture. Is object-oriented programming innately that way? Not necessarily. In fact, I think that it’s pretty clear Alan Kay’s vision (“IQ is a lead weight”) was the opposite of that. His point was that, when complexity occurs, it should be encapsulated behind a simpler interface. That idea, now uncontroversial and realized within functional programming, was right on. Files and sockets, for example, are complex beasts in implementation, but manageable specifically because they tend to conform to simpler and well-understood interfaces: you can read without having to care whether you’re manipulating a robot arm in physical space (i.e. reading a hard drive) or pulling data out of RAM (memory file) or taking user input from the “file” called “standard input”. Alan Kay was not encouraging the proliferation of complex objects; he was simply looking to build a toolset that enables people to work with complexity when it occurs. One should note that major object-oriented victories (concepts like “file” and “server”) are no longer considered “object-oriented programming”, just as “alternative medicine” that works is recognized as just “medicine”.
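
To make that concrete, here’s a minimal sketch in Haskell (the file name notes.txt is made up): the same line-counting function runs over a file on disk and over standard input, because both hide behind the Handle interface.

    import System.IO (Handle, IOMode (ReadMode), hGetLine, hIsEOF, stdin, withFile)

    -- countLines neither knows nor cares what is behind the Handle: a spinning
    -- disk, an SSD, a pipe, or a person typing at a terminal. The interface
    -- hides the machinery.
    countLines :: Handle -> IO Int
    countLines h = go 0
      where
        go n = do
          eof <- hIsEOF h
          if eof
            then return n
            else hGetLine h >> go (n + 1)

    main :: IO ()
    main = do
      fromFile  <- withFile "notes.txt" ReadMode countLines  -- robot arm in physical space
      fromStdin <- countLines stdin                          -- keystrokes from a user
      print (fromFile + fromStdin)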

In opposition to the object-oriented enterprise fad that’s losing air but not fast enough, we have functional programming. I’m talking about Haskell and Clojure and ML and Erlang. In them, there are two recommended design patterns: noun (immutable data) and verb (referentially transparent function) and because functions are first-class citizens, one is a subcase of the other. Generally, these languages are simple (so simple that Java programmers presume that you can’t do “real programming” in them) and light on syntax. State is not eliminated, but the language expects a person to actively manage what state exists, and to eliminate it when it’s unnecessary or counterproductive. Erlang’s main form of state is communication between actors; it’s shared-nothing concurrency. Haskell uses a simple type class (Monad) to tackle head-on the question of “What is a computational effect?”, one that most languages ignore. (The applications of Monad can be hard to tackle at first, but the type class itself is dead-boring simple, with two core methods, one of which is almost always trivial.) While the implementations may be very complex (the Haskell compiler is not a trivial piece of work) the computational model is simple, by design and intention. Lisp and Haskell are languages where, as with Go or Chess, it’s relatively easy to teach the rules while it takes time to master good play.
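
For reference, this is roughly what that type class amounts to; it’s a paraphrase for illustration rather than the exact Prelude definition, which in modern GHC also involves an Applicative superclass:

    {-# LANGUAGE NoImplicitPrelude #-}
    module MonadSketch where

    import Prelude (Maybe (..))

    -- A sketch of the Prelude's Monad class (the real one also has an
    -- Applicative superclass and defaults `return` to `pure`).
    class Monad m where
      return :: a -> m a                   -- wrap a plain value; almost always trivial
      (>>=)  :: m a -> (a -> m b) -> m b   -- feed one step's result into the next step

    -- One instance: Maybe models the effect "this step might produce nothing".
    instance Monad Maybe where
      return        = Just
      Nothing >>= _ = Nothing
      Just x  >>= f = f x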

While the typical enterprise Java programmer looks for an excuse to obliterate a simple ETL process with a MetaModelFactory, the typical functional programmer tries to solve almost everything with “pure” (referentially transparent) functions. Of course, the actual world is stateful and most of us are, contrary to the stereotype of functional programmers, quite mature about acknowledging that. Working with this “radioactive” stuff called “state” is our job. We’re not trying to shy away from it. We’re trying to do it right, and that means keeping it simple. The $200/hour Java engineer says, “Hey, I bet I could use this problem as an excuse to build a MetaModelVisitorSingletonFactory, bring my inheritance-hierarchy-record into the double-digits, and use Hibernate and Hadoop because if I get those on my CV, I can double my rate.” The Haskell engineer thinks hard for a couple hours, probably gets some shit during that time for not seeming to write a lot of code, but just keeps thinking… and then realizes, “that’s just a Functor“, fmaps out a solution, and the problem is solved.
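
Here’s a toy version of that moment; the User record and the “normalization” rule are invented for illustration:

    import Data.Char (toUpper)

    -- A made-up record type; imagine it fell out of some ETL step.
    data User = User { name :: String, email :: String } deriving Show

    normalize :: User -> User
    normalize u = u { email = map toUpper (email u) }

    -- The "that's just a Functor" moment: one optional user, a list of users,
    -- and an action that will eventually produce a user are all Functors, so
    -- the same fmap applies normalize inside each of them.
    fixOne   :: Maybe User -> Maybe User
    fixOne   = fmap normalize

    fixMany  :: [User] -> [User]
    fixMany  = fmap normalize

    fixLater :: IO User -> IO User
    fixLater = fmap normalize

    main :: IO ()
    main = print (fixMany [User "Ada" "ada@example.com"])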

While not every programmer lives up to this expectation at all times, functional programming values simple, elegant solutions that build on a small number of core concepts that, once learned, are useful forever. We don’t need pre-initializers and post-initializers; tuples and records and functions are enough for us. When we need big guns, we’ve got ’em. We have six-parameter hyper-general type classes (like Proxy in the pipes library) and Rank-N types and Template Haskell and even the potential for metaprogramming. (Haskell requires the program designer to decide how much dynamism to include, but a Haskell program can be as dynamic as is needed. A working Lisp can be implemented in a few hundred lines of Haskell.) We even have Data.Dynamic in case one absolutely needs dynamic typing within Haskell. If we want what object-oriented programming has to offer, we’ll build it using existential types (as is done to make Haskell’s exception types hierarchical, with SomeException encompassing all of them) and Template Haskell and be off to the races. We rarely do, because we almost never need it, and because using so much raw power usually suggests a bad design– a design that won’t compose well or, in more blunt terms, won’t play well with others.
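
As an aside, the existential trick behind SomeException is small enough to sketch. The class and types below are invented stand-ins, not the real Control.Exception machinery:

    {-# LANGUAGE ExistentialQuantification #-}

    -- An invented stand-in for the SomeException pattern: wrap any type that
    -- satisfies a class behind one concrete type, hiding which type it was.
    class Describable e where
      describe :: e -> String

    data SomeProblem = forall e. Describable e => SomeProblem e

    data DiskFull = DiskFull
    data Timeout  = Timeout Int          -- seconds waited

    instance Describable DiskFull where
      describe _ = "disk full"

    instance Describable Timeout where
      describe (Timeout s) = "timed out after " ++ show s ++ " seconds"

    -- One list can now hold heterogeneous "problems", much as SomeException
    -- unifies Haskell's exception hierarchy.
    problems :: [SomeProblem]
    problems = [SomeProblem DiskFull, SomeProblem (Timeout 30)]

    main :: IO ()
    main = mapM_ (putStrLn . describeProblem) problems
      where describeProblem (SomeProblem e) = describe e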

The difference between games and programming

Every game has rules, but Games (as a concept) has no rules. There’s no single unifying principle that every game must have. There are pure-luck games and pure-skill games; there are competitive games and cooperative games (where players win or lose as a group). There are games without well-defined objective functions. There are even games where some players have objective functions and some don’t, as with 2 Rooms and a Boom’s “Drunk” role. Thus, there isn’t an element of general gameplay that I can single out and say, “That’s bad.” Sometimes, compositional failures and broken strategies are a feature, not a bug. I might not like Magic’s “mana screw” (most people consider it a design flaw) but I could also argue that the intermittency of deck performance is part of what makes that game addictive (see: variable-schedule reinforcement, and slot machines) and that it’s conceivable that the game wouldn’t have achieved a community of such size had it not featured that trait.

Programming, on the other hand, isn’t a game. Programs exist to do a job, and if they can’t do that job, or if they do that job marginally well but can never be improved because the code is incomprehensible, that’s failure.

In fact, we generally want industrial programs to be as un-game-like as possible. (That is not to say that software architects and game designers can’t learn from each other. They can, but that’s another topic for another time.) The things that make games fun make programs infuriating. Let me give an example: NP-complete problems are those where checking a solution can be done efficiently but finding a solution, even at moderate problem size, is (probably) intractable. Yet, NP-complete (and harder) problems often make great games! Generalized Go is at least PSPACE-hard (and EXPTIME-complete under some rulesets), meaning that it’s (probably) harder than NP-complete, so exhaustive search will most likely never be an option. Minesweeper (Microsoft’s addictive puzzle game) is NP-complete in its consistency-checking form. Tetris and Sudoku are likewise computationally hard. (Chess is harder, in this way, to analyze, because computational hardness is defined in terms of asymptotic behavior and there’s no incontrovertibly obvious way to generalize it beyond the standard-issue 8-by-8 board.) It doesn’t have to be that way, because human brains are very different from computers, and so there’s no solid reason why a game’s NP-completeness (or lack thereof) would bear on its enjoyability to humans, yet the puzzle games that are most successful tend to be the ones that computers find difficult. Games are about challenges like computational difficulty, imperfect information (network partitions), timing-related quirks (“race conditions” in computing), unpredictable agents, unexpected strategic interactions and global effects (e.g. compositional failures), and various other things that make a human social process fun, but often make a computing system dangerously unreliable. We generally want games to have traits that would be intolerable imperfections in any other field of life. The sport of Soccer is one where one’s simulated life depends on the interactions between two teams and a tiny ball. Fantasy role-playing games are about fighting creatures like dragons and beholders and liches that would cause us to shit our pants if we encountered them on the subway because, in real life, even a Level 1 idiot with a 6-inch knife is terrifying.

When we encounter code, we often want to reason about it. While this sounds like a subjective goal, it actually has a formal definition. The bad news: reasoning about code is mathematically impossible. Or, more accurately, to ask even the simplest questions (“does it terminate?” “is this function’s value ever zero?”) about an arbitrary program in any Turing-complete language (as all modern programming languages are) is impossible. We can write programs for which it is impossible to know what they do, except empirically, and that’s deeply unsatisfying. If we run a program that fails to produce a useful result for 100 years, we still cannot necessarily differentiate between a program that produces a useful result after 100.1 years and one that loops forever.
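
A small, concrete illustration is the well-known Collatz iteration: nobody has proven that it terminates for every positive input, even though the function is only a few lines long.

    -- The Collatz ("3n + 1") iteration: whether this recursion terminates for
    -- every positive starting value is a famous open problem, even though the
    -- function is tiny. Small programs can already be beyond our ability to
    -- reason about fully.
    collatzSteps :: Integer -> Integer
    collatzSteps 1 = 0
    collatzSteps n
      | even n    = 1 + collatzSteps (n `div` 2)
      | otherwise = 1 + collatzSteps (3 * n + 1)

    main :: IO ()
    main = print (map collatzSteps [1 .. 20])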

If the bad news is that reasoning about arbitrary code is impossible, the good news is that humans don’t write arbitrary code. We write code to solve specific problems. Out of the entire space of possible working programs on a modern machine, less than 0.000000001 percent (with many more zeros) of possible programs are useful to us. Most syntactically correct programs generate random garbage, and the tiny subspace of “all code” that we actually use is much more well-behaved. We can create simple functions and effects that we understand quite well, and compose them according to rules that are likewise well-behaved, and achieve very high reliability in systems. That’s not how most code is actually written, especially not in the business world, the latter being dominated by emotional deadlines and hasty programming. It is, however, possible to write specific code that isn’t hard to reason about. Reasoning about the code we actually care about is potentially possible. Reasoning about randomly-generated syntactically correct programs is a fool’s errand and mathematically impossible to achieve in all cases, but we’re not likely to need to do that if we’re reading small programs written with a clear intention.
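
Here’s a small example of that style, in which every stage is a tiny pure function and the whole is just their composition (the sample sentence is arbitrary):

    import Data.Char (isAlpha, toLower)
    import Data.List (group, sort, sortBy)
    import Data.Ord (Down (..), comparing)

    -- Each stage is a small, total, pure function whose behavior is easy to
    -- state; the composition inherits that well-behavedness.
    wordCounts :: String -> [(String, Int)]
    wordCounts = sortBy (comparing (Down . snd))             -- most frequent words first
               . map (\ws -> (head ws, length ws))           -- (word, count); groups are never empty
               . group . sort                                -- cluster identical words
               . words                                       -- split on whitespace
               . map toLower
               . map (\c -> if isAlpha c then c else ' ')    -- strip punctuation

    main :: IO ()
    main = mapM_ print (wordCounts "The cat sat on the mat, and the cat purred.")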

So, we have bad news (reasoning about arbitrary code is formally impossible) and good news (we don’t write “arbitrary code”), but there’s more bad news. As software evolves, and more programmers get involved, all carrying different biases about how to do things, code has a tendency to creep toward “arbitrary code”. The typical 40-year-old legacy program doesn’t have a single author, but tens or hundreds of people who were involved. This is why Edsger Dijkstra declared the goto statement to be harmful. There’s nothing mathematically or philosophically wrong with it. In fact, computers use it in machine code all the time, because that’s what branching is, from a CPU’s perspective. The issue is the dangerous compositional behavior of goto– you can drop program control into a place where it doesn’t belong and get nonsensical behavior– combined with the tendency of long-lived, multi-developer programs using goto to “spaghettify” and reach a state where they are incomprehensible, reminiscent of a randomly-generated (or, worse yet, “arbitrary” under the mathematician’s definition) program. When Dijkstra came out against goto, his doing so was as controversial as anything that I might say about the enterprise version of object-oriented programming today– and yet, he’s now considered to have been right.

Comefrom 10

Where is this whole argument leading? First, there’s a concept in game design of “dryness”. A game that is dry is abstract, subtle, generally avoiding or limiting the role of random chance, and while the game may be strategically deep, it doesn’t have immediate thematic appeal. Go is a great game, and it’s also very dry. It has white stones and black stones and a board, but that’s it. No wizards, no teleportation effects, not even castling. You put a stone on the board and it sits there forever (unless the colony is surrounded and it dies). Go also values control and elegance, as programmers should. We want our programs to be “dry” and boring. We want the problems that we solve to be interesting and complex, but the code itself should be so elegant as to be “obvious”, and elegant/obvious things are (in this way) “boring”. We don’t want that occurrence where a ZombieMaster comes into play (or the heap) and causes all the Zombies to have different values in otherwise immutable fields. That’s “fun” in a game, where little is at stake and injections of random chance (unless we want a very-dry game like Go) are welcome. It’s not something that we want in our programs. The real world will throw complexity and unpredictability at us: nodes in our networks will fail, traffic will spike, and bugs will occur in spite of our best intentions. The goal of our programs should be to manage that, not to create more of it. The real world is so damn chaotic that programming is fun even when we use the simplest, most comprehensible, “driest” tools like immutable records and referentially transparent functions.

So, go forth and write more functions and no more SerpentGeneratorGenerators or VibratorVisitorFactory patterns.

Academia, the Prisoner’s Dilemma, and the fate of Silicon Valley

In 2015, the moral and cultural failure of American academia is viewed as a fait accompli. The job market for professors is terrible and will remain so. The academy has sold out two generations already, and shows no sign of changing course. At this point, the most prominent function of academia (as far as the social mainstream is concerned) isn’t to educate people but to sort them so the corporate world knows who to hire. For our society, this loss of academia is a catastrophe. Academia has its faults, but it’s too important for us to just let it die.

To me, the self-inflicted death of academia underscores the importance of social skills. Now, I’m one of those people who came up late in terms of social interaction. I didn’t prioritize it, when I was younger. I focused more on knowledge and demonstration of intelligence than on building up my social abilities. I was a nerd, and I’m sure that many of my readers can relate to that. What I’ve learned, as an adult, is that social skills matter. (Well, duh?) If you look at the impaired state that academia has found itself in, you see how much they matter.

I’m not talking about manipulative social skills, nor about becoming popular. That stuff helps an individual in zero-sum games, but it doesn’t benefit the collective or society at large. What really matters is a certain organizational (or, to use a term I’ll define later, coordinative) subset of social skills that, sadly, isn’t valued by people like academics or software engineers, and both categories suffer for it.

Academia

How did academia melt down? And why is it reasonable to argue that academics are themselves at fault? To make it clear, I don’t think that this generation of dominant academics is to blame. I’d say that academia’s original sin is the tenure system. To be fair, I understand why tenure is valuable. At heart, it’s a good idea: academics shouldn’t lose their jobs (and, in a reputation-obsessed industry, such a loss often ends their careers) because their work pulls them in a direction disfavored by shifting political winds. The problem is that tenure allowed the dominant, entrenched academics to adopt an attitude– research über alles— that hurt the young, especially in the humanities. Academic research is genuinely useful, whether we’re talking about particle physics or medieval history. It has value, and far more value than society believes that it has. The problem? During the favorable climate of the Cold War, a generation of academics decided that research was the only part of the job that mattered, and that teaching was grunt work to be handed off to graduate students or minimized. Eventually, we ended up with a system that presumed that academics were mainly interested in research, and that therefore devalued teaching in the evaluation of academics, so that even the young rising academics (graduate students and pre-tenure professors) who might not share this attitude still had to act according to it, because the “real work” that determined their careers was research.

The sciences could get away with the “research über alles” attitude, because intelligent people understand that scientific research is important and worth paying for. If someone blew off Calculus II but advanced the state of nuclear physics, that was tolerated. The humanities? Well, I’d argue that the entire point of humanities departments is the transmission of culture: teaching and outreach. So, while the science departments could get away with a certain attitude toward their teaching and research and the relative importance of each– a “1000x” researcher really is worth his keep even if he’s a terrible teacher– there was no possible way for humanities departments to pull it off.

To be fair, not every academic individually feels negatively about teaching. Many understand its importance and find it upsetting that teaching is so undervalued, but they’re stuck in a system where the only thing that matters, from a career perspective, is where they can get their papers published. And this is the crime of tenure: the young who are trying to enter academia are suffering for the sins of their (tenured, safe) predecessors.

Society responded to the negative attitude taken toward teaching. The thinking was: if professors are so willing to treat teaching as commodity grunt work, maybe they’re right and it is commodity grunt work. Then, maybe we should have 300 students in a class and we should replace these solidly middle-class professorships with adjunct positions. It’s worth pointing out that adjunct teaching jobs were never intended to be career jobs for academics. The purpose of adjunct teaching positions was to allow experienced non-academic practitioners to promote their professional field and to share experience. (The low salaries reflect this. These jobs were intended for successful, wealthy professionals for whom the pay was a non-concern.) They were never intended to facilitate the creation of an academic underclass. But, with academia in such a degraded state, they’re now filled with people who intended to be career academics.

Academia’s devolution is a textbook case of a prisoner’s dilemma. The individual’s best career option is to put 100% of his focus on research, and to do the bare minimum when it comes to teaching. Yet, if every academic does that, academia becomes increasingly disliked and irrelevant, and the academic job market will be even worse for the next cohort. The health of the academy requires a society in which the decision-makers are educated and cultured (which we don’t have). People won’t continue to pay for things that seem unimportant to them, because they’ve never been taught them. So, in a world where even most Silicon Valley billionaires can’t name seven of Shakespeare’s plays and many leading politicians couldn’t even spell the playwright’s name, what should we expect other than academia’s devolution?

Academia still exists, but in an emasculated form that plays by the rules of the corporate mainstream. Combine this with the corporate world’s own loss of vision and long-term thinking (the “next quarter” affliction) and we have a bad result for academia and for society as a whole. The first academics who created the “research über alles” culture doomed their young to a public that doesn’t understand their value, declining public funding, adjunct hell, and second and third postdocs. With the job market in tatters, professors became increasingly beholden to corporations and governments for grant money, and intellectual conformism increased.

I am, on a high level, on the side of the academics. There should be more jobs for them, and they should get more respect; they’re suffering for an attitude copped by their privileged antecedents in a different time, under different rules. A tenured professor in the 1970s had a certain cozy life that might have left him feeling entitled to blow off his teaching duties. He could throw 200 students into an auditorium, show up 10 minutes late, make it obvious that he felt he had better things to do than teach undergraduates… and it really didn’t matter to him that one of those students was a future state senator who’d defund his university 40 years later. Today, hasty teaching is more an effect of desperation than of arrogance, so I don’t hold it against the individual academic. I also believe that it is better to fix academia than to write it off. What exists that could replace it? I don’t see any alternatives. And these colleges and universities (at least, the top 100 or so most prestigious ones) aren’t going to go away– they’re too rich, and Corporate America is too stingy to train or to sort people– so we might as well make them more useful.

The need for coordinative action

Individuals cannot beat a prisoner’s dilemma. Coordination and trust are required to get a positive outcome. Plenty of academics would love to put more work into their teaching, and into community outreach and other activities that could increase the relevance and value assigned to their work, but they don’t feel they can compete with those who put a 100% focus on research and publication (regardless of the quality of the research, because getting published is what matters). And they’re probably right. They’re in a broken system, and they know it, but opposing it is individually so damaging, and the job market so competitive, that almost no one can do anything but take the individually beneficial action.

Academic teaching suffers from the current state of affairs, but the quality of research is impaired as well. It might have made sense, for individual benefit, for a tenured academic in the 1970s to blow off teaching. But this, as I’ve discussed, only led society to undervalue what was supposed to be taught. Academia’s state is now so bad that researchers spend an ungodly amount of time begging for money. Professors spend so much time fundraising that many of them no longer perform research themselves; they’ve become professional managers who raise money and take credit for their graduate students’ work. To be truthful, I don’t think this dynamic is malicious on the professors’ part. It’s pretty much impossible to put yourself through the degrading task of raising money and do creative work at the same time. It’s not that they want to step back and have graduate students do the hard work; it’s that most of them can’t do both, due to external circumstances that they’d gladly be rid of.

If “professors” were a bloc that could be ascribed a meaningful will, it’s possible that this whole process wouldn’t have happened. If they’d perceived that devaluing teaching in the 1970s would lead to an imploded job market and funding climate two decades later, perhaps they wouldn’t have made the decisions that they did. Teach now, or beg later. Given that pair of choices, I’ll teach now. Who wouldn’t? In fact, I’m sure that many academics would love to put all the time and emotional energy wasted on fundraising into their teaching instead, if doing so would solve the money problem now rather than 30 years from now. But the tenure system allowed a senior generation of academics to run up a social debt and hand their juniors the bill, and now academia’s stuck in a shitty situation that it can’t work its way out of. So what can be done about it?

Coordinative vs. manipulative social skills

It’s well understood that academics have poor social skills. By “well understood”, I don’t mean that it’s necessarily true; I mean it’s the prevailing stereotype. Do academics lack social skills? To answer this question, I’m going to split “social skills” into three categories. (There are certainly more, and these categories aren’t necessarily mutually exclusive.) The categories are:

  • interpersonal: the ability to get along with others, be well-liked, make and keep friends. This is what most people think of when they judge another person’s “social skills”.
  • coordinative: the ability to resolve conflicts and direct a large group of people toward a shared interest.
  • manipulative: the ability to exploit others’ emotions and get them to unwittingly do one’s dirty work.

How do academics stack up in each category? In terms of interpersonal social skills, I think academics follow the standard trajectory of highly intelligent people: severe social difficulty when young that is worst in the late teens and resolves (mostly) in the mid- to late 20s. Why is this so common a pattern? There’s a lot I could say about it. (For example, I suspect that the social awkwardness of highly intelligent people is more likely to be a subclinical analogue of a bipolar-spectrum disorder than a subclinical variety of autism/Asperger’s.) Mainly, it’s what I’d call the “20% Time Effect”, named in honor of Google. That 10- or 20-percent social deficit typical of highly intelligent people (whether you attribute it to altered consciousness, via a subclinical bipolar or autism-spectrum disorder, or to simply having other interests) is catastrophic in adolescence but a non-issue in adulthood. A 20-year-old whose social maturity is that of a 17-year-old is a fuckup; a 40-year-old with the social maturity of a 34-year-old would fit in just fine. Thus, I think that, by the time they’re on the tenure track (age 27-30+), most professors are relatively normal when it comes to interpersonal social abilities. They’re able to marry, start families, hold down jobs, and create their own social circles. While it’s possible that an individual-level lack of interpersonal ability (microstate) is the cause of the continuing dreadful macrostate that academia is in, I doubt it.

What about manipulative social skills? Interpersonal skills probably follow a bell curve, whereas manipulative social skill seems to have a binary distribution: there are civilians, who lack it completely, and there are psychopaths, who are murderously good at turning others into shrapnel. Psychopaths exist in academia, as they do everywhere, and they are probably neither more nor less common there than in other industries. Since academia’s failure is the result of a war waged on it by external forces (politicians devaluing and defunding it, and corporations turning it toward their own coarser purposes), I think it’s unlikely that academia is suffering from an excess of psychopaths within its walls.

What academia is missing is coordinative social skill. It has been more than 30 years since academia decided to sell out its young, and the ivory tower has not managed to fix its horrendous situation and reverse the decline of its relevance. Academia has the talent, and it has the people, but it doesn’t have what it takes to get academics working together to fight for their cause, and to reward the outreach activities (and especially teaching) that will be necessary if academia wants to be treated as relevant, ever again.

I think I can attribute this lack of coordinative social skill to at least two sources. The first is an artifact of having poor interpersonal skills in adolescence, which is when coordinative skills are typically learned. This can be overcome, even in middle or late adulthood, but it generally requires that a person reach out of his comfort zone. Interpersonal social skills are necessary for basic survival, but coordinative social skills are only mandatory for people who want to effect change or lead others, and not everyone wants that. So, one would expect that some number of people who were bad-to-mediocre, interpersonally, in high school and college, would maintain a lasting deficit in coordinative social skill– and be perfectly fine with that.

The second is social isolation. Academia is cult-like. It’s assumed that the top 5% of undergraduate students will go on to graduate school. Except for the outlier case in which one is recruited for a high-level role at the next Facebook, smart undergraduates are expected to go straight in. Then, to leave graduate school (which about half do, before the PhD) is seen as a mark of failure. Few students actually fail out for lack of ability (if you’re smart enough to get in, you can probably do the work), but a much larger number lose motivation and give up. Leaving after the PhD for, say, finance is also viewed as distasteful. Moreover, while it’s possible to resume a graduate program after a leave of absence or to join one after a couple years of post-college life, those who leave the academic track at any point after the PhD are seen as damaged goods, and unhireable in the academic job market. They’ve committed a cardinal sin: they left. (“How could they?”) Those who leave academia are regarded as apostates, and people outside of academia are seen as intellectual lightweights. With an attitude like that, social isolation is expected. People who have started businesses and formed unions and organized communities could help academics get out of their self-created sand trap of irrelevance. The problem is that the ivory tower has such a culture of arrogance that it will never listen to such people.

Seem familiar?

Now, we focus on Silicon Valley and the VC virus that’s been infecting the software industry. If we view the future as linear, Silicon Valley seems to be headed not for irrelevance or failure but for the worst kind of success. Of course, history isn’t linear, and no one can predict the future. I know what I want to happen. As for what will, and when? Some people thought I made a fool of myself when I challenged a certain bloviating, spoiled asshat to a rap duel– few people caught on to the logic of what I was doing– and I’m not going to risk making a fool of myself again by making predictions.

Software engineers, like academics, have a dreadful lack of coordinative social skill. Not only that, but the Silicon Valley system, as it currently exists, requires that lack. If software engineers had the collective will to fight for themselves, they’d be far better treated and would be running the place, and it would be a much better world overall– though the current VC kingmakers wouldn’t be happy. Unfortunately, the Silicon Valley elite has done a great job of dividing makers on all sorts of issues: gender, programming languages, the H-1B program, and so on… all the while, the well-connected investors and their shitty paradrop executive friends make tons of money while engineers get abused– and respond by abusing each other over bike-shed debates like code indentation. When someone whose only qualification is having gone to high school with a lead investor is getting a $400k-per-year VP/Eng job and 1% of the equity, and engineers are getting 0.02%, who fucking cares about tabs versus spaces?

Is Silicon Valley headed down the same road as academia? I don’t know. The analogue of “research über alles” seems to be a strange attitude that mixes young male quixotry, open-source obsession– and I think that open-source software is a good thing, but less than 5% of software engineers will ever be paid to work on it, and not everyone without a GitHub profile is a loser– and crass commercialism couched in allusions to mythical creatures. (“Billion-dollar company” sounds bureaucratic, old, and lame; “unicorn” sounds… well, incredibly fucking immature if you ask me, but I’m not the target market.) If that culture seems at odds with itself, that’s an accurate perception. It’s intentionally muddled, self-contradictory, and needlessly divisive. The culture of Silicon Valley engineering is one created by the colonial overseers, not by the engineers. Programmers never liked open-plan offices and still don’t, and “Scrum” (at least, Scrum in practice) is just a way to make micromanagement sound “youthy”.

For 1970s academia, there was no external force trying to ruin it or (as has been done with Silicon Valley) turn it into an emasculated colonial outpost for the mainstream business elite. Academia created its own destruction, and the tenure system allowed it by enabling the arrogance of the established, which ruined the job prospects of the next generation. It was, I would argue, purely a lack of coordinative social skill, brought on by cult-like social isolation, that did this. Silicon Valley, though, was destroyed (and so far, the destruction is moral but not yet financial, insofar as money is still being made, just by the wrong people) intentionally. We need only examine one dimension of social skill– the lack of coordinative skill– to understand academia’s decline. In Silicon Valley, there are two at play: the lack of coordinative social skill among the makers who actually build things, and the manipulative social skills deployed by psychopaths, brought in by the mainstream business culture, to keep the makers divided over minutiae and petty drama. What this means, I am just starting to figure out.

Academia is a closed system and largely wants to be so. Professors, in general, want to be isolated from the ugliness of the mainstream corporate world. Otherwise, they’d be in it, making three times as much money on half the effort. However, the character of Silicon Valley makers (as opposed to the colonial overseers) tends to be ex-academic. Most of us makers are people who were attracted to science and discovery and the concept of a “life of the mind”, but left the academy upon realizing its general irrelevance and decline. As ex-academics, we simultaneously have an attitude of rebellion against it and a nostalgic attraction to its better traits, including its “coziness”. What I’ve realized is that the colonial overseers of Silicon Valley are very adept at exploiting this. Take the infantilizing Google culture, which provides ball pits and free massages (one per year) but has closed allocation and Enron-style performance reviews. Google, knowing that many of its best employees are ex-academics– I consider grad-school dropouts to be ex-academic– wants to create the cult-like, superficially cozy world that lets people stop asking the hard questions or putting themselves outside their comfort zones (which seems to be a necessary prerequisite for developing or deploying coordinative social skills).

In contrast to academia, Silicon Valley makers don’t want to be in a closed system. Most of these engineers want to have a large impact on the world, but a corporation can easily hack them (regardless of the value of the work they’re actually doing) by simply telling them that they’re having an effect on “millions of users”. This enables them to get a lot of grunt work done by people who’d otherwise demand far more respect and compensation. This ruse is similar to a cult that tells its members that large donations will “send out positive energy waves” and cure cancer. It can be appealing (and, again, cozy) to hand one’s own moral decision-making over to an organization, but it rarely turns out well.

Fate

I’ve already said that I’m not going to try to predict the future, because while there is finitude in foolishness, it’s very hard to predict exactly when a system runs out of greater fools. I don’t think that anyone can do that reliably. What I will do is identify points of strain. First, I don’t think that the Silicon Valley model is robust or sustainable. Once its software engineers realize on a deep level just how stacked the odds are against them– that they’re not going to be CEOs inside of 3 years– it’s likely either to collapse or to be forced to evolve into something that has an entirely different class of people in charge of it.

Right now, Silicon Valley prevents an engineer awakening through aggressive age discrimination. Ageism is yet another trait of software culture that comes entirely from the colonial overseers. Programmers don’t think of their elders as somehow defective. Rather, we venerate them. We love taking opportunities to learn from them. No decent programmer seriously believes that our more experienced counterparts are somehow “not with it”. Sure, they’re more expensive, but they’re also fucking worth it. Why does the investor class need such a culture of ageism to exist? It’s simple. If there were too many 50-year-old engineers kicking around the Valley– engineers who, despite being highly talented, never became “unicorn” CEOs, either because of a lack of interest or because CEO slots are still quite rare– then the young’uns would start to realize that they, too, were unlikely enough to become billionaires from their startup jobs that the 90-hour weeks couldn’t be justified. Age discrimination is about hiding the 50th-percentile future from the quixotic young males that Silicon Valley depends on for its grunt work.

The problem, of course, with such an ageist culture is that it tends to produce bad technology. If there aren’t senior programmers around to mentor the juniors and review the code, and if there’s a deadline culture (which is usually the case), then the result will be a brittle product, because the code quality will be so poor. Business people tend to assume that this is fixable later on, but often it’s not. First, a lot of software is totaled, by which I mean it would take more time and effort to fix it than to rewrite it from scratch. Of course, the latter option (even when it is the sensible one) is so politically hairy as to be impractical. What often happens, when a total rewrite (embarrassing the original architects) is called for, is that the team that built the original system throws so much political firepower (justification requests, legacy requirements that the new system must obey, morale sabotage) at it that the new-system team works under even tighter deadlines and suffers from more communication failures than the original team did. The likely result is that the new system won’t be any good either. As for maintaining totaled software for as long as it lives, those become the projects that no one wants to do. Most companies toss legacy maintenance to their least successful engineers, who are rarely people with the skills to improve it. With these approaches blocked, external consultants might be hired. The problem there is that, while some of these consultants are worth ten times their hourly rate, many expensive software consultants are no good at all. Worse yet, business people are horrible at judging external consultants, while the people who can judge them (senior engineers) have a political stake and will therefore, in evaluating and selecting external code fixers, be affected by the political pressures on them. The sum result of all of this is that many technology companies built under the VC model are extremely brittle, and “technical debt” is often impossible to repay. In fact, “technical debt” is one of the worst metaphors I’ve encountered in this field. Financial debt has a known interest rate, usually between 0 and 30 percent per year; technical debt has a usurious and unpredictable interest rate.
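
To illustrate why the metaphor breaks down, here’s a minimal sketch with invented numbers (the 10% rate and the 0–150% range are assumptions for illustration, not measurements): a fixed-rate loan compounds predictably, while a “debt” whose rate is unknown and occasionally usurious can blow up far past anything a lender would be allowed to charge.

```python
# Illustrative only: invented numbers comparing predictable financial interest
# with the unpredictable "interest" on technical debt.

import random

def compound(principal: float, rates) -> float:
    """Apply a sequence of per-period interest rates to a principal."""
    for r in rates:
        principal *= 1 + r
    return principal

random.seed(0)  # fixed seed so the illustration is reproducible
years = 5

financial = compound(100.0, [0.10] * years)             # known 10% per year
technical = compound(100.0, [random.uniform(0.0, 1.5)   # anywhere from 0% to 150% per year,
                             for _ in range(years)])    # and you don't know which in advance

print(f"Financial debt after {years} years: {financial:.0f}")  # ~161
print(f"Technical debt after {years} years: {technical:.0f}")  # often many times larger
```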

So what are we seeing, as the mainstream business culture completes its colonization of Silicon Valley? We’ve seen makers get marginalized, we’ve seen an ageism that is especially cruel because it takes so many years to become any good at programming, and we’ve seen increasing brittleness in the products and businesses created, due to the colonizers’ willful ignorance of the threat posed by technical debt.

Where is this going? I’m not sure. I think it behooves everyone who is involved in that game, however, to have a plan should that whole mess go into a fiery collapse.