“Did you call [female tech personality] a cunt?”

The answer is: No. Of course not.

So… just why was I ever asked such an odd question?

 

A few months ago on Y Combinator’s forum, Hacker News, I pulled a word out of Middle English: queynte. This word is neither profane nor sexual. It’s not used much today, but was pronounced identically to its extant adjectival form, “quaint”. The most faithful modern translation of the noun would probably be “ornament” or “device”.

Saying queynte would have been a non-event if I were a civilian. However, my half decade of opposition to Silicon Valley’s social injustices has made me something of a public figure, and it has taken some time for me to adapt to that. Everything I say now gets extra attention. There are a number of people who would just love for me to fuck up completely and say something that’s actually offensive, because it would undermine the moral credibility that I have and that they, despite being the social and economic leaders of the technology industry, don’t have and (because of their unethical actions) never will have.

As my reach and publicity grow, I have to be increasingly mindful of the ways in which a statement– any statement I make, really– can be taken out of context. I’m still getting used to this.

The word queynte is phonetically close to cunt. I don’t shy away from profanity in general, but cunt is used in a number of ways, some of which are extremely offensive. So I tend to avoid it.

Did I mean the word queynte as any kind of slur against anyone’s gender? Of course not. (If anyone ever thought otherwise, then I’m truly sorry.)

Do I consider it acceptable to demean people simply because they are women? No, and if that isn’t obvious from everything else that I’ve written, then I’m embarrassed.

Did I anticipate that a hardcore misogynist like Dan Gackle would take it as his excuse to start throwing around the word “cunt”? (He’s edited his reply considerably, and now seems to be taking a mansplaining stand against that word.) No. Perhaps I should have expected that, but I didn’t.

Should I have avoided “queynte”, seeing how open it is to misinterpretation, and used a different word (like “jerk”) in its place? Yes. I think we can all agree on that.

I have asked Mr. Gackle, multiple times, to remove both my misconstrued comments and his belligerent ones. He declined on each occasion.

That’s all that I’m going to say on this topic.

 

Actually, I should say one more thing.

Dan Gackle tried to ruin my reputation. I can’t let that slide. I’m trying really hard to get out of the internet drama business, but sometimes a haughty fucker needs to get put in his place, and this is one of those times.

I don’t wish to waste breath on Dan Gackle, though. He’s not interesting. I mean, his pompous, sniping sanctimony makes him a negative presence on Hacker News, but he’s not exactly relevant in the real world. Moreover, since no one likes him to begin with, there’s also not much of a point in taking him down; no opinions really need to be changed. So let’s talk about the creeps who sent him: Y Combinator (“YC”).

It’s time for Y Combinator to die in a taint fire. Y Combinator has ruined startups for at least a generation. Its growth-at-any-cost ethos has enabled cultural sloppiness and ethical turpitude to reach shameful extremes, it has generated some of the worst companies of the 21st century, and it has created a culture of ageism, sexism, racism and classism that has made technology people look like the worst people in the world– all of this on no less than a global stage.

Y Combinator’s main product is division. In fact, that’s its only meaningful purpose. Y Combinator is a bona fide rat’s nest of catty drama, backbiting, lapses of professionalism, junior-high antics, and unjustified sanctimony. It exists to pit against each other the people doing the actual work, for the benefit of Paul Graham and the little boys whom he’s chosen as his protégés.

Some of the divisions that YC is most eager to create and exploit are: founders versus employees; in-crowds versus out-crowds; “brogrammers” versus women; young “hackers” versus “old hands” (meaning 30 and up); Rubyists versus Java users versus Lispers; West Coast versus East Coast versus what they call “flyover country”; citizens versus “H-1Bs”. Why? Many of the top people in Silicon Valley are terrified of any sense of collective identity emerging among the working classes. Unions, guilds, professional organizations… they don’t want any of that, and they punish people severely for even suggesting such ideas. In order to make it less likely that such institutions are created, they divide programmers against each other along any cleavage they can find.

This probably isn’t surprising, but real venture capitalists don’t like Y Combinator. The YC people are considered, by the rest of Silicon Valley, to be Northern California’s version of Donald Trump.

Still, Y Combinator is permitted to exist. The real venture capitalists could kill YC if they wanted to, just by refusing to fund or purchase any of its excreta. It wouldn’t be hard and it wouldn’t take long. So why don’t they? Why do they let it live another day?

The answer is that, right now, Y Combinator is more effective at dividing labor against itself than any other organization in the technology industry. The more progressive venture capitalists (who often privately oppose the divisive, exclusionary, and anti-intellectual culture that has become dominant, in venture-funded technology, over the past ten years) just refuse to work with the YCs, for that reason. The ones with a strong unionbusting impulse, on the other hand, recognize that Y Combinator is uncannily effective at creating the kinds of bitter sectarian divisions that keep technologists and makers from organizing around shared economic, social, or political interests.

If you want to go after Y Combinator, don’t just create another incubator. That market is flooded; you won’t win. Instead, defeat (or “disrupt”) its culture of arrogance, immaturity, anti-intellectualism, and divisiveness. Build something that unites, rather than something that divides.

-32-

I’ve had a lot of people ask me recently about why I’ve taken down all of my old posts.

Yes, I will probably republish the ones that are of highest quality, possibly with some editing for clarity and brevity. I’ll keep writing, but it’s time to move away from the identity of being “a blogger” and especially that of being “a tech blogger” (ugh).

Unless there’s a really good reason to do so, I don’t intend to blog for a while.

There are a number of reasons, and I’ll put them roughly into these categories:

  • winning gracefully.
  • personal growth.
  • capabilities and identity.
  • separating and streamlining.

Winning gracefully

For exposing unethical behaviors in Silicon Valley, I’ve had everything from (a) death threats, to (b) attempts (albeit unsuccessful) to bribe employers into firing me, to (c), most publicly, being banned from Quora under false pretenses. In fact, right now there’s a man named “Scott Welch” being paid to slander me on Quora.

The Paul Grahams of the world are still rich, and they still control thousands of jobs, but they’re not loved. They want to be perceived not merely as wealthy businessmen, but as public intellectuals, visionary statesmen, and paragons of virtue. I am the person they blame most for their failure to achieve this status. They should blame themselves. Still, in the version of history that these people believe, I made them fail at getting what they really wanted. That is, of course, why they’ve come after me.

It’s pretty clear that I’ve won. I broke their momentum. If I weren’t having an effect on how these people are perceived, they’d just ignore me. I’ve exposed so many unethical business practices that billionaire venture capitalists shit their pants when they see my name. That’s no small achievement.

If I go any further down that line, though, I’ll just be running up the score. I’ve proven everything that I intended to prove, several times over. Besides, given that the prizes come in such lovely packages as death threats, I’m finding it hard to convince myself that more of this kind of winning is desirable. There are other forms of winning that I might grow to prefer.

Personal growth

Should it really be a person’s goal to make the bad guys fail? Now that I’m older, I think that there’s more nobility in helping the good people succeed. Overthrow is often overrated. It’s better to replace than oppose. Besides, we don’t need one more person taking down Silicon Valley. It’s going to fall, all on its own. Instead, we need people who can focus on building things that are better, and that’s what I need to become.

With the next phase of my life, I’d like to focus on helping the good people succeed, rather than on making the bad guys fail. That can be harder, but it’s a lot more interesting to me.

Capabilities and identity

There are a number of things that I’m very good at. Most of these, to boot, are unlikely to bring death threats.

  • Writing, with successes in fiction, non-fiction essays, comedy and technical writing.
  • Teaching and presentation. I’ve taught undergraduate math courses, Clojure to a team of ten people, and a two-month course on Haskell.
  • Programming in languages including Python, C, Clojure, OCaml, Scala and Haskell. I know how to use statically-typed languages (e.g. Haskell, OCaml) to bring runtime bug rates near zero.
  • Concurrent and distributed programming.
  • Machine learning: not just how to use existing tools, but the mathematics and CS behind them.
  • Game design. I designed the card game Ambition and I’ve given courses on game theory’s applicability to high-frequency trading.
  • “Low level” (that is, very detailed and precise, enabling performance and control) programming.
  • Mathematics and statistics.
  • Architecture of high-reliability systems.
  • Company culture. What’s good, what’s bad, and how to fix it when things go off the rails.
  • How to market a company, job, or product to people at very high levels of talent.
  • The economics of software and of technology hiring.

I’m not going to claim to be an expert in all of those topics. In fact, I’m not sure that I’d call myself “an expert” in any of them. (Once you achieve what you once thought was expertise, you gain a respect for how much further there is to go.) I’ll say that I’ve had success in each thing that I’ve listed, and that there are people less expert than me, in every single one of those fields, earning hundreds (and, in some cases, thousands) of dollars per hour as consultants.

Let me indulge in the sin of honest self-perception: I’m very good at a lot of things. What I’ve become known for, though, is exposing unethical business practices in Silicon Valley. (I was good at that, too.) That was important work and I’m not ashamed of it, but I no longer wish to be known for it. I don’t want to be “the guy who does” that. I certainly don’t want people to think of me as “a tech blogger” rather than as a writer or a programmer. I’m ready for a new phase of my life.

Separating and streamlining

I’m a writer. I’ve put about 30 million words on the Internet, to this point. (This includes a lot of work– probably most of it– that was never published.) I’d guess that 22.5 million of those were junk. (I had a trolling habit for longer than I care to admit.) Now that I’m older, I’d like to have better efficiency.

If I can finish Farisa’s Courage before the end of 2016, that should mean more to me than whether I get another thousand Twitter followers.

No one doubts that I’m obnoxiously capable and that I work very hard. I don’t have to prove myself to anyone. My concerns have more to do with what I need to prove to myself. It’s really easy, if you spend too long in the corporate world, to lose the sense of yourself as a creative person. I’ve been working, for the past few months, on getting that back.

On the external front, there are two other things that I need to do.

First, I need to separate my lines of business. Whether I’m employed full-time or a consultant, my product is expertise and extreme competence. I need to draw a distinction between my core competencies and other, less marketable, expertise.

Mixing game design and creative fiction and Haskell and organizational dynamics into one torrent (in which, and this is completely my fault, the Silicon Valley bashing has been the most dominant stream) is not very effective. I used to think that it made my blog “eclectic”. In fact, it makes my communication less effective. I need to break these sources apart and handle each one as best suits it.

Second, I need to streamline. It’s not that I have a “good” or “bad” reputation. I have this really complex, millions-of-words presence that just has too much surface area for one person to defend. I can’t manage all of that. It’s time to focus on clarity and quality, because size has become a liability rather than a show of strength (which, I’d argue, I no longer need to show).

What else?

I’ll keep writing essays, and probably even share a few, and I’m likely to bring back the best pieces (edited for clarity and shortened) over the next few months.

Why am I eager to separate my domains of competence into separate products, to streamline my public image, and to move away from the “exposing Silicon Valley’s ethical failures” line of business? And why now? One thing that has happened over the past few months is that I’ve been investigating a business opportunity. It’s bootstrappable, genuinely useful, and would require me to take a position of genuine leadership.

If that comes to pass, I’m going to have to pay more attention to what I put out into the world. What I say in public will be yet another thing that, as I age, is no longer just about me. No one can cow me into silence (and many have tried), but I have to be more careful, on my own behalf, as I move into high-responsibility roles where everything I say will be taken a lot more seriously than I’m used to.

As I said, I’m shifting in approach to a focus on helping the good people in technology succeed, rather than trying to make the bad actors fail. In practice, it’s hard to work on both at the same time. I’m starting to think that I have to choose, and the choice is obvious.

That’s all I’ve got for now. Play Ambition and carry on.


More fiction, and 2016

After putting out the first chapter of “The Struggles”, a novella set in Silicon Valley, I’ve had a couple of requests come in about a more serious project that I had let a few people know about. I’m hesitant to share this, but… eh, what the fuck.

The extremely tentative title (as in, I haven’t come up with a better one yet) is Farisa’s Courage. (If you hate it, read and then suggest a better one.) The concept and main character came to me in 2013-14, and I’ve finally developed enough courage of my own to give writing this story (which is much harder to tell than a satirical one about Silicon Valley) a try. I’ve got about 120 pages “done” (although that means so little in fiction, because everything must be re-done several times before it is good) and I have a few chapters that are probably ready to be shared. Unlike my Silicon Valley novella, “The Struggles”, this project is a lot closer to me, and it’ll probably take at least a couple of years. In truth, “The Struggles” is mostly a warmup round, to sharpen old tools. Farisa La’ewind’s story (whatever I end up calling it, in the end) is one that I have more emotional investment in telling right, because there’s a message in it (at least one) that’s worth getting out to the world. (That message probably isn’t obvious in the two chapters given here. Sorry about that.)

Like everything else, chapter numbers and ordering are very tentative. Below are what will probably be chapters 1 and 3.

I have no idea if any of this is any good. If it’s not, that just means the project will take more work and time. If it’s five years before I’m ready to write this work, then I’ll have to wait.

That brings me to 2016. I don’t like to talk about “resolutions” until I’ve actually achieved something toward them, but my goal for this year is to create. On the downfall of Silicon Valley and its ridiculous “unicorns”, I believe that “my team” is starting to win… and when the easy money goes out, so will the professional omertà that’s keeping a bunch of unethical founders’ and investors’ secrets under wraps, and a bunch of currently powerful people are going to have egg and worse on their faces. I doubt that I had all that much to do with it, but I played a role and I’m happy with that. I made it acceptable for the most talented people to admit, in the open, that Silicon Valley is not a meritocracy. I’ve helped to de-legitimize the VC-funded startup scene as anything other than money laundering for well-connected children of the existing corporate elite, and I’ve made some prominent people (hi, Paul Buchheit!) very angry in doing so. That’s good. It needs to be torn down. I’m confident now, however, that the process is running on its own momentum (regardless of whether or not I had much of anything to do with it) and that I can step aside and things will go just fine.

By the same token, I’m getting older. I’m 32 now. I don’t feel like I’ve changed much, physically speaking. If anything, I’m probably in better shape. Certainly, though, I’m more aware of my mortality. What comes with that is an increasing selectivity in how I spend my time. Tearing down rotting social edifices, like Silicon Valley, is noble work. I’m just not willing for it to be the only thing that I do. On my deathbed, I don’t want “Participant in 2012-17 Silicon Valley Teardown” to be my only accomplishment. Besides, while I’ve managed to block some of these people, especially the YC partners, from getting what they want more than anything (being loved), I’m pretty sure that I’ve not made a dent in their financial well-being. They are still rich, and I am still not.

Programming is a very powerful creative skill. It gives a person orders of magnitude more ability to implement her own ideas. That, I think, is what draws so many people (including myself) into it. This makes it such a hurtful, perverse irony that the tech industry has become what it now is: a corporatized, drama-ridden hellscape driven by petty feuds and pathological risk aversion (read: half-balled cowardice) in its leadership class. The zero-sum thinking that I encounter on Hacker News and TechCrunch is something that I take as a warning of what I’ll become if I leave my heart in the startup industry for too long. I’m built to create, not to jockey for position in some macho-subordinate idiots’ game. Realistically speaking, I’ll probably be doing the latter for some time, because the corporate fucks have almost all of the money, but it’s not worth putting my heart into. Not at this age.

I don’t know where I’m going. What I do know, or think that I know, is that the reason for such widespread unhappiness in the U.S. and in the world is that we’ve deprived ourselves of creative process, which we’ve replaced with a constant search for approval and nonsensical “metrics”. It exists in personal life (see: social media) and the corporate world, and it all gets emptier every year. I don’t know how to solve it, and I’m thankful that I can say that I haven’t played much of a role in making the situation worse. At some point, we’ll all tire of this emptiness and get back to reading and writing and creating, one hopes. In any case, before I can solve this whole problem for the world, I need to solve it for myself.

Is it OK to enjoy the deaths of “unicorns”?

It will happen, within a year or few: the era of “unicorpses”. Startups currently valued at billions of dollars will collapse. Some will end entirely, while others will hit valuations of pennies on the dollar compared to their peak values. It’s coming. When? I honestly have no idea. It could happen later in 2016; it could be as late as 2020.

When it comes, I’ll enjoy it. I’m not ashamed of this. Yes, I generally believe, as a human being, that schadenfreude (joy in others’ misery) is a bad thing. I generally don’t wish for other people to fail. I wouldn’t laugh if I saw a stranger slip on the ice and fall. I’d help him up. That said, there are those who deserve to fail and, worse yet, there are those who must fail in order to make space for the good.

I came across this article about schadenfreude: Why Everyone Wants You To Fuck Up. The takeaway is, and I mean this with respect because it could just as easily apply to me: this guy has been in the tech industry for too long.

It’s worthwhile to distinguish several kinds of wanting someone to fail. For example, when George W. Bush became president, I was pretty sure that I didn’t like the guy, but I never found myself wishing, “Man, I hope he fucks up the country so bad that he’ll ruin our image and be judged a failure for fifty years to come”. I didn’t want him to fail at the job in a way that would hurt everyone (but, of course, he did). At the same time, I did wish for him to fail at pushing through his brand of conservative values. The bad kind of wanting others to fail is when it dominates to such a degree that you’d be willing to make everyone lose in order to see them fail, or when you want them to fail because of who they are rather than what they are trying to do, when the two can be separated. (In terms of Silicon Valley personalities, they usually can’t be. If someone who beats his girlfriend is made a founder, it’s bad for the culture if he’s allowed to retain his executive position.)

For example, I want Snapchat to fail. I couldn’t care less about the product, but I hate what it says about us as a culture when an uncouth, sexist frat boy can be made into (yes, “made”, because his chickenhawking investors called the shots and are responsible for all of that) a billionaire while so many people struggle. I want merit to genuinely win, which means that Spiegel shall lose. Do I care if he’s reduced to poverty, as opposed to simply being taken out of view? I don’t. I don’t want him to have a miserable life. I just don’t want to live in a world where he’s visibly successful, because it’s unacceptably bad for the world’s values.

I’m not sure if “Silicon Valley”, the place and the culture, can survive “the Unicaust”. We might have twenty dark years after it. We might see another country, currently in obscurity, eclipse us at technology. I don’t know, so I can’t say. However, technology will come back (if it ever leaves, and it may not, since unicorns have zilch to do with true technology) as a force. I have my preferences, which involve its re-emergence as far away from Sand Hill Road as possible. That is, again, not because I have any personal hatred toward the venture capitalists who work there. I don’t even know them! But I hate the values that the current technology industry has embraced, and I look forward to seeing all of those beliefs refuted, brutally and with prejudice.

It’s necessary, before we can move forward, to wash out the founders (and, more importantly, the companies that they create) who believe that “young people are just smarter”, or that open-plan offices are “collaborative” instead of stifling, or that Agile Scrotums can compensate for an inability to hire top talent because of an awful culture. These people have to go into obscurity; they’re taking up too large a share of the attention and resources.

The technology industry is, of course, full of schadenfreude. One has to be careful about not falling into that mentality. We have a stupidly competitive culture, and we have an ageist culture which leads to people living with a perception of competition against everyone else. Among programmers especially, there is hard-core crab mentality, and it’s a big part of why we haven’t overcome Silicon Valley’s wage fixing, age discrimination, open-plan offices, and our lack of professional organization. If we beat each other down on Java versus Ruby, or on age (which is the stupidest source of division, like, ever) then we’re just making it easy for the slimy Damaso businessmen who’ve invaded our turf (and who run things) to divide us against each other.

However, it’s not schadenfreude to wish failure on that which is harmful. I don’t care if Evan Spiegel’s net worth, at the end of all of this, is $30 million or 17 cents or $5 million in the red. It doesn’t matter to me. He can retire with his millions and drink himself into a blissful stupor, and that’s fine with me; I don’t care. I do care about the simple fact that someone like him should never be held up as (or, as occurred on Sand Hill Road, produced into being) one of the most successful people of my generation. That’s the wrong thing for technology, for the country, and for the world. It’s not just decadent; it’s disgraceful.

We’ve been ignoring basic values and decency for too long. We’ve been allowing VCs to build companies with no ethics behind them, because they’re built to be sold or dead within five years. Here’s the thing: most of us who’ve spent time in and around the VC-funded technology industry know that it’s crooked to the core. We know that it’s in desperate need of reform and that if a few thousand executives’ jobs get vaporized in the process, that’s just fine. It’s hard to convince the rest of the world of the truth right now, though; the counter-refrain is, “It’s hard to argue with success.” I agree. It is very hard. This is why I’ll be elated when the bad guys’ success proves illusory and, at least, a large number of them collapse.

Mood disorders, cheating at Monopoly, a fundamental truth, and more on Agile Scrum

One of the common rules of ethics in Monopoly is not to hide money. While players don’t have to make great efforts to keep their cash holdings transparent, it’s not legal to stick a pair of $500 bills under the board and pretend to be nearly bankrupt in order to negotiate in bad faith, underpay when landing on the “Income Tax” square, or the like.

In the financial world, there are similar policies against “hiding money under the board”, or understating one’s performance in order to appear stronger in future years. If your trades make 50 percent in 2013, you might be inclined to report 15 percent so that, if you have a -10 percent year in 2014, you can report another positive year. Ultimately, there’s some level of performance that betrays high volatility and, if one exceeded that level, it would be socially advantageous to smooth out the highs and lows. In essence, one who does this is misleading investors about the risk level of one’s own chosen strategy.
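To make the arithmetic concrete, here’s a minimal sketch of that kind of smoothing (in Python; the 15 percent reporting cap and the function name smooth_returns are my illustrative assumptions, not anything from actual financial practice):

    # A minimal sketch of "hiding money under the board": deferring gains
    # into a hidden reserve so that reported returns stay smooth.
    def smooth_returns(actual_returns, cap=0.15):
        """Report at most `cap` per year; bank the excess and draw on it
        in bad years. Returns the reported series."""
        reserve = 0.0
        reported = []
        for r in actual_returns:
            available = r + reserve          # true gain plus banked surplus
            declared = min(available, cap)   # never report more than the cap
            reserve = available - declared   # whatever is left stays hidden
            reported.append(declared)
        return reported

    print(smooth_returns([0.50, -0.10]))  # [0.15, 0.15]: two "positive" years

The point of the sketch is the one above: a 50 percent year followed by a -10 percent year can be presented as two steady 15 percent years, which misrepresents the strategy’s true volatility.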

While this behavior isn’t ethical when applied to financial instruments or board games, it’s something that many people have to do on a daily basis at the workplace. When you have a good week, do some extra useful work and hide it somewhere. When you have a bad week, unload some of your stored work to compensate for the period of slowness. Of course, this discussion naturally leads into the “what’s it really like to be bipolar in tech” question, which I’ve already answered (it sucks, people are shitty) and don’t care to trudge over that frozen muck path yet again. This “hiding money under the board” skill is something you learn quickly with a mood disorder, but I think that it’s worthwhile for everyone, because it’s impossible to predict when some external problem will impede one’s performance.

Broadly speaking, people can be divided into low- and high-variance categories. Society needs both in order to function. While the low-variance people aren’t as creative, we need them for the jobs that require extreme discipline and reliability, even after days without sleep or severe emotional trauma or a corporate catastrophe. High-variance people, we need to hit the creative high notes and solve problems that most people think are intractable. Now, about 98 percent of jobs can be done well enough by either sort of person, insofar as few corporate jobs actually require the high-variance person’s level of creativity or the low-variance person’s reliability. Given this, it seems odd that whether one is low- or high-variance would have a powerful impact on one’s career (spoiler: it’s better to be low-variance). It does, because corporations create artificial scarcities as a way of testing and measuring people, and I’ll get to that.

There are also, I would say, two subcategories of the high-variance set, although the distinction here is blurrier, insofar as both patterns are seen in most people, so the distinction pertains to proportion. There’s correlated high variance and uncorrelated high variance. People with correlated high variance tend to react in similar ways to “normal”, low-variance people, but with more severity. Uncorrelated high variance tends to appear “random”. It’s probably correlated to something (if nothing else, the person’s neurochemistry) but it doesn’t have patterns that most people would discern. Oddly enough, while uncorrelated variance is more commonly associated with “mental illness”– if someone laughs at a funeral, you’re going to think that person’s “a bit off”– correlated variance can be much more detrimental, socially and industrially speaking. A person with correlated high variance is likely to nose-dive when conditions are ostensibly bad, and that’s when managerial types are on high alert for “attitude problems” and emerging morale crises, and pushing for much higher performance (to detect “the weak”) than they’d demand in good conditions. “Hiding money under the board” is hiding variance, and uncorrelated variance is a lot easier to conceal, because no one expects it to be there.
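Here’s a toy simulation of the distinction (Python; the means, the amplitudes, and the idea of scoring the ten worst days are all my illustrative assumptions, not a real model of anyone’s psychology):

    # Two "people" with identical average output and identical variance;
    # they differ only in whether their swings track shared conditions.
    import random

    random.seed(0)
    days = 250
    environment = [random.gauss(0, 1) for _ in range(days)]  # shared conditions

    # Correlated high variance: swings amplify the environment.
    correlated = [5 + 3 * e for e in environment]

    # Uncorrelated high variance: swings track private state (mood, sleep).
    uncorrelated = [5 + 3 * random.gauss(0, 1) for _ in range(days)]

    # On the ten worst environmental days– exactly when management is
    # watching hardest– the correlated person craters; the uncorrelated
    # person looks, on average, ordinary.
    worst = sorted(range(days), key=lambda i: environment[i])[:10]
    print("correlated, bad days:  ", sum(correlated[i] for i in worst) / 10)
    print("uncorrelated, bad days:", sum(uncorrelated[i] for i in worst) / 10)

The correlated person’s average on those days comes out far below baseline, while the uncorrelated person’s stays near it: the nose-dive happens precisely when everyone is looking.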

Most people would agree that between low- and high-variance there is a spectrum, but they wouldn’t necessarily connect it to anxiety and mood disorders like depression, panic, or bipolar disorder. I disagree. I think that depression and bipolarity are symptoms of many different root causes that we just haven’t figured out how to separate. “Depression” is probably ten different diseases grouped by a common symptom, which is part of what makes it hard to treat. Some depressions respond very well to medication and others don’t. Some go away with improved exercise and sleep habits and meditation, while others don’t. At any rate, I think that the extreme of high variance is going to manifest itself as a mood disorder. This also suggests that mentally healthy people at, say, the 90th percentile of variance might be arguably “subclinically bipolar”, even though they wouldn’t exhibit pathological symptoms. In fact, I don’t think that that’s as far off the mark as it sounds.

People have asked me what the hardest parts of cyclothymia (a rapid-cycling, but usually mild, variety of bipolar disorder) are, and it’s actually not the classic symptoms. Depression sucks, but I haven’t had a depressive episode of longer than a week since 2013 (after a death in the family) and I haven’t had a manic episode since 2008, and I’ll probably never have one again. Number two is the panic attacks, which tend to occur because, as one gets older, pure hypomania tends to be rarer and it’s more common to have “mixed” periods with characteristics of hypomania and depression intermingled. (And what do you get when you combine hypomania and depression? Often, anxiety.) That’s pretty much where I am now. I don’t go from manic to depressive and back; I get 47-49 weeks per year of normal mood (with some anxiety, and an occasional stray panic attack) and about 20-35 days of (mostly mild) cycling, during which I can work and get about just fine, but have (mostly short-lived) “depression attacks” and insomnia and weird dreams and the familiar pressure-behind-the-eyes headache of hypomania.

I said that panic is the #2 hardest, worst thing about it. What’s #1? I call it 20 percent time, as a shout-out to Google. Silicon Valley is full of self-diagnosed “Aspies” who think that they have an autism-spectrum condition, and I think that most of them are off the mark. Yes, there’s a discernible pattern in creative individuals: extreme social ineptitude from ages 5 to 20, awkwardness (improved social ability, but a deficit of experience) from 20 to 30, and relative social normalcy (with, perhaps, stray moments of bitterness) after 30. This is a ridiculously common narrative, but I really don’t buy that it has anything to do with the autism spectrum. People with autism are (through no fault of their own) socially deficient for their entire lives. They might learn to cope and adapt, but they don’t develop normal social abilities in their mid-20s, as people along the high-variance “main sequence” seem to do. In fact, I think that most of the people with this narrative are on a spectrum, but I don’t think it has anything to do with autism. My guess is that they have a subclinical 90th-percentile variety of what is, at the 98th- and 99th-percentile, bipolar disorder. One thing to keep in mind about mental illness is that its stigma is amplified by the visibility of the extreme cases. Below the water line on that metaphorical iceberg, there are a large number of people who aren’t especially dysfunctional and, I would argue, many undiagnosed and subclinical “sufferers” who experience little more than mild social impairment.

Mood disorders are notoriously hard to diagnose, much less treat, in childhood and adolescence, and people commonly offer quips like “teenagers are just manic-depressive in general”. That’s not really true at all. Teenagers aren’t “naturally bipolar”. What is true is that children and adolescents have exaggerated moods that exist to reward and punish behaviors according to their social results. This is a specific subtype of “high variance” that is correlated (unlike the more uncorrelated high variance that is seen in mood disorders). That’s how social skills are learned. You kiss a girl, and you feel great; you’re picked last for a team, and you feel terrible. In this period of life, I’d guess that the garden-variety, not-really-bipolar, high-variance (85th to 95th percentile) people don’t seem especially mood-disordered relative to their peers. But, at the same time, they’re spending 10 to 20 percent (and maybe more) of their time in a state of consciousness where their moods are somewhat uncorrelated to social results and, therefore, acquisition of social skills is halted. That state of consciousness is good for other things, like creative growth, but it’s not one where you’ll learn others’ social cues and messages, and how to move among people, at an optimal rate. This explanation, I think, is better than subclinical autism in getting to the root of why creative people are invariably socially deficient before age 25, and why a good number (more than half, I would guess) recover in adulthood. If you’re 40, the effect of “20% time” is that you’re “socially 32”, and no one can tell the difference. If you’re 20, and your “social age” is 16, that’s a fucking disaster (or, statistically more likely, a not-fucking disaster). The age of 25 is, approximately, the point at which being 10 to 20 percent younger in social ability is no longer a handicap.

What does this have to do with Silicon Valley and the workplace? I want to be really careful here, because while I think that high-variance people (and I, obviously, am one) experience across-the-board disadvantages, I don’t want to create tribal bullshit around it. High-variance people aren’t necessarily “better” than low-variance people. There are high-variance people with awful moral character and low-variance people with great moral character. It’s important to keep this in mind, even while “the Big Nasty” among the working world’s conflicts (and, probably, organizational conflicts in general) is usually going to come down to that between the high-variance people of strong moral character and the low-variance people of bad moral character. (Lawful good and chaotic evil are organizationally inert.) In the first set, you have the “chaotic good” archetype; in the movies, you always root for these people to win because, in real life, they almost never do. They’re usually not diagnosably bipolar, but they’re certainly not well-adjusted either. They’re courageous, moralistic, and intense. In the second set, of low-variance people with bad moral character, you have psychopaths. Psychopaths aren’t even affected by the normal sources of mood variance, like empathy, love, and moral conflict.

Psychopaths are the cancer cells of the human species. They are individually fit, at the expense of the organism. Now, there are plenty who misinterpret such claims and expect them to predict that all psychopaths would be successful, which we know not to be true. In fact, I’d guess that the average psychopath has an unhappy life. They’re not all billionaires, clearly, because there are only a handful of billionaires and there are a lot of psychopaths out there. Analogously, not all cancer cells are successful. Most die. (The “smell of cancer”, infamous to surgeons, is necrosis.) Cancer cells kill each other just as they kill healthy tissue. Cancer doesn’t require that all cancer cells thrive, but only that enough cells can adapt themselves to the organism (or the organism to themselves) that they can enhance their resource consumption, reach, and proliferation– causing disease to the whole. Worse yet, just as cancer can evade and repurpose the body’s immune system toward its own expansion, psychopaths are depressingly effective at using society’s immune systems (ethics, reputation, rules) toward their own ends.

What can the rest of humanity do to prevent the triumph of psychopaths and psychopathy? Honestly, I’m not sure. This is an arms race that has been going on for hundreds of thousands of years, and the other side has been winning for most of that time. Ex Machina comes to mind (and brings negative conclusions): a movie (which I’ll spoil, so skip to the end of this paragraph if you don’t want that) that contends with some of the darker possibilities behind “Strong AI”. The three main characters are Caleb, a “straight man” programmer of about 25; Nathan, a tech billionaire with sociopathic tendencies; and an AI who goes “way beyond” the Turing Test and manages to convince Caleb that she has human emotions, even though he knows that she is a robot. She’s just that good of a game player. She even manages to outplay Nathan, the carbon-based douchebag who, remaining 5 percent normal human, can be exploited. Psychopaths, similarly, have a preternatural social fitness and are merciless at exploiting others’ missteps and weaknesses. How do we fight that? Can we fight it?

I don’t think that we can beat psychopaths in direct combat. Social combat is deeply ingrained in “the human organism”, and whatever causes psychopathy has had hundreds of thousands of years to evolve in that theater. As humans, we rank each other, we degrade our adversaries, and we form an ethical “immune system” of rules and policies and punishments that is almost always repurposed, over time, as an organ for the most unethical. Whatever we’ve been doing for thousands of years hasn’t really worked out that well for our best individuals. I think that our best bet, instead, is to make ourselves aware of what’s really going on. We can succeed if we live in truth, but this requires recognizing the lie. Is it possible to detect and defeat an individual psychopath? Sometimes, yes; sometimes, no. Beating all of them is impossible. If we understand psychopathy in terms of how it works and tends to play out, this might give us more of an ability to defend ourselves and the organizations that we create. We’ll never be able to detect every individual liar, but we can learn how to spot and discard, from our knowledge base, the lies themselves.

This takes us back to the variance spectrum. Every organization needs to rank people, and the way most organizations do it is what I call “the default pattern”: throw meaningless adversity and challenges at people, and see who fails out last. Organizational dysfunction can set in rapidly, and even small groups of people can become “political” (that is, trust-sparse and corrupt) in a matter of minutes. It doesn’t take long before the dysfunction and stupidity and needless complexity and recurring commitments exceed what some people can handle. There is, of course, a lot of luck that dictates who gets hit hardest by specific dysfunctions. On the whole, though, human organizations permit their own dysfunction because it allows them to use a blunt (and probably inaccurate) but highly decisive mechanism for ranking people and selecting leaders: whoever falls down last.

Let’s imagine a world where truck drivers make $400,000 per year. With a large number of contenders for those jobs, the entrenched and well-compensated drivers decide (in the interest of protecting their position) that only a certain type of person can drive a truck, so they create a hazing period in which apprentice drivers must tackle 72-hour shifts for the first three years. You’d have a lot of drug abuse and horrible accidents, but you would get a ranking. If you had them driving safe, 8-hour shifts, you might not, because most people can handle that workload. In the scenario above, you filter out “the weak” who are unable to safely drive extreme shifts, but you’re also selecting for the wrong thing. The focus on the decline curve (at the expense of public safety) ignores what should actually matter: can this driver operate safely under normal, sane conditions?

It isn’t intentional, but most organizations reach a state of degradation at which there are measurable performance differences simply because the dysfunction affects people to differing degrees. In an idyllic “Eden” state, performance could theoretically be measured (and leaders selected) based on meritocratic criteria like creative output and ethical reliability (which, unlike the superficial reliability that is measured by subjecting people to artificial stress, scarcity, and dysfunction, actually matters). However, none of those traits show themselves so visibly and so quickly as the differences between human decline curves at the extremes. This raises the question: should we measure people based on their decline curves? For the vast majority of jobs, I’d say “no”. The military has a need for ultra-low-variance people and has spent decades learning how to safely test for that (and, something the corporate world hasn’t managed, how to keep a good number of the psychopaths out). But you don’t need an Army Ranger or a Navy SEAL to run a company. It probably won’t hurt, but most companies can be run by high-variance people and will do just fine, just as most creative fields can be practiced by low-variance people.

The advantage of psychopaths isn’t just that they tend to be low-variance individuals. If that were all of it, then organizations could fill their ranks with low-variance non-psychopaths and we’d be fine. It’d be annoying to be a high-variance person (and know that one would probably never be the CEO) but it wouldn’t make our most powerful organizations into the ethical clusterfucks that virtually all of them are. The psychopaths’ greater advantage is that they aren’t affected by dysfunction at all. When an organization fails and unethical behavior becomes the norm, the high-variance people tend to fall off completely, and the decent low-variance people decline to a lesser degree– they’re disgusted as well; it just has less of an effect on their performance– but the psychopaths don’t drop at all. In fact, they’re energized, now that the world has come into their natural environment. If they’re smart enough to know how to do it (and most psychopaths aren’t, but those who are will dominate the corporate world) they’ll go a step further and drive the environment toward dysfunction (without being detected) so that they have an arena in which they naturally win. Social climbing and back-stabbing and corporate corruption deplete most people, but those things energize the psychopath.

We come around, from this, to concrete manifestations of damaged environments. In particular, and a point of painful experience for software programmers, we have the violent transparency of the metrics-obsessed, “Agile Scrotum”, open-plan environment. This is the environment that requires programmers (especially the high-variance programmers who were attracted to the field in search of a creative outlet) to break the rules of Monopoly and financial reporting, and hide “money” under the board. Agile Scrotum and the mandates that come out of it (“don’t work on it if it’s not in the backlog; if you must do it, create a ticket and put it in the icebox”) demand that people allow visibility into their day-to-day fluctuations to a degree that is unnecessary, counter-productive, and downright discriminatory.

Agile Scrotum also hurts the company. It makes creative output (which rarely respects the boundaries of “sprints” or “iterations” or “bukkakes” or whatever they are calling it these days) impossible. I’ve written about the deleterious effects of this nonsense on organizations, but now I’m going to talk about its effects on people. When the new boss comes in and declares that all the workers must whip out their Agile Scrotums, for all the world to see and pluck at, the best strategy for a high-variance person (whose day-to-day fluctuations may be alarming to a manager, but whose average performance is often several multiples of what is minimally acceptable) is, in my mind, to hide volatility and put as much “money” under the board as one can. Achieve something in July, when your mood and external conditions are favorable, and submit it in August, when you hit a rough patch and need some tangible work to justify your time. Yes, it’s deceptive and will hinder you from having a healthy relationship with your boss, but if your boss is shoving his Agile Scrotum down your throat (wash out the taste with a nice helping of user stories and planning poker) you probably didn’t have a good relationship with him in the first place, so your best bet is to keep the job for as long as you can while you look for a Scrum-free job elsewhere.

I promised, in the title, a “fundamental truth” that would be useful to the “neurotypical” people with no trace of mood disorder, and to the low- and high-variance people alike, and now I’ve arrived at it. This steps aside from the transient issues of open-plan offices and the low status of engineers that they signify. It’s independent of the current state of affairs and the myriad dysfunctions of “Agile” management. It’s probably useful for everyone, and it’s this: never let people know how hard you work, and especially don’t let them know how you invest your time.

People tend to fall into professional adversity, as I’ve observed, not because their performance is low or high, but because of a sudden change in performance level. Oddly enough, going either way can be harmful. Sudden improvements in performance suggest ulterior motives, transfer risk, or a negative attitude that was held just recently, in the same way that a sudden improvement of health might upset a jealous partner. If you “ramp it up”, you’re likely to expose that you were underperforming in the past. Likewise, the people most likely to get fired are not long-term low performers, because organizations are remarkably effective at adapting to those, but high performers who drop to average or just-not-as-high performance. Most managers can’t tell who their high and low performers actually are, because their people are all working on different projects, but they can detect changes, especially in attitude and confidence, so you’re in a lot more danger as an 8 who drops to a 6 than as a 3 who’s always been a 3. This is, of course, one of the reasons why it’s so awful to be a high-variance person in a micromanagement-ridden field like software.

As a strategic note, however, I think that it’s valuable for low-variance people to understand this, too. You don’t want to be seen as a slacker, but you don’t want people to see you as “a reliable hard worker” either. People with the “hard worker” reputation often get grunt work dropped on them, and can’t advance. What you really want, if you’re gunning for promotions, is for people to look at you and see what they value, which will not always be what you value. Some people value sacrifice, and you want them to see you as dedicated and diligent (and so hard-working and busy that you can’t be bothered to take on the kinds of sacrificial duties that they might otherwise want to foist upon you, in order to even out the pain load). Other people (and I put myself in this category) value efficiency, and you want them to see you as smart, creative, and reliable on account of sustainable practices and personal balance.

Achieving the desired image, in which people see their own values reflected in you, isn’t an easy task. I could write books on that topic, and I might not even solve anything, because there’s a ton of stuff about it that I don’t know. (If I did, I might be richer and more successful and not writing this post.) I do know that control of one’s narrative is an important professional skill. How hard one actually works, and what one’s real priorities are, is information that one needs to keep close, whether one’s working 2 hours per day or 19 hours per day. (In my life, I’ve had many spells of both.) People who can control their own narratives and how they are perceived generally win, and people who can’t control their stories will generally have them defined by others– and that’s never good. One might sacrifice a bit of reputation in order to protect this sort of personal information, and I can already feel a bristling of that middle-class impulse to desire “a good reputation” (not that it stops me from writing). Here’s what I think, personally, on that. “Good” and “bad” reputations are transient and the distinction is sometimes not that meaningful, because what seems to matter in the long run (for happiness and confidence, if not always agreeability and of-the-moment mood) is control of one’s reputation. Even personally, that’s a big part of why I write. I’d rather have a “bad” reputation that I legitimately earned, because I wrote something controversial, than a “good” reputation that was defined for me by others.

What irks me about Silicon Valley’s culture and its emphasis on micromanagement is not only the meaningless (meaningless because what is made transparent has nothing to do with questions of who is actually doing the job well) violent transparency of open-plan offices and Agile Scrotums. That stuff sucks, but it bothers me a lot more when people seem not to mind it. It’s like being in a strategy game where the lousy players add so much noise that there’s no purpose in playing– but not being allowed to leave. For example, I’ve always argued that when a manager asks for an estimate on how long something should take, one should ask why the estimate is being requested. It’s not about a desire to put off planning or to hide bad work habits. It’s about parity. It’s about representing yourself as a social equal who ought to be respected. Managers may not understand Haskell or support vector machines, but they know how to work people, and if you give them that information for free– that is, you furnish the estimate without getting information about why the estimate is important, how it will be used, what is going on in the company, and how performance is actually evaluated– then they’re more likely to see you as a loser. It’s just how people work.

Likewise, if someone has day-by-day visibility into what you are working on, and if that person knows on a frequent basis how hard you are working (or even thinks that he knows), then you are being defeated by that person, even if your work ethic is admirable. Being visible from behind, literally and metaphorically, while working signals low status, both of the work and of the person doing it. All of this is not to say that you shouldn’t sometimes share, on your terms, that you worked a 90-hour week to get something done. On the contrary, proof of investment is far more powerful than talk alone. At the same time, it should be you who decides how your narrative is presented, and not others. The passive transparency of an Agile shop– the willingness of these programmers to have their status lowered by a process in which they give far too much up and get nothing in return– makes that impossible. When you buy into Agile Scrotum fully, you’re implicitly agreeing that contributions that aren’t formally recognized as tickets don’t matter, and you’re allowing your work output to be negatively misrepresented, possibly without your knowledge, by anyone with access to the tracking software. Isn’t that just putting a “Kick Me” sign on your own back?

I am, for what it’s worth, well-traveled. I’ve seen a lot of human psychology, and I’ve learned what I would consider a sizable amount. One recurring theme is that humans expect a certain logical monotonicity that doesn’t hold water. Logically, if A implies B and (A and C) is true, then B is true. In other words, having more information (C) doesn’t invalidate concluded truths. In the human world of beliefs and near-truths and imperfect information, it’s not that way at all. Of course, there are valid mathematical reasons for this. For example, it could be that B has a 99.99999% chance of being true when A is true, so that A “effectively implies” B, while C has an even stronger negative impact and implies not-B. Then A almost implies B, but A-and-C definitely implies not-B. More commonly, there are computational reasons for non-monotonicity. “Flooding” a logical system with irrelevant facts can prevent valid inferences from ever being made, because computation time is wasted on fruitless branches, and flooding an imperfect logical system (like a human knowledge base) can even populate it with non-truths (in technical terms, this is called “full of shit”).

The violent-transparency culture of open-plan offices and Agile is based on phony monotonicity. First, it assumes that more information about workers is always desirable. In human decision-making, more information isn’t always good. Shitty, meaningless information that creates biases will lead to bad decisions, and get the wrong people promoted, rewarded, punished and fired. Second, it ties into a monotonicity that specifically afflicts high-variance people (and advantages psychopaths), which is the perception that small offenses to social rules betray large deficiencies. That’s also not true. There’s really no connection between the meaningless unreliability of an office worker who, after a long night, shows up at 9:05 the next day, and the toxic ethical unreliability that we actually need to avoid.
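To see the probabilistic version in miniature, here’s a toy sketch (Python; the worlds and their probabilities are entirely my own made-up numbers, chosen only to exhibit the reversal):

    # Non-monotonic inference: A makes B nearly certain, yet A together
    # with C makes B impossible. Worlds are (A, B, C) triples with
    # assumed probabilities summing to 1.
    worlds = {
        (True,  True,  False): 0.6000,  # A and B hold, no C: the common case
        (True,  False, True):  0.0030,  # C present: A holds but B fails
        (True,  False, False): 0.0003,  # A without B or C: vanishingly rare
        (False, False, False): 0.3967,  # none of the three
    }

    def prob(pred):
        return sum(p for w, p in worlds.items() if pred(w))

    p_b_given_a = prob(lambda w: w[0] and w[1]) / prob(lambda w: w[0])
    p_b_given_ac = (prob(lambda w: w[0] and w[1] and w[2])
                    / prob(lambda w: w[0] and w[2]))

    print(f"P(B | A)       = {p_b_given_a:.4f}")   # ~0.9945
    print(f"P(B | A and C) = {p_b_given_ac:.4f}")  # 0.0000

Learning C doesn’t contradict A; it just breaks the inference from A to B, which is exactly the kind of update that naive “more data is always better” thinking fails to anticipate.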

I have, between the minor unpleasantness of a mood disorder and the major unpleasantness of the software industry, seen a lot of crap. I use the word crap specifically for its connotation of low quality, because let’s be honest about the problem of specifically low-quality information and what it does to humans in large amounts. Agile Scrotum generates a lot of information about who’s working on what and at what speed, and that information will re-order the team’s pecking order; and, guess what, it’s all crap. It takes the neurological variance that has to accompany creative high performance, because Nature or God couldn’t figure out any other way, and turns it into meaningless signals. The same holds for open-plan offices, which bombard engineers and management alike with meaningless information (unless the detail of when each person goes to the bathroom is somehow important) that is then held to reflect on individual performance and value within the group. Once again, it fills the office with piles of low-quality information and, soon enough, the only thing that anyone can smell is crap.

This is one thing that mood disorders make a person better at than most people: detecting crap. When a malfunction of my own wetware tells me that everything I’m doing is pointless and that I should just crawl in a hole and die, I can listen to it, or I can say, “That’s crap” and move on. When an adverse situation throws me for a loop and I get anxiety, I can recognize it for what it is and get past it. (I may have a panic attack, but I don’t get the emotional drama that seems to afflict the neurotypical, because I recognize crap that comes from my own mind. If someone cuts me off in traffic and I still have anxiety or anger five minutes later, that’s on me.) I’ve survived by recognizing flaws in my own cognition with a precision that 95 percent of people never have to develop. This also makes me preternaturally keen at spotting (less intense, but longer-lived) flawed cognition in others. In other words, it makes me great at cutting through crap.

So now I’m going to speak of (and possibly to) the software industry and try to get through a lot of crap. People learn to program (or to write, or to paint) in order to implement ideas. Sometimes we want to implement our ideas, and sometimes we want to implement good ideas regardless of whose they are. I think that it’s useful to have a healthy balance between the two: exploring your own ideas, and spending time with others’ ideas that (one hopes) have a higher-than-baseline chance of actually working. Many of us (myself included) were drawn into the craft for the creative potential. So, when I see a shop using Agile Scrum, the only question I can ask is, what the fuck happened? This macho-subordinate ceremony isn’t going to make me better at my job. It’s not going to teach me new things about programming and computation in the way that Haskell did, it’s not going to improve my architectural or aesthetic sense, and a bunch of psych-101 bullshit designed to trick me into liking being a subordinate is certainly not going to make me a better leader. None of it has any value to me, or to anyone, because it’s all crap.

People with my condition live 10 to 15 years less than the average person. That’s statistical and, as a non-drinker and non-drug-user who hits the gym every morning and hasn’t had a major episode in years, I’m pretty sure that I’ll beat the odds. I’m treated, and as “sane” as anyone else, but I also have the (extremely low probability, high impact) sword of a recurrence of 200x Crazy hanging over my head. I’m aware of my mortality. Even a single minute spent learning how to write fucking user stories is a minute not spent learning something that actually matters. Or relaxing and working on my health. Or at the gym. Or sleeping. Or writing. I factually don’t have the time to learn and practice that garbage but, given that I’m not really any more or less mortal than anyone else, I can’t justify the idea that anyone else should have to do it, either. If they’re really such poor programmers that the humiliation of Agile Scrotum “makes sense” for them, then why not convince them to work on the skills that they lack, instead? It’s crap and, rather than holding on to it, we should throw it away. Our industry is already too cluttered. Even without the clutter, there would be more things genuinely worth working on than there is time and talent to do them. With the clutter, it’s hard enough to get started on just one. We need to get serious about ourselves, our relationship to the world, and computer science and technology themselves. This industry is full of people waxing philosophical about “the Singularity” and biological immortality, but are we really ready to talk about immortality if some of us think that it’s okay to spend time on “backlog grooming” meetings?

That, like much that I have written, took balls to finish. By this, I mean metaphorical “balls” because not only do male sex organs fail to be a prerequisite for courage, but they’re not even a correlate of it, in my experience. Anyway… wanna know what it didn’t require? It didn’t take an Agile Scrotum.

Big picture first

Why are the ethics in the software industry so bad? Why do people like Evan Spiegel get made– and, make no mistake, most of them are produced by their backers, merit having nothing to do with it– into billionaires? And why are the products made by the software industry so often of low quality? Why do we, despite practicing a craft that takes 10-20 years to get any good at, tolerate a culture of age discrimination? These are often treated as separate questions because, as engineers and problem solvers, we like to take problems apart and solve the pieces separately. In mathematics, that’s a powerful approach, because a proof is, in practice, a derivation of an often non-obvious result through a chain of simple and evidently true inferences. In human problems, this approach often falls short, because the issues are so interconnected that one problem, typically, can’t be solved in isolation from all the others.

For example, I could write 100,000 words about why open-plan offices, to take one problem in our industry, are bad. They damage productivity, they’re bad for worker health, they encourage rotten cultures, they increase the incidence of sexual harassment, and they can trigger or worsen anxiety disorders, even in top performers with “nothing to hide”. Worse yet, I can illustrate why they are unlikely to go away any time soon. It’s not about these offices being cheaper (they are, but not by enough to justify a productivity loss that is orders of magnitude larger than what is saved) but about maintaining a certain image, even at the expense of health and productivity. In particular, it’s more important that the VC-funded startups look productive to their investors than that they actually be productive, given the high likelihood that any product that is built will be scrapped when the company is acquired. (The non-VC-funded companies are following suit on this horrible trend, but the cultural pace here is set by that cesspool called “Silicon Valley”.) An open-plan programmer isn’t hired for his coding ability, which is rendered irrelevant by the awful environment, but to be a stylish piece of office furniture. Is it any surprise, then, that we’d also have an ageism problem and a culture of sloppy craftsmanship? Ultimately, though, people know that open-plan offices are bad. The 100,000 words I could spill on the topic wouldn’t make a difference. We don’t need to persuade people or speak “truth to power”, because those in power already know what the truth is. We’ll probably have to fight them.

Google’s motto, “Don’t Be Evil”, comes to mind. Of course, it’s hilarious that this would be a corporate motto of a company that uses stack-ranking to disguise as “performance”-based firings what other companies would own up to as business-based layoffs, and that the origin of this slogan would be Paul Buchheit himself. Never mind that, though, because I think that it’s actually a great slogan for a company like Google, and here’s why. It’s probably the most ineffective three words in the English language. When you’re staring down evil, you can’t persuade it not to be what it is. Telling evil not to be evil is like telling a tire fire to stop polluting or a murderer that what he’s about to do is a bad thing. It won’t work. Likewise, fifty more essays on the harm to health and product inflicted by open-plan offices won’t make a difference in a world where founders are convinced that their programmers are more useful as office ornaments than for their problem-solving and coding ability.

So why is the VC-funded startup ecosystem so ugly? Is it just that there’s a lot of money in it? This I doubt, because there’s a lot more money in finance, where the ethics are (quite honestly) better. I think the answer is that, at the “big picture” level, it’s impossible to separate what we do from how we do it. I used to think otherwise. I once believed that the macroscopic social value of the work and the micro-level decency around how people do the work were orthogonal. To the credit of that view, there are many organizations with obvious positive macroscopic value but awful behavior internally. (Don’t get me started on the non-profit sector and, especially, the way non-profit workers are treated by their managers.) The False Poverty Effect (of powerful people behaving unethically because their low incomes, relative to their peer group, leave them feeling martyred and entitled) is as pronounced in non-profits as in software startups. All of this said, I’m increasingly concluding that positive macroscopics are a necessary (if not sufficient) condition for a culture of decency. In other words, we can’t fully understand and oppose organizational rot without regard to what that organization actually does. So let’s talk about macroscopics. What exactly do we, in the software industry, do? Most of us, to be blunt, haven’t a clue what our effect on society as a whole really is.

What can we do? Quite honestly, the answer is “anything”, because it’s not software that distinguishes us but abstract problem-solving ability. (It’s important to keep this in mind, and not get into tribal scuffles with hardware engineers, academics, research scientists, and other high-power allies that we’ll need in order to overthrow our enemies and establish a true intellectual meritocracy.) Software is one mechanism that we use often because, right now, it works very well, but software itself isn’t what’s interesting to me. Solving problems and implementing new ideas is what’s interesting. At any rate, I’ve never worked in a company where I couldn’t do the executives’ jobs, often better than the people holding those positions, but it’s rare that I’ve met a business executive who could do my job. That’s the sociological reason why software engineers “have to” be kept down with processes like Scrum, open-plan offices, and an age discrimination culture that shunts them away once they’re halfway good at what they’re doing. The smartest subsector (the “cognitive 1 percent”) of the subordinate caste, whether we’re talking about literate Greek slaves in ancient Rome or software programmers today, has always been a problem for the patricians and owners. Okay, so let’s narrow the question. What can we do with software? Even to that, the answer is a pretty large set of things. Now, what are we doing with software? Curing cancer? Making cars safer or houses more energy-efficient? Nope. Our job, almost invariably, is to enable businessmen to unemploy people. That’s what we do, and that’s why these VCs and founders pay us.

I’m not going to argue that cost cutting is inherently “evil”, because it’s not. Most of the jobs that have disappeared in the past 40 years are unpleasant ones that few people would really want. The problem isn’t that those jobs are gone, but that the people displaced haven’t been given the opportunity or the resources necessary to train up into better roles. The problem with cost cutting is that the people who have both the intelligence to do it right (and genuinely improve efficiency, rather than just externalizing costs to someone else) and the moral decency to make sure that the returns are distributed equitably, including to the people who are displaced, are vanishingly rare. For every person who has the skill to cut costs in a way that has everyone winning, there seem to be 199 who are just moving costs around and hurting the people who don’t have the power to fight back, and it’s those 199 others to whom most programmers answer.

Cost cutting, in my view, is only valuable in the context of it being a necessary prerequisite for making newer and better things. Doing the same thing more cheaply isn’t intrinsically useful, unless the resources freed up are spent in a beneficial way. Moreover, the cutting of economic costs is often a minor side benefit achieved in the course of what actually matters: cutting complexity. Complexity takes a number of forms, most undocumented: power relationships, processes and ceremonies, expectations and recurring commitments. Most of these are difficult to measure in the short term. Unfortunately, the inept cost cutters out there tend to be cost externalizers who increase the total complexity of the system they inhabit. For example, a technology startup might decide to hide a layoff by instead imposing stack-ranking and removing “low performers”. It sounds like a great idea to everyone, insofar as everyone has their own opinions of who those low performers are. However, the stack ranking and the machinery around it– the long “calibration” sessions in which managers horse-trade and play elaborate games against each other with their reports’ careers– create incentives for political behavior and make the company more complex (and in a uniformly and objectively undesirable way) while not meaningfully cutting costs. In the long term, complexity increases while resources decline, and the result is a brittle system, prone to catastrophic failure.

So what is evil, in the context of our moral responsibility as technologists? And what is good? There’s a very simple and, in my view, correct answer. Technology has a simple job: to eliminate scarcity. That’s our moral calling. It’s not just to “write good code” or make processes more efficient. It’s to solve problems and chip away, almost always incrementally, at the face-eating demons of economic scarcity and class subordination. Otherwise, all this work that we do in order to understand computers and how to solve difficult technical problems would just be pointless mental masturbation.

Of course, technology can be used toward opposite effects: to create new scarcities and enforce class subordination in ways that weren’t possible, decades ago. When I Google myself, it’s not hard to come upon the various events that have occurred in my life specifically because certain entrenched interests now see me as a threat, and have attempted to destroy my reputation. The social and technical changes brought about by the Internet are mostly good, for sure, but the bad guys know how to use these tools as well.

We need to disabuse ourselves of the notion that we can code or “merit” our way out of this. If we continue to allow our efforts to be directed and deployed by a ruling class with bad intentions, we’ll continue to suffer the consequences on the micro level as well. They’re not going to treat us better than they treat the rest of society; it just doesn’t work that way. Consequently, if we want to rid ourselves of the open-plan offices and the “Agile Scrotum” micromanagement, we can’t just focus on the micro-scale battles about programming languages and source code style. It’s not that those issues aren’t important to how we do our work; they are. That said, the one thing that becomes increasingly clear as I get older is how interconnected everything is. We can’t expect “a good culture” in an industry so willing to deploy our talents toward the creation of artificial scarcities. If we work for evil, our work will be evil and we will experience evil for every day that we do so.

Of programmers and scrubs

I’m currently working through the book Playing to Win, by David Sirlin. It’s excellent. I’ll probably buy a copy, and I’d recommend this book even to people who aren’t especially interested in competitive gaming, because it’s not really about the (tabletop and video) games themselves so much as the cultures that they create, and the lessons from those cultures are more generally applicable to human organizations.

One of the archetypes that Sirlin describes is the Scrub, which is a player who insists on rules that aren’t in the game, and views certain effective strategies as “cheap”. Scrubs aren’t always unskilled players, but they’re rarely the most effective ones, because they insist that only certain types of wins are acceptable. This prevents them from exploring all possible strategies, and it also leads them to become unduly emotional when they face opponents using strategies that they view as “cheap”.

Sirlin doesn’t advocate cheating or bad sportsmanship, but he argues that competitive games are best played to win, and not toward other goals that, at high levels, will leave one unable to compete. That is, your goal in a game of Magic: the Gathering isn’t to build up a 23/23 creature with enchantments, or to avoid taking any damage, or necessarily to launch a 30-point Fireball. It’s to bring your opponent to a losing condition (0 life or no cards to draw) while preventing her from doing the same to you. That’s it. You do not cheat, and you should not be a poor sport, but you play the rules as they are, not as they might be in some idealization of the game (that is, quite often, actually an inferior game).

I’ll confess it: I used to be a Magic scrub. When I was a teenager, I loved creatures and fast mana and beefy monsters (Force of Nature! Scaled Wurm! Shivan Dragon!), and this was in the 1990s, when creatures were so under-powered that top tournament decks used few or none. I did all sorts of flashy scrub stuff, like using Channel to bring out a second-turn Scaled Wurm, which is terrible because you’re giving up 7 life for something that can be killed with two black mana. I did have a blue deck (blue, with its manipulative focus and its status as the most “meta” of the colors, was the most powerful color at the time, and probably still is) but I rarely played it. I was all about the green and red: 10-point fireballs, 8/8 tramplers, and so on. I viewed reactive strategies, such as counterspell decks, land destruction decks, and hand destruction decks, as “cheap” and borderline unethical. (Stop fucking with my cool shit! Build up your own cool shit!) I lost frequently to better deck-constructors using “boring” strategies, and it always made me angry.

I was in the wrong, of course. Land destruction (i.e. resource denial, by which I mean rendering your opponent unable to cast spells by demolishing the most important resources) is a perfectly acceptable way to win the game. Think it’s broken? Then find a way to defend against it, or build your own deck and exploit it. There are some things not to like about Magic (well, only one big one: the tie between in-game capability and real-life finances) but the existence of reactive decks isn’t one of them.

What makes a scrub? In my view, scrubs often want to play multiplayer solitaire. They want to build beautiful edifices, and not have interference from the opponent, except perhaps at the grand end of the game. They want a game of skill (as they define it) rather than a messier game of strategic interactions. They aren’t much for the competitive aspect of the game; they want to build a 13-foot tower and have their opponents come up to 12 feet, not face an opponent who (legally according to the game, and therefore ethically) reduces theirs to zero and builds a 1-foot tower. Strategies they find unacceptable infuriate them, and they’ll often complain about being beaten by “weaker” players. If they get left out of Monopoly negotiations and are too late to get their share of the game’s 12 hotels, they get very angry and accuse the other players of having “ganged up on” them, when that’s almost never what actually happened. What usually happened is that the scrub failed to play the negotiatory game that is far more important than the roll-and-move dynamic that sets that negotiation’s initial conditions.

The scrub, at his core, takes a simplified version of the game, embedded within the actual game that is described by the rules, and declares it better. To be clear, sometimes (but rarely) the scrub is right. Some games have such bad designs that they fail on their own terms. However, many more games (e.g. Monopoly) succeed on their own terms but might not deliver the kind or quality of play that is desired. That doesn’t mean that they’re awful games, but only that they fall short of a specific aesthetic. It’d be hard to say that Monopoly is a terrible game, given its success and longevity, but it doesn’t satisfy the “German style” aesthetic to which most tabletop gamers (including myself, to a large degree) subscribe. All of this said, in most cases, the “scrubbed” version of the game, in which otherwise legal moves are banned, is less interesting than the original one.

So what do scrubs have to do with programming and the workplace? Well, the first question is: what is the game? The answer is perhaps not a satisfying one: it depends. The objective might be:

  • to write a program that solves a problem using as little CPU time as possible.
  • to write a program that solves a problem, and with maintainable source code.
  • to produce a solution quickly in order to fend off political adversity.
  • to choose a problem that is worth solving, and solvable, then solve it.

It’s rarely spelled out what “winning” is, but there are truths that make programmers unhappy. Foremost is the fact that good code is only a small factor in determining which program or programmer wins. Most of us live in the real world and have to play the coarse, uncouth, political game in which the objectives don’t have much to do with code quality. The ultimate goal is usually (for an individual or a firm) to make money, making the proximate goal success in the workplace. And when we get trounced by “inferior” players who write shitty code or who can’t even code at all, it makes us angry.

Now, I will argue that, from a game-design perspective, the modern workplace game is poorly designed. One notable and common flaw in a game is a king-maker scenario, in which players incapable of winning can, nonetheless, choose who the winner is. (This is why many house varieties of Monopoly forbid the sale of properties below on-board price: it prevents losing players from selling everything at $1 to the person who angered them least.) Whether this is a flaw or an additional social element of play is a matter of opinion, but it’s generally considered a sign of poor design. The workplace game is one where the inferior players have a major, game-killing effect, and those who dominate it are those who are most skilled at manipulating and weaponizing the inferior players. This is an ugly process, and when thousands to millions of dollars and health insurance are at stake, people have a justified dislike of it. The game design is shitty and the game is not fun for most people, and the people who find it fun tend to be the worst sorts of people… but these are technical notes. The sin of the scrub is playing by the rules as one thinks they should be, rather than recognizing them as they are. In this way, most of us are scrubs. This often blinds us to what is happening. When we get beaten by “inferior” programmers (“enterprise Java” programmers and management aspirants who spend ten times as much time on politics as on coding) we’re nonetheless getting beaten by superior game-players. We out-skill them in the part of the game that we wish mattered most– writing high-quality software– but they’re superior at turning something that we view as an ugly, disrupting landscape feature– the hordes of stupid, emotional assholes out there who utter phrases like “synergize our timeboxed priorities” and use “deliver” intransitively in between grunts and slobbering– into a tailwind for them and a headwind for us.

I catch myself in this, quite often. Let’s take programming languages. Java and C++ are ugly, uninspiring, depressing languages in which, not only does code take a long time to write, but the code is so verbose that it’s often unmaintainable. I would like to be able to say that the person who copy-pastes his way to a 20,000-line Java solution, as opposed to my elegant 400-line Haskell program, is “cheating”. For sure, I can make a case that my solution is better: easier to maintain, more likely to scale, less prone to undesirable entanglements and undocumented dependencies. On the other hand, to say that the “enterprise Java guy” is unethical (and, yes, I’ve heard language warriors make those kinds of claims) is to let my Inner Scrub talk. It’s possible that my beautiful Haskell program is an 8/8, trampling Force of Nature and the Java program is a Stone Rain (land destruction). Well, sometimes land destruction wins. Does it “deserve” to, in spite of my insistence that it’s an “ugly” way to win? My opinion on that doesn’t matter. The rules of the game are what they are and, as long as people don’t cheat, what right do I have to be angry when others use different strategies?
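The conciseness gap is real, for what it’s worth, even if it doesn’t decide who wins. As an illustrative sketch (a toy task, not any particular production system): here is a complete Haskell program that reads text on stdin and prints the ten most frequent words. The idiomatic “enterprise” equivalent, with its factories and interfaces and XML wiring, would plausibly run to hundreds of lines.

```haskell
-- Count word frequencies from stdin and print the top ten.
-- A few composed library functions do all the work.
import Data.List (group, sort, sortBy)
import Data.Ord (comparing, Down (..))

wordFreq :: String -> [(String, Int)]
wordFreq = sortBy (comparing (Down . snd))    -- most frequent first
         . map (\ws -> (head ws, length ws))  -- (word, count)
         . group . sort . words               -- cluster equal words

main :: IO ()
main = interact (unlines . map show . take 10 . wordFreq)
```

None of which, again, makes the verbose version against the rules.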

Of course, some might find my depiction of office politics as “a game” to be sociopathic. The stakes are much higher, and people get killed by software failures, so it seems evident that the anti-scrub “if the rules don’t say it’s illegal, you can do it” attitude is cavalier at best and reckless at worst. I don’t have an easy way to address this, and here’s why. Not all things in life are, or should be, competitive games. Yet the majority of our society’s most important institutions are economic game-players. Is that morally right? I don’t know. I don’t know how to make it something different without risk of making it worse. It’s also unclear what “the rules” are and how we define cheating. Is lying on one’s CV simply another strategy, with attendant risks and penalties, or should people who do it get out-of-game punishments, as card cheaters do? I think most would agree that quack doctors deserve jail time, while those who harmlessly upgrade titles to improve social status (e.g. Director to Senior Director, not Associate to CEO, the latter being genuine dishonesty) or massage dates (to cover gaps) probably don’t even deserve to be fired. It gets murky and complicated, because business ethics are so poorly defined, and especially because the most successful players tend to be those who bend the rules beyond recognition. We can argue “outside of the system” that many of these people are unethical, and thereby indict the system. We can say that a game like office politics doesn’t deserve to have its outsized impact on a person’s income and professional credibility– and I, for one, would strongly agree with that. What we can’t do is deny that office politics is a game and (to rip off The Wire) “the game is the game”.

One common flash-point of scrub rage is the “performance” review. Politically unsuccessful high performers often get negative reviews, to their chagrin. Perhaps this is because they’re used to school, which is far more meritocratic than the office. I’ve definitely gotten poor grades from teachers who liked me and good grades from those who didn’t, because academic grades are mostly based on actual performance. Young professionals often expect this to continue into adulthood, and become furious when their “performance” reviews reflect their political standing rather than what they actually did. (“You can say you don’t like me, you can even fire me, but don’t you fucking dare call me a low performer!”) They take it as a hit to their pride and an injustice rather than as what the performance review actually is: a feature of the game that was used against them. Workplace scrubs, to their credit, tend to be the best workers. They “perform” very well and expect “the meritocracy” to declare them the winners, and when the prizes are given to these other people who often “performed” poorly but were superior at manipulating others, they get angry.

Programmers, often, are the worst kinds of scrubs. Like Magic scrubs who prefer multiplayer solitaire (look at my 8/8 trampling Force of Nature!) over direct competition (and an opponent who competes right back, countering key spells and denying resources), they often just want to be left alone and kept away from “politics”. They want to play multiplayer solitaire, not deal with adversity. This is understandable, but the adversity doesn’t go away.

One example of this is in the extreme professional omerta that afflicts the tech industry. Programmers are terrified of speaking up about bad practices by employers, and not for lack of a reason. Those who have blown whistles often find it harder to get jobs. But who enforces the blacklists? Sadly, it’s often other programmers: the Uncle Toms who call the whistleblowers “not team players” or “rabble-rousers”. And why? Employers and employees aren’t always in adversity, but often they are (and it’s usually the employer’s fault, but that’s another debate). To many, this truth is deeply unpleasant. The multiplayer solitaire players hate being reminded of it. They hate being told that they’d make three times as much writing Java (because they could change jobs every year and create bidding wars) as they would writing objectively better code in Haskell (where the community is small and changing jobs every 12-18 months isn’t an option). They hate being told that the quality of projects they get has more to do with political favor than any definition of “merit”, and that they ultimately work for people who are about as fit to evaluate their work as I am to perform surgery. They want to ignore all of the unpleasantness and focus on “the work” as they’ve defined it, and play the game under the rules that they wish were the actual rules. When others blow whistles, they’re angered by the interruption, and they grab the wrenches that they’ll beat the messengers with.

Finally, we can’t cover this topic without discussing collective bargaining. Scrubs don’t enjoy losing, and they’re not always inept players, but the motto of the scrub is, “I don’t want to win that way.” Of course, if “that way” is cheating, then this is the only morally correct position; but often, “that way” includes strategies that are well within the rules. As a teenage Magic player, I didn’t want to win through land destruction or counterspells, so I deemed those to be “unacceptable” ways to win (and consequently lost against players who knew what they were doing). Collective bargaining admits that the relationship between employers and employees is often adversarial, and almost always at risk of becoming such. Since the work is viewed by those outside of our craft as a commodity, it mandates that the commoditization happen in a way that is fair to all parties. Further, it renders us more resistant to divide-and-conquer tactics. However, a great number of Silicon Valley programmers bristle at the thought of creating any organization of our own whatsoever. Their response is, in essence, “I don’t want to win that way.”

I, too, would rather win in the classical “clean” way, which is by becoming a great programmer and writing excellent code that solves important problems. I’d love to live in a world in which employers and management had the best interests of everyone at heart. Unfortunately, we don’t live in a world where any of that is true. In the world that actually exists, with the rules as they actually are, we are losing. Technology is objectively run by the wrong people. We render obscene amounts of value to a class of overpaid, under-skilled ingrates that treats us as less than human, and our aversion to “getting political” keeps us from doing anything about it. Our aversion to what we view as a less-glorious path to victory leads us to a completely inglorious loss.

If I were to write a New Year’s Resolution for the 20 million programmers out there, it would be this: let’s stop being scrubs, and let’s start actually winning. Once we’re on top, our opinions about beautiful code and technical culture will start to matter, and we can fix the world then.

Things that failed: the individual mandate for health insurance

The Affordable Care Act (“Obamacare”) quite possibly did more good than harm, and it was enacted with good intentions, but it hasn’t caused health insurance premiums to decline. Instead, they’re going up far faster than inflation. This probably surprises no one: the Satanic Trinity (healthcare, housing, and tuition expenses) has been an exponentially growing choke pear in middle-class orifices for decades. However, it wasn’t supposed to happen.

It’s time to point out a failure: the individual mandate, or the requirement that everyone purchase health insurance. I predicted that it would fail. The problem with health insurance isn’t just one thing– a litigious society, lobbying and corruption and corporate greed, poor coverage, an artificially low supply of doctors due to AMA chicanery, overspending by drug companies and (increasingly) hospitals on marketing– but an “all of the above” problem. Shit’s complicated, and a legal requirement that people buy a shitty product doesn’t, all else being equal, make the product less shitty. It makes it more shitty and far more expensive. Add a $1,000 per year penalty for being uninsured and, guess what, health insurance premiums are going to increase by $1,000, because sellers can charge up to the cost of the buyer’s next-best alternative, and the penalty just raised that cost. This is what happens in practice, even though the theory says otherwise. According to theory, individual mandates bring premiums down by removing adverse selection from the insurance pool. Without an individual mandate, the young and healthy opt out (especially at the ridiculous premiums that exist now) and the sick buy insurance. This makes premiums high. The goal of an individual mandate is to make the insurance pool national and, because healthy people with low expected health costs are now included, to reduce the overall premium.
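The adverse-selection mechanism is easy to see in a toy model. Here’s a minimal sketch, with entirely made-up numbers: each person buys insurance only if the premium doesn’t exceed their expected costs plus some cushion for risk aversion, and the insurer must charge the break-even premium for whoever remains in the pool.

```haskell
-- A toy adverse-selection "death spiral", with made-up numbers.
-- Each person buys insurance only if the premium is at most their
-- expected annual cost plus a risk-aversion cushion.
import Data.List (genericLength)

expectedCosts :: [Double]
expectedCosts = [500, 800, 1200, 2000, 5000, 9000, 15000]  -- hypothetical people

cushion :: Double
cushion = 1000  -- the value each person places on being insured

-- Who stays in the pool at a given premium?
pool :: Double -> [Double]
pool premium = filter (\c -> c + cushion >= premium) expectedCosts

-- The premium an insurer must charge to break even on a given pool.
breakEven :: [Double] -> Double
breakEven cs = sum cs / genericLength cs

-- Charge the break-even premium, watch the healthiest leave, recompute.
spiral :: Double -> [Double]
spiral premium
  | abs (next - premium) < 1 = [premium]
  | otherwise                = premium : spiral next
  where next = breakEven (pool premium)

main :: IO ()
main = mapM_ print (spiral (breakEven expectedCosts))
```

With these numbers, the premium spirals from about $4,800 to $15,000 as everyone but the sickest member flees. A mandate, in theory, pins the pool at that first premium by forbidding anyone to leave; in practice, as noted above, other forces swamp the effect.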

Despite the high premiums, health insurers offer shitty products. The “cover everything” plans are gone, and people who get seriously sick are going to be paying a large percentage of the costs out of pocket. Why does this exist? The truth about our medical system is that it has a strong similarity to American and European witch hunts in the 15th to 18th centuries. The primary motivation for witch hunting was economic: if a person (usually an elderly woman) was judged to be a witch, her property was forfeit, divided between the clergy and the successful prosecutor. Who tended to have considerable stored wealth without the strength or power to defend it? Old people, often women, who lived alone. So who were the favored targets of witch hunters? That same set of people. Witch hunting no longer exists, but that economic source (middle-class retirees who’ve amassed nest eggs in the hundreds of thousands of dollars) remains, and there are plenty who wish to get at it. The least respectable elder-poachers are the telemarketers and matchstick men who prey on the lonely; more respectable, but targeted toward the same end, is our “healthcare-industrial complex”: the hospitals, insurance bureaucrats, and lobbyists who use “medical billing” as an excuse to get at that last $250,000 from a person who (often being at death’s door) is completely unable to defend it (and before the children are aware that it exists and that their inheritance is being swiped).

The concept of an individual mandate is telling, though. It shows that many young people choose not to buy health insurance. And why? It’s not just that it’s a shitty product; it’s indicative of something else. Medical bankruptcies have become a “yeah, that happens” phenomenon, but this points to something more severe that has happened in the past 20 years: we’ve de-moralized personal finance. By this, I mean that it’s no longer embarrassing to be in debt, and that it’s no longer viewed as shameful or unethical when people take on debt that they can’t possibly repay, or take on debt with no intention of paying it back. I actually see this as a severe long-term threat to the fabric of society. Don’t get me wrong: I’m glad that there are forgiving bankruptcy laws in order to give second (and third) chances to people who fuck up… and business bankruptcy is a different affair altogether, since good-faith business failure is fairly common. What I find disturbing is that the system has become so unfair and capricious– people start out with large “student” debts because of the protection racket run by organizations that have given themselves airtight control over access to the middle-class job market, and can end up with unpayable medical bills for the “crime” of having bodies in which cells occasionally divide the wrong way– that a large number of people under 35 no longer see financial failure as a failure to keep their word, but as mere bad luck. That’s not because there is anything wrong with them as people– there isn’t– but because it accurately reflects the world that previous generations have left for them. The Boomers have created such an anti-meritocracy that this sort of de-moralization of personal finance, as brutal as its effects on the economy (see: 2008) may be, makes sense.

There actually is one way to solve the adverse-selection problem in the health insurance pool, and a host of other issues as well. The right way to impose an individual mandate is to (a) tax people and (b) give them “free” healthcare that is paid for by taxes. I prefer the single-payer strategy, but a public option that competes these shit-house private insurers into oblivion would also work. It may have the strictly rhetorical weakness of being equated with “socialism”, but it has what, at least, deserves to be an advantage in debate: it would actually work.