More fiction, and 2016

After putting out the first chapter of “The Struggles”, a novella set in Silicon Valley, I’ve had a couple of requests come in about a more serious project that I had let a few people know about. I’m hesitant to share this, but… eh, what the fuck.

The extremely tentative title (as in, I haven’t come up with a better one yet) is Farisa’s Courage. (If you hate it, read and then suggest a better one.) The concept and main character came to me in 2013-14, and I’ve finally developed enough courage of my own to give writing this story (which is much harder to tell than a satirical one about Silicon Valley) a try. I’ve got about 120 pages “done” (although that means so little in fiction, because everything must be re-done several times before it is good) and I have a few chapters that are probably ready to be shared. Unlike my Silicon Valley novella, “The Struggles”, this project is a lot closer to me, and it’ll probably take at least a couple of years. In truth, “The Struggles” is mostly a warmup round, to sharpen old tools. Farisa La’ewind’s story (whatever I end up calling it, in the end) is one that I have more emotional investment in telling right, because there’s a message in it (at least one) that’s worth getting out to the world. (That message probably isn’t obvious in the two chapters given here. Sorry about that.)

Like everything else, chapter numbers and ordering are very tentative. Below are what will probably be chapters 1 and 3.

I have no idea if any of this is any good. If it’s not, that just means the project will take more work and time. If it’s five years before I’m ready to write this work, then I’ll have to wait.

That brings me to 2016. I don’t like to talk about “resolutions” until I’ve actually achieved something toward them, but my goal for this year is to create. As for the downfall of Silicon Valley and its ridiculous “unicorns”, I believe that “my team” is starting to win… and when the easy money goes out, so will the professional omerta that’s keeping a bunch of unethical founders’ and investors’ secrets under wraps, and a bunch of currently powerful people are going to have egg and worse on their faces. I doubt that I had all that much to do with it, but I played a role and I’m happy with that. I made it acceptable for the most talented people to admit, in the open, that Silicon Valley is not a meritocracy. I’ve helped to de-legitimize the VC-funded startup scene as anything other than money laundering for well-connected children of the existing corporate elite, and I’ve made some prominent people (hi, Paul Buchheit!) very angry in doing so. That’s good. It needs to be torn down. I’m confident now, however, that the process is running on its own momentum (regardless of whether or not I had much of anything to do with it) and that I can step aside and things will go just fine.

By the same token, I’m getting older. I’m 32 now. I don’t feel like I’ve changed much, physically speaking. If anything, I’m probably in better shape. Certainly, though, I’m more aware of my mortality. What comes with that is an increasing selectivity in how I spend my time. Tearing down rotting social edifices, like Silicon Valley, is noble work. I’m just not willing for it to be the only thing that I do. On my deathbed, I don’t want “Participant in 2012-17 Silicon Valley Teardown” to be my only accomplishment. Besides, while I’ve managed to block some of these people, especially the YC partners, from getting what they want more than anything (being loved), I’m pretty sure that I’ve not made a dent in their financial well-being. They are still rich, and I am still not.

Programming is a very powerful creative skill. It gives a person orders of magnitude more ability to implement her own ideas. That, I think, is what draws so many people (including myself) into it. This makes it such a hurtful, perverse irony that the tech industry has become what it now is: a corporatized, drama-ridden hellscape driven by petty feuds and pathological risk aversion (read: half-balled cowardice) in its leadership class. The zero-sum thinking that I encounter on Hacker News and TechCrunch is something that I take as a warning of what I’ll become if I leave my heart in the startup industry for too long. I’m built to create, not to jockey for position in some macho-subordinate idiots’ game. Realistically speaking, I’ll probably be doing the latter for some time, because the corporate fucks have almost all of the money, but it’s not worth putting my heart into. Not at this age.

I don’t know where I’m going. What I do know, or think that I know, is that the reason for such widespread unhappiness in the U.S. and in the world is that we’ve deprived ourselves of the creative process, replacing it with a constant search for approval and nonsensical “metrics”. This deprivation exists in personal life (see: social media) and in the corporate world, and it all gets emptier every year. I don’t know how to solve it, and I’m thankful that I can say that I haven’t played much of a role in making the situation worse. At some point, we’ll all tire of this emptiness and get back to reading and writing and creating, one hopes. In any case, before I can solve this whole problem for the world, I need to solve it for myself.

Is it OK to enjoy the deaths of “unicorns”?

It will happen, within a year or few: the era of “unicorpses”. Startups currently valued at billions of dollars will collapse. Some will end entirely, while others will hit valuations of pennies on the dollar compared to their peak values. It’s coming. When? I honestly have no idea. It could happen later in 2016; it could be as late as 2020.

When it comes, I’ll enjoy it. I’m not ashamed of this. Yes, I generally believe, as a human being, that schadenfreude (joy in others’ misery) is a bad thing. I generally don’t wish for other people to fail. I wouldn’t laugh if I saw a stranger slip on the ice and fall. I’d help him up. That said, there are those who deserve to fail and, worse yet, there are those who must fail in order to make space for the good.

I came across this article about schadenfreude: Why Everyone Wants You To Fuck Up. The takeaway is, and I mean this with respect because it could just as easily apply to me: this guy has been in the tech industry for too long.

It’s worthwhile to distinguish several kinds of wanting someone to fail. For example, when George W. Bush became president, I was pretty sure that I didn’t like the guy, but I never found myself wishing, “Man, I hope he fucks up the country so bad that he’ll ruin our image and be judged a failure for fifty years to come”. I didn’t want him to fail at the job in a way that would hurt everyone (but, of course, he did). By the same token, I wish that he had failed at pushing through his brand of conservative values. The bad kind of wanting others to fail is when it dominates to such a degree that you’d be willing to make everyone lose in order to have them fail, or when you want them to fail because of who they are rather than what they are trying to do, when the two can be separated. (In terms of Silicon Valley personalities, they usually can’t be. If someone who beats his girlfriend is made a founder, it’s bad for the culture if he’s allowed to retain his executive position.)

For example, I want Snapchat to fail. I couldn’t care less about the product, but I hate what it says about us as a culture when an uncouth, sexist frat boy can be made into (yes, “made”, because his chickenhawking investors called the shots and are responsible for all of that) a billionaire while so many people struggle. I want merit to genuinely win, which means that Spiegel shall lose. Do I care if he’s reduced to poverty, as opposed to simply being taken out of view? I don’t. I don’t want him to have a miserable life. I just don’t want to live in a world where he’s visibly successful, because it’s unacceptably bad for the world’s values.

I’m not sure if “Silicon Valley”, the place and the culture, can survive “the Unicaust”. We might have twenty dark years after it. We might see another country, currently in obscurity, eclipse us at technology. I don’t know, so I can’t say. However, technology will come back (if it ever leaves, and it may not, since unicorns have zilch to do with true technology) as a force. I have my preferences, which involve its re-emergence as far away from Sand Hill Road as possible. That is, again, not because I have any personal hatred toward the venture capitalists who work there. I don’t even know them! But I hate the values that the current technology industry has embraced, and I look forward to seeing all of those beliefs refuted, brutally and with prejudice.

It’s necessary, before we can move forward, to wash out the founders (and, more importantly, the companies that they create) that believe “young people are just smarter”, or that open plan offices are “collaborative” instead of stifling, or that Agile Scrotums can compensate for an inability to hire top talent because of an awful culture. These people have to go into obscurity; they’re taking up too large a share of the attention and resources.

The technology industry is, of course, full of schadenfreude. One has to be careful about not falling into that mentality. We have a stupidly competitive culture, and we have an ageist culture which leads to people living with a perception of competition against everyone else. Among programmers especially, there is hard-core crab mentality, and it’s a big part of why we haven’t overcome Silicon Valley’s wage fixing, age discrimination, open-plan offices, and our lack of professional organization. If we beat each other down on Java versus Ruby, or on age (which is the stupidest source of division, like, ever) then we’re just making it easy for the slimy Damaso businessmen who’ve invaded our turf (and who run things) to divide us against each other.

However, it’s not schadenfreude to wish failure on that which is harmful. I don’t care if Evan Spiegel’s net worth, at the end of all of this, is $30 million or 17 cents or $5 million in the red. It doesn’t matter to me. He can retire with his millions and drink himself into a blissful stupor, and that’s fine with me; I don’t care. I do care about the simple fact that someone like him should never be held up as (or, as occurred on Sand Hill Road, produced into being) one of the most successful people of my generation. That’s the wrong thing for technology, for the country, and for the world. It’s not just decadent; it’s disgraceful.

We’ve been ignoring basic values and decency for too long. We’ve been allowing VCs to build companies with no ethics behind them, because they’re built to be sold or dead within five years. Here’s the thing: most of us who’ve spent time in and around the VC-funded technology industry know that it’s crooked to the core. We know that it’s in desperate need of reform and that if a few thousand executives’ jobs get vaporized in the process, that’s just fine. It’s hard to convince the rest of the world of the truth right now, though; the counter-refrain is, “It’s hard to argue with success.” I agree. It is very hard. This is why I’ll be elated when the bad guys’ success proves illusory and, at least, a large number of them collapse.

Mood disorders, cheating at Monopoly, a fundamental truth, and more on Agile Scrum

One of the common rules of ethics in Monopoly is not to hide money. While players don’t have to make great efforts to keep their cash holdings transparent, it’s not legal to stick a pair of $500 bills under the board and pretend to be nearly bankrupt in order to negotiate in bad faith, underpay when landing on the “Income Tax” square, or the like.

In the financial world, there are similar policies against “hiding money under the board”, or understating one’s performance in order to appear stronger in future years. If your trades make 50 percent in 2013, you might be inclined to report 15 percent so that, if you have a -10 percent year in 2014, you can report another positive year. Ultimately, there’s some level of performance that betrays high volatility and, if one exceeded that level, it would be socially advantageous to smooth out the highs and lows. In essence, one who does this is misleading investors about the risk level of one’s own chosen strategy.
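To make the arithmetic concrete, here’s a toy sketch in Python, with entirely made-up numbers that mirror the 50-percent/-10-percent example above. It isn’t a claim about how any real fund reports; it just shows why a smoothed series of reported returns hides the strategy’s true volatility.

```python
# Toy illustration of "hiding money under the board": bank part of a good year
# and draw it down in a bad one, so every reported year looks steady.
# Numbers are hypothetical and ignore compounding; this is not financial advice.

actual_returns = {2013: 0.50, 2014: -0.10}  # true performance by year
target_report = 0.15                        # the "steady" number we'd like to show

buffer = 0.0  # surplus performance held back ("under the board")
for year, actual in sorted(actual_returns.items()):
    available = actual + buffer
    # Report the target if we can cover it; otherwise the loss shows through.
    reported = min(available, target_report) if available > 0 else available
    buffer = available - reported
    print(f"{year}: actual {actual:+.0%}, reported {reported:+.0%}, buffer {buffer:+.0%}")

# 2013: actual +50%, reported +15%, buffer +35%
# 2014: actual -10%, reported +15%, buffer +10%
# Both years look steadily positive, which misstates the strategy's real risk.
```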

While this behavior isn’t ethical when applied to financial instruments or board games, it’s something that many people have to do on a daily basis at the workplace. When you have a good week, do some extra useful work and hide it somewhere. When you have a bad week, unload some of your stored work to compensate for the period of slowness. Of course, this discussion naturally leads into the “what’s it really like to be bipolar in tech” question, which I’ve already answered and don’t care to trudge over that frozen muck path (it sucks, people are shitty) yet again. This “hiding money under the board” skill is something you learn quickly with a mood disorder, but I think that it’s worthwhile for everyone, because it’s impossible to predict when some external problem will impede one’s performance.

Broadly speaking, people can be divided into low- and high-variance categories. Society needs both in order to function. While the low-variance people aren’t as creative, we need them for the jobs that require extreme discipline and reliability, even after days without sleep or severe emotional trauma or a corporate catastrophe. High-variance people, we need to hit the creative high notes and solve problems that most people think are intractable. Now, about 98 percent of jobs can be done well enough by either sort of person, insofar as few corporate jobs actually require the high-variance person’s level of creativity or the low-variance person’s reliability. Given this, it seems odd that whether one is low- or high-variance would have a powerful impact on one’s career (spoiler: it’s better to be low-variance). It does, because corporations create artificial scarcities as a way of testing and measuring people, and I’ll get to that.

There are also, I would say, two subcategories of the high-variance set, although the distinction here is blurrier, insofar as both patterns are seen in most people, so the distinction pertains to proportion. There’s correlated high variance and uncorrelated high variance. People with correlated high variance tend to react in similar ways to “normal”, low-variance people, but with more severity. Uncorrelated high variance tends to appear “random”. It’s probably correlated with something (if nothing else, the person’s neurochemistry) but it doesn’t have patterns that most people would discern. Oddly enough, while uncorrelated variance is more commonly associated with “mental illness”– if someone laughs at a funeral, you’re going to think that person’s “a bit off”– correlated variance can be much more detrimental, socially and industrially speaking. A person with correlated high variance is likely to nose-dive when conditions are ostensibly bad, and that’s when managerial types are on high alert for “attitude problems” and emerging morale crises and pushing for much higher performance (to detect “the weak”) than they’d demand in good conditions. “Hiding money under the table” is hiding variance, and uncorrelated variance is a lot easier to conceal because no one expects it to be there.
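For readers who want the distinction in more concrete terms, here’s an illustrative toy model of my own construction (not a clinical claim): weekly performance as a baseline, plus a response to shared conditions, plus personal noise. Correlated high variance means a large response to the shared term; uncorrelated high variance means large personal noise.

```python
# Toy model (purely illustrative): weekly performance = baseline
#   + sensitivity * shared shock (the conditions everyone faces that week)
#   + idiosyncratic personal noise.
# Correlated high variance   -> large sensitivity to the shared shock.
# Uncorrelated high variance -> large personal noise, unrelated to conditions.
import random

random.seed(0)

def weekly_performance(shared_shock, sensitivity, personal_sd, baseline=1.0):
    return baseline + sensitivity * shared_shock + random.gauss(0, personal_sd)

profiles = {
    "low variance":               dict(sensitivity=0.2, personal_sd=0.1),
    "correlated high variance":   dict(sensitivity=1.5, personal_sd=0.1),
    "uncorrelated high variance": dict(sensitivity=0.2, personal_sd=1.0),
}

for week in range(4):
    shock = random.gauss(0, 1)  # the same conditions hit everyone that week
    row = ", ".join(f"{name}: {weekly_performance(shock, **p):+.2f}"
                    for name, p in profiles.items())
    print(f"week {week} (conditions {shock:+.2f}) -> {row}")

# The correlated profile nose-dives exactly when conditions are bad (when managers
# are watching); the uncorrelated profile's swings don't line up with anything visible.
```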

Most people would agree that there’s a spectrum between low and high variance, but they wouldn’t necessarily connect it to anxiety and mood disorders like depression, panic, or bipolar disorder. I make that connection. I think that depression and bipolarity are symptoms of many different root causes that we just haven’t figured out how to separate. “Depression” is probably ten different diseases grouped by a common symptom, which is part of what makes it hard to treat. Some depressions respond very well to medication and others don’t. Some go away with improved exercise and sleep habits and meditation, while others don’t. At any rate, I think that the extreme of high variance is going to manifest itself as a mood disorder. This also suggests that mentally healthy people at, say, the 90th percentile of variance might be arguably “subclinically bipolar”, even though they wouldn’t exhibit pathological symptoms. In fact, I don’t think that that’s as far off the mark as it sounds.

People have asked me what the hardest parts of cyclothymia (a rapid-cycling, but usually mild, variety of bipolar disorder) are, and they’re actually not the classic symptoms. Depression sucks, but I haven’t had a depressive episode of longer than a week since 2013 (after a death in the family) and I haven’t had a manic episode since 2008, and I’ll probably never have one again. Number two is the panic attacks, which tend to occur because, as one gets older, pure hypomania tends to be rarer and it’s more common to have “mixed” periods with characteristics of hypomania and depression intermingled. (And what do you get when you combine hypomania and depression? Often, anxiety.) That’s pretty much where I am now. I don’t go from manic to depressive and back; I get 47-49 weeks per year of normal mood (with some anxiety, and an occasional stray panic attack) and about 20-35 days of (mostly mild) cycling, during which I can work and get about just fine, but have (mostly short-lived) “depression attacks” and insomnia and weird dreams and the familiar pressure-behind-the-eyes headache of hypomania.

I said that panic is the #2 hardest, worst thing about it. What’s #1? I call it 20 percent time, as a shout-out to Google. Silicon Valley is full of self-diagnosed “Aspies” who think that they have an autism-spectrum condition, and I think that most of them are off the mark. Yes, there’s a discernible pattern in creative individuals: extreme social ineptitude from ages 5 to 20, awkwardness (improved social ability, but a deficit of experience) from 20 to 30, and relative social normalcy (with, perhaps, stray moments of bitterness) after 30. This is a ridiculously common narrative, but I really don’t buy that it has anything to do with the autism spectrum. People with autism are (through no fault of their own) socially deficient for their entire lives. They might learn to cope and adapt, but they don’t develop normal social abilities in their mid-20s, as people along the high-variance “main sequence” seem to do. In fact, I think that most of the people with this narrative are on a spectrum, but I don’t think it has anything to do with autism. My guess is that they have a subclinical 90th-percentile variety of what is, at the 98th- and 99th-percentile, bipolar disorder. One thing to keep in mind about mental illness is that its stigma is amplified by the visibility of the extreme cases. Below the water line on that metaphorical iceberg, there are a large number of people who aren’t especially dysfunctional and, I would argue, many undiagnosed and subclinical “sufferers” who experience little more than mild social impairment.

Mood disorders are notoriously hard to diagnose, much less treat, in childhood and adolescence, and people commonly offer quips like “teenagers are just manic-depressive in general”. That’s not really true at all. Teenagers aren’t “naturally bipolar”. What is true is that children and adolescents have exaggerated moods that exist to reward and punish behaviors according to their social results. This is a specific subtype of “high variance” that is correlated (unlike the more uncorrelated high variance that is seen in mood disorders). That’s how social skills are learned. You kiss a girl, and you feel great; you’re picked last for a team, and you feel terrible. In this period of life, I’d guess that the garden-variety, not-really-bipolar, high-variance (85th to 95th percentile) people don’t seem especially mood-disordered relative to their peers. But, at the same time, they’re spending 10 to 20 percent (and maybe more) of their time in a state of consciousness where their moods are somewhat uncorrelated to social results and, therefore, acquisition of social skills is halted. That state of consciousness is good for other things, like creative growth, but it’s not one where you’ll learn others’ social cues and messages, and how to move among people, at an optimal rate. This explanation, I think, is better than subclinical autism in getting to the root of why creative people are invariably socially deficient before age 25, and why a good number (more than half, I would guess) recover in adulthood. If you’re 40, the effect of “20% time” is that you’re “socially 32”, and no one can tell the difference. If you’re 20, and your “social age” is 16, that’s a fucking disaster (or, statistically more likely, a not-fucking disaster). The age of 25 is, approximately, the point at which being 10 to 20 percent younger in social ability is no longer a handicap.

What does this have to do with Silicon Valley and the workplace? I want to be really careful here, because while I think that high-variance people (and I, obviously, am one) experience across-the-board disadvantages, I don’t want to create tribal bullshit around it. High-variance people aren’t necessarily “better” than low-variance people. There are high-variance people with awful moral character and low-variance people with great moral character. It’s important to keep this in mind, even while “the Big Nasty” among the working world’s conflicts (and, probably, organizational conflicts in general) is usually going to come down to that between the high-variance people of strong moral character and the low-variance people of bad moral character. (Lawful good and chaotic evil are organizationally inert.) In the first set, you have the “chaotic good” archetype; in the movies, you always root for these people to win because, in real life, they almost never do. They’re usually not diagnosably bipolar, but they’re certainly not well-adjusted either. They’re courageous, moralistic, and intense. In the second set, of low-variance people with bad moral character, you have psychopaths. Psychopaths aren’t even affected by the normal sources of mood variance, like empathy, love, and moral conflict.

Psychopaths are the cancer cells of the human species. They are individually fit, at the expense of the organism. Now, there are plenty who misinterpret such claims and expect them to predict that all psychopaths would be successful, which we know not to be true. In fact, I’d guess that the average psychopath has an unhappy life. They’re not all billionaires, clearly, because there are only a handful of billionaires and there are a lot of psychopaths out there. Analogously, not all cancer cells are successful. Most die. (The “smell of cancer”, infamous to surgeons, is necrosis.) Cancer cells kill each other just as they kill healthy tissue. Cancer doesn’t require that all cancer cells thrive, but only that enough cells can adapt themselves to the organism (or the organism to themselves) that they can enhance their resource consumption, reach, and proliferation– causing disease to the whole. Worse yet, just as cancer can evade and repurpose the body’s immune system toward its own expansion, psychopaths are depressingly effective at using society’s immune systems (ethics, reputation, rules) toward their own ends.

What can the rest of humanity do to prevent the triumph of psychopaths and psychopathy? Honestly, I’m not sure. This is an arms race that has been going on for hundreds of thousands of years, and the other side has been winning for most of that time. Ex Machina comes to mind (and brings negative conclusions): a movie (which I’ll spoil, so skip to the end of this paragraph if you don’t want that) that contends with some of the darker possibilities behind “Strong AI”. The three main characters are Caleb, a “straight man” programmer of about 25; Nathan, a tech billionaire with sociopathic tendencies; and an AI who goes “way beyond” the Turing Test and manages to convince Caleb that she has human emotions, even though he knows that she is a robot. She’s just that good of a game player. She even manages to outplay Nathan, the carbon-based douchebag who, remaining 5 percent normal human, can be exploited. Psychopaths, similarly, have a preternatural social fitness and are merciless at exploiting others’ missteps and weaknesses. How do we fight that? Can we fight it?

I don’t think that we can beat psychopaths in direct combat. Social combat is deeply ingrained in “the human organism”, and whatever causes psychopathy has had hundreds of thousands of years to evolve in that theater. As humans, we rank each other, we degrade our adversaries, and we form an ethical “immune system” of rules and policies and punishments that is almost always repurposed, over time, as an organ for the most unethical. Whatever we’ve been doing for thousands of years hasn’t really worked out that well for our best individuals. I think that our best bet, instead, is to make ourselves aware of what’s really going on. We can succeed if we live in truth, but this requires recognizing the lie. Is it possible to detect and defeat an individual psychopath? Sometimes, yes; sometimes, no. Beating all of them is impossible. If we understand psychopathy in terms of how it works and tends to play out, this might give us more of an ability to defend ourselves and the organizations that we create. We’ll never be able to detect every individual liar, but we can learn how to spot and discard, from our knowledge base, the lies themselves.

This takes us back to the variance spectrum. Every organization needs to rank people, and the way most organizations do it is what I call “the default pattern”: throw meaningless adversity and challenges at people, and see who fails out last. Organizational dysfunction can set in rapidly, and even small groups of people can become “political” (that is, trust-sparse and corrupt) in a matter of minutes. It doesn’t take long before the dysfunction and stupidity and needless complexity and recurring commitments exceed what some people can handle. There is, of course, a lot of luck that dictates who gets hit hardest by specific dysfunctions. On the whole, though, human organizations permit their own dysfunction because it allows them to use a blunt (and probably inaccurate) but highly decisive mechanism for ranking people and selecting leaders: whoever falls down last.

Let’s imagine a world where truck drivers make $400,000 per year. With a large number of contenders for those jobs, the entrenched and well-compensated drivers decide (in the interest of protecting their position) that only a certain type of person can drive a truck, so they create a hazing period in which apprentice drivers must tackle 72-hour shifts for the first three years. You’d have a lot of drug abuse and horrible accidents, but you would get a ranking. If you had them driving safe, 8-hour shifts, you might not, because most people can handle that workload. In the scenario above, you filter out “the weak” who are unable to safely drive extreme shifts, but you’re also selecting for the wrong thing. The focus on the decline curve (at the expense of public safety) ignores what should actually matter: can this driver operate safely under normal, sane conditions?

It isn’t intentional, but most organizations reach a state of degradation at which there are measurable performance differences simply because the dysfunction affects people to differing degrees. In an idyllic “Eden” state, performance could theoretically be measured (and leaders selected) based on meritocratic criteria like creative output and ethical reliability (which, unlike the superficial reliability that is measured by subjecting people to artificial stress, scarcity, and dysfunction, actually matters). However, none of those traits show themselves so visibly and so quickly as the differences between human decline curves at the extremes. This raises the question: should we measure people based on their decline curves? For the vast majority of jobs, I’d say “no”. The military has a need for ultra-low-variance people and has spent decades learning how to test for that (and, something the corporate world hasn’t managed, to keep a good number of the psychopaths out). But you don’t need an Army Ranger or a Navy SEAL to run a company. It probably won’t hurt, but most companies can be run by high-variance people and will do just fine, just as most creative fields can be practiced by low-variance people.

The advantage of psychopaths isn’t just that they tend to be low-variance individuals. If that were all of it, then organizations could fill their ranks with low-variance non-psychopaths and we’d be fine. It’d be annoying to be a high-variance person (and know that one would probably never be the CEO) but it wouldn’t make our most powerful organizations into the ethical clusterfucks that virtually all of them are. The psychopaths’ greater advantage is that they aren’t affected by dysfunction at all. When an organization fails and unethical behavior becomes the norm, the high-variance people tend to fall off completely while the decent low-variance people decline to a lesser degree– they’re disgusted as well; it just has less of an effect on their performance– but the psychopaths don’t drop at all. In fact, they’re energized, now that the world has come into their natural environment. If they’re smart enough to know how to do it (and most psychopaths aren’t, but those who are will dominate the corporate world) they’ll go a step further and drive the environment toward dysfunction (without being detected) so they can have an arena in which they naturally win. Social climbing and back stabbing and corporate corruption deplete most people, but those things energize the psychopath.

We come around, from this, to concrete manifestations of damaged environments. In particular, and a point of painful experience for software programmers, we have the violent transparency of the metrics-obsessed, “Agile Scrotum”, open-plan environment. This is the environment that requires programmers (especially the high-variance programmers who were attracted to the field in search of a creative outlet) to break the rules of Monopoly and of financial reporting, and hide “money” under the table. Agile Scrotum and the mandates that come out of it (“don’t work on it if it’s not in the backlog; if you must do it, create a ticket and put it in the icebox”) demand that people allow visibility into their day-to-day fluctuations to a degree that is unnecessary, counter-productive, and downright discriminatory.

Agile Scrotum also hurts the company. It makes creative output (which rarely respects the boundaries of “sprints” or “iterations” or “bukkakes” or whatever they are calling it these days) impossible. I’ve written about the deleterious effects of this nonsense on organizations, but now I’m going to talk about its effects on people. When the new boss comes in and declares that all the workers must whip out their Agile Scrotums, for all the world to see and pluck at, the best strategy for a high-variance person (whose day-to-day fluctuations may be alarming to a manager, but whose average performance is often several multiples of what is minimally acceptable) is, in my mind, to hide volatility and put as much “money” under the table as one can. Achieve something in July when your mood and external conditions are favorable, and submit it in August when you hit a rough patch and need some tangible work to justify your time. Yes, it’s deceptive and will hinder you from having a healthy relationship with your boss, but if your boss is shoving his Agile Scrotum down your throat (wash out the taste with a nice helping of user stories and planning poker) you probably didn’t have a good relationship with him in the first place, so your best bet is to keep the job for as long as you can while you look for a Scrum-free job elsewhere.

I promised, in the title, a “fundamental truth” that would be useful to the “neurotypical” people with no trace of mood disorder, and to the low- and high-variance people alike, and now here it is. This steps aside from the transient issues of open-plan offices and the low status of engineers that they signify. It’s independent of the current state of affairs and the myriad dysfunctions of “Agile” management. It’s probably useful for everyone, and it’s this: never let people know how hard you work, and especially don’t let them know how you invest your time.

People tend to fall into professional adversity, as I’ve observed, not because their performance is low or high, but because of a sudden change in performance level. Oddly enough, going either way can be harmful. Sudden improvements in performance suggest ulterior motives, transfer risk, or a negative attitude that was held just recently, in the same way that a sudden improvement of health might upset a jealous partner. If you “ramp it up”, you’re likely to expose that you were underperforming in the past. Likewise, the people most likely to get fired are not long-term low performers, because organizations are remarkably effective at adapting to those, but high performers who drop to average or just-not-as-high performance. Most managers can’t tell who their high and low performers actually are, because their people are all working on different projects, but they can detect changes, especially in attitude and confidence, so you’re in a lot more danger as an 8 who drops to a 6 than as a 3 who’s always been a 3. This is, of course, one of the reasons why it’s so awful to be a high-variance person in a micromanagement-ridden field like software. As a strategic note, however, I think that it’s valuable for low-variance people to understand this, too. You don’t want to be seen as a slacker, but you don’t want people to see you as “a reliable hard worker” either. People with the “hard worker” reputation often get grunt work dropped on them, and can’t advance. What you really want, if you’re gunning for promotions, is for people to look at you and see what they value, which will not always be what you value. Some people value sacrifice, and you want them to see you as dedicated and diligent (and so hard-working and busy that you can’t be bothered to take on the kinds of sacrificial duties that they might otherwise want to foist upon you, in order to even out the pain load). Other people (and I put myself in this category) value efficiency, and you want them to see you as smart, creative, and reliable on account of sustainable practices and personal balance.

Achieving the desired image, in which people see their own values reflected in you, isn’t an easy task. I could write books on that topic, and I might not even solve anything, because there’s a ton of stuff about it that I don’t know. (If I did, I might be richer and more successful and not writing this post.) I do know that control of one’s narrative is an important professional skill. How hard one actually works, and what one’s real priorities are, is information that one needs to keep close, whether one’s working 2 hours per day or 19 hours per day. (In my life, I’ve had many spells of both.) People who can control their own narratives and how they are perceived generally win, and people who can’t control their stories will generally have them defined by others– and that’s never good. One might sacrifice a bit of reputation in order to protect this sort of personal information, and I can already feel a bristling of that middle-class impulse to desire “a good reputation” (not that it stops me from writing). Here’s what I think, personally, on that. “Good” and “bad” reputations are transient and the distinction is sometimes not that meaningful, because what seems to matter in the long run (for happiness and confidence, if not always agreeability and of-the-moment mood) is control of one’s reputation. Even personally, that’s a big part of why I write. I’d rather have a “bad” reputation that I legitimately earned, because I wrote something controversial, than a “good” reputation that was defined for me by others.

What irks me about Silicon Valley’s culture and its emphasis on micromanagement is not only the meaningless (meaningless because what is made transparent has nothing to do with questions of who is actually doing the job well) violent transparency of open-plan offices and Agile Scrotums. That stuff sucks, but it bothers me a lot more when people seem not to mind it. It’s like being in a strategy game where the lousy players add so much noise that there’s no purpose in playing– but not being allowed to leave. For example, I’ve always argued that when a manager asks for an estimate on how long something should take, one should ask why the estimate is being requested. It’s not about a desire to put off planning or to hide bad work habits. It’s about parity. It’s about representing yourself as a social equal who ought to be respected. Managers may not understand Haskell or support vector machines, but they know how to work people, and if you give them that information for free— that is, you furnish the estimate without getting information about why the estimate is important, how it will be used, what is going on in the company, and how performance is actually evaluated– then they’re more likely to see you as a loser. It’s just how people work.

Likewise, if someone has day-by-day visibility into what you are working on, and if that person knows on a frequent basis how hard you are working (or even thinks that he knows), then you are being defeated by that person, even if your work ethic is admirable. Being visible from behind, literally and metaphorically, while working signals low status, both for the work and for the person doing it. All of this is not to say that you shouldn’t sometimes share, on your terms, that you worked a 90-hour week to get something done. On the contrary, proof of investment is far more powerful than talk alone. At the same time, it should be you who decides how your narrative is presented, and not others. The passive transparency of an Agile shop– the willingness of these programmers to have their status lowered by a process in which they give far too much up and get nothing in return– makes that impossible. When you buy into Agile Scrotum fully, you’re also implicitly agreeing that contributions that aren’t formally recognized as tickets don’t matter, and allowing your work output to be negatively misrepresented, possibly without your knowledge, by anyone with access to the tracking software. Isn’t that just putting a “Kick Me” sign on your own back?

I am, for what it’s worth, well-traveled. I’ve seen a lot of human psychology, and I’ve learned what I would consider a sizable amount. One recurring theme is that humans expect a certain logical monotonicity that doesn’t hold in practice. Logically, if A implies B and (A and C) is true, then B is true. In other words, having more information (C) doesn’t invalidate concluded truths. In the human world of beliefs and near-truths and imperfect information, it’s not that way at all. Of course, there are valid mathematical reasons for this. For example, it could be that B has a 99.99999% chance of being true when A is true, so that A “effectively implies” B, while C has an even stronger negative effect and, together with A, implies not-B. Then A almost implies B, but (A and C) definitely implies not-B. More commonly, there are computational reasons for non-monotonicity. “Flooding” a logical system with irrelevant facts can prevent valid inferences from ever being made, because computation time is wasted on fruitless branches, and flooding an imperfect logical system (like a human knowledge base) can even populate it with non-truths (in technical terms, this is called “full of shit”). The violent-transparency culture of open-plan offices and Agile is based on phony monotonicity. First, it assumes that more information about workers is always desirable. In human decision-making, more information isn’t always good. Shitty, meaningless information that creates biases will lead to bad decisions, and get the wrong people promoted, rewarded, punished and fired. Second, it ties into a monotonicity that specifically afflicts high-variance people (and advantages psychopaths), which is the perception that small offenses to social rules betray large deficiencies. That’s also not true. There’s really no connection between the meaningless unreliability of an office worker who, after a long night, shows up at 9:05 the next day, and the toxic ethical unreliability that we actually need to avoid.
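Here’s a tiny worked example, with made-up probabilities, of the non-monotonicity described above: conditioning on A alone makes B nearly certain, while conditioning on A and C together makes B nearly impossible.

```python
# Worked example (numbers are invented) of non-monotone inference:
# P(B | A) is nearly 1, but P(B | A and C) is nearly 0, so learning the
# additional true fact C reverses the conclusion.

p_c_given_a    = 1e-6       # C is very rare even when A holds
p_b_given_a_c  = 0.001      # when A and C both hold, B almost never does
p_b_given_a_nc = 0.9999999  # when C doesn't hold, A "effectively implies" B

# Law of total probability: P(B | A) = P(B | A,C) P(C | A) + P(B | A,~C) P(~C | A)
p_b_given_a = (p_b_given_a_c * p_c_given_a
               + p_b_given_a_nc * (1 - p_c_given_a))

print(f"P(B | A)    = {p_b_given_a:.7f}")    # ~0.9999989: A almost implies B
print(f"P(B | A, C) = {p_b_given_a_c:.7f}")  # 0.0010000: A and C imply not-B
# Monotone logic says adding a fact can't undo a conclusion; probability says otherwise.
```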

I have, between the minor unpleasantness of a mood disorder and the major unpleasantness of the software industry, seen a lot of crap. I use the word crap specifically for its connotation of low quality, because let’s be honest about the problem of specifically low-quality information and what it does to humans in large amounts. Agile Scrotum generates a lot of information about who’s working on what and at what speed, and that information will reshape the team’s pecking order; and, guess what, it’s all crap. It takes the neurological variance that has to accompany creative high performance, because Nature or God couldn’t figure out any other way, and turns it into meaningless signals. The same holds for open-plan offices, which bombard engineers and management both with meaningless information (unless the detail of when each person goes to the bathroom is somehow important) that is then held to reflect on individual performance and value within the group. Again, it fills the office with piles of low-quality information and, soon enough, the only thing that anyone can smell is crap. This is one thing that mood disorders make a person better at than most people: detecting crap. When a malfunction of my own wetware tells me that everything I’m doing is pointless and that I should just crawl in a hole and die, I can listen to it, or I can say, “That’s crap” and move on. When an adverse situation throws me for a loop and I get anxiety, I can recognize it for what it is and get past it. (I may have a panic attack, but I don’t get the emotional drama that seems to afflict the neurotypical, because I recognize crap that comes from my own mind. If someone cuts me off in traffic and I still have anxiety or anger, five minutes later, that’s on me.) I’ve survived by recognizing flaws in my own cognition with a precision that 95 percent of people never have to develop. This also makes me preternaturally keen at spotting (less intense, but longer-lived) flawed cognition in others. In other words, it makes me great at cutting through crap.

So now I’m going to speak of (and possibly to) the software industry and try to get through a lot of crap. People learn to program (or to write, or to paint) in order to implement ideas. Sometimes we want to implement our ideas, and sometimes we want to implement good ideas regardless of whose they are. I think that it’s useful to have a healthy balance between the two: exploring your own ideas, and spending time with others’ ideas that (one hopes) have a higher-than-baseline chance of actually working. Many of us (myself included) were drawn into the craft for the creative potential. So, when I see a shop using Agile Scrum, the only question I can ask is, what the fuck happened? This macho-subordinate ceremony isn’t going to make me better at my job. It’s not going to teach me new things about programming and computation in the way that Haskell did, it’s not going to improve my architectural or aesthetic sense, and a bunch of psych-101 bullshit designed to trick me into liking being a subordinate is certainly not going to make me a better leader. None of it has any value to me, or to anyone, because it’s all crap.

People with my condition live 10 to 15 years less than an average person. That’s statistical and, as a non-drinker and non-drug-user who hits the gym every morning and hasn’t had a major problem with the condition in years, I’m pretty sure that I’ll beat the odds. I’m treated, and I’m as “sane” as anyone else, but I also have the (extremely low probability, high impact) sword of a recurrence of 200x Crazy hanging over my head. I’m aware of my mortality. Even a single minute spent learning how to write fucking user stories is a minute not spent learning something that actually matters. Or relaxing and working on my health. Or at the gym. Or sleeping. Or writing. I factually don’t have the time to learn and practice that garbage, but given that I’m not really any more or less mortal than anyone else, I can’t really justify the idea that anyone else should have to do it, either. If they’re really such poor programmers that the humiliation of Agile Scrotum “makes sense” for them, then why not convince them to work on the skills that they lack, instead? It’s crap and, rather than holding on to it, we should throw it away. Our industry is already too cluttered. Even without the clutter, there would be more things genuinely worth working on than there is time and talent to do them. With the clutter, it’s hard enough to get started on just one. We need to get serious about ourselves, our relationship to the world, and computer science and technology themselves. This industry is full of people waxing philosophical about “the Singularity” and biological immortality, but are we really ready to talk about immortality if some of us think that it’s okay to spend time on “backlog grooming” meetings?

That, like much that I have written, took balls to finish. By this, I mean metaphorical “balls” because not only do male sex organs fail to be a prerequisite for courage, but they’re not even a correlate of it, in my experience. Anyway… wanna know what it didn’t require? It didn’t take an Agile Scrotum.

Big picture first

Why are the ethics in the software industry so bad? Why do people like Evan Spiegel get made– and, make no mistake, most of them are produced by their backers, merit having nothing to do with it– into billionaires? And why are the products made by the software industry so often of low quality? Why do we, despite practicing a craft that takes 10-20 years to get any good at, tolerate a culture of age discrimination? These are often treated as separate questions because, as engineers and problem solvers, that’s something we like to do: take problems apart and solve them separately. In mathematics, that’s a powerful approach, because a proof is, in practice, a derivation of an often not-obvious result through a chain of simple and evidently true inferences. In human problems, this approach often falls short, because the issues are so interconnected that one problem, typically, can’t be solved in isolation from all the others.

For example, I could write 100,000 words about why open-plan offices, to take one problem in our industry, are bad. They damage productivity, they’re bad for worker health, they encourage rotten cultures, they increase the incidence of sexual harassment, and they can trigger or worsen anxiety disorders, even in top performers with “nothing to hide”. Worse yet, I can illustrate why they are unlikely to go away any time soon. It’s not about these offices being cheaper (they are, but not by enough to justify a productivity loss that is orders of magnitude larger than what is saved) but about maintaining a certain image, even at the expense of health and productivity. In particular, it’s more important that the VC-funded startups look productive to their investors than for them to actually be productive, given the high likelihood that any product that is built will be scrapped when the company is acquired. (The non-VC-funded companies are following suit on this horrible trend, but the cultural pace here is set by that cesspool called “Silicon Valley”.) An open-plan programmer isn’t hired for his coding ability, which is rendered irrelevant by the awful environment, but to be a stylish piece of office furniture. Is it any surprise, then, that we’d also have an ageism problem and a culture of sloppy craftsmanship? Ultimately, though, people know that open-plan offices are bad. The 100,000 words I could spill on the topic wouldn’t make a difference. We don’t need to persuade people or speak “truth to power”, because those in power already know what the truth is. We’ll probably have to fight them.

Google’s motto, “Don’t Be Evil”, comes to mind. Of course, it’s hilarious that this would be a corporate motto of a company that uses stack-ranking to disguise as “performance”-based firings what other companies would own up to as business-based layoffs, and that the origin of this slogan would be Paul Buchheit himself. Never mind that, though, because I think that it’s actually a great slogan for a company like Google, and here’s why. It’s probably the most ineffective three words in the English language. When you’re staring down evil, you can’t persuade it not to be what it is. Telling evil not to be evil is like telling a tire fire to stop polluting or a murderer that what he’s about to do is a bad thing. It won’t work. Likewise, fifty more essays on the harm to health and product inflicted by open-plan offices won’t make a difference in a world where founders are convinced that their programmers are more useful as office ornaments than for their problem-solving and coding ability.

So why is the VC-funded startup ecosystem so ugly? Is it just that there’s a lot of money in it? This I doubt, because there’s a lot more money in finance, where the ethics are (quite honestly) better. I think the answer is that, at the “big picture” level, it’s impossible to separate what we do from how we do it. I used to think otherwise. I once believed that the macroscopic social value of the work and the micro-level decency around how people do the work were orthogonal. To the credit of that view, there are many organizations with obvious positive macroscopic value but awful behavior internally. (Don’t get me started on the non-profit sector and, especially, the way non-profit workers are treated by their managers.) The False Poverty Effect (of powerful people behaving unethically because their low incomes, relative to their peer group, leave them feeling martyred and entitled) is as pronounced in non-profits as in software startups. All of this said, I’m increasingly concluding that positive macroscopics are a necessary (if not sufficient) condition for a culture of decency. In other words, we can’t fully understand and oppose organizational rot without concern for the specific matter of what that organization does. So, let’s talk about macroscopics. What exactly do we, in the software industry, do? Most of us, to be blunt, haven’t a clue what our effect on society as a whole really is.

What can we do? Quite honestly, the answer is “anything”, because it’s not software that distinguishes us but abstract problem-solving ability. (It’s important to keep this in mind, and not get into tribal scuffles with hardware engineers, academics, research scientists, and other high-power allies that we’ll need in order to overthrow our enemies and establish a true intellectual meritocracy.) Software is one mechanism that we use often because, right now, it works very well, but software itself isn’t what’s interesting to me. Solving problems and implementing new ideas is what’s interesting. At any rate, I’ve never worked in a company where I couldn’t do the executives’ jobs, often better than the people holding those positions, but it’s rare that I’ve met a business executive who could do my job. That’s the sociological reason why software engineers “have to” be kept down with processes like Scrum, open-plan offices, and an age discrimination culture that shunts them away once they’re halfway good at what they’re doing. The smartest subsector (the “cognitive 1 percent”) of the subordinate caste, whether we’re talking about literate Greek slaves in ancient Rome or software programmers today, has always been a problem for the patricians and owners. Okay, so let’s narrow the question. What can we do with software? Even to that, the answer is a pretty large set of things. Now, what are we doing with software? Curing cancer? Making cars safer or houses more energy-efficient? Nope. Our job, almost invariably, is to enable businessmen to unemploy people. That’s what we do, and that’s why these VCs and founders pay us.

I’m not going to argue that cost cutting is inherently “evil”, because it’s not. Most of the jobs that have disappeared in the past 40 years are unpleasant ones that few people would really want. The problem isn’t that those jobs are gone, but that the people displaced haven’t been given the opportunity or the resources necessary to train up into better roles. The problem with cost cutting is that the people who have both the intelligence to do it right (and genuinely improve efficiency, rather than just externalizing costs to someone else) and the moral decency to make sure that the returns are distributed equitably, including to the people who are displaced, are vanishingly rare. For every person who has the skill to cut costs in a way that has everyone winning, there seem to be 199 who are just moving costs around and hurting the people who don’t have the power to fight back, and it’s those 199 to whom most programmers answer.

Cost cutting, in my view, is only valuable in the context of it being a necessary prerequisite for making newer and better things. Doing the same thing more cheaply isn’t intrinsically useful, unless the resources freed up are spent in a beneficial way. Moreover, often the cutting of economic costs is a minor side benefit achieved in the course of what actually matters: cutting complexity. Complexity takes a number of forms, most undocumented: power relationships, processes and ceremonies, expectations and recurring commitments. Most of these are difficult to measure in the short term. Unfortunately, the inept cost cutters out there tend to be cost externalizers who increase the total complexity of the system they inhabit. For example, a technology startup might decide to hide a layoff by instead imposing stack-ranking and removing “low performers”. It sounds like a great idea to everyone, insofar as everyone has their own opinions of who those low performers are. However, the stack ranking and the machinery around it– the long “calibration” sessions in which managers horse-trade and play elaborate games against each other with their reports’ careers– create incentives for political behavior and make the company more complex (in a uniformly and objectively undesirable way) while not cutting costs in any meaningful sense. In the long term, complexity increases while resources decline, and the result is a brittle system, prone to catastrophic failure.

So what is evil, in the context of our moral responsibility as technologists? And what is good? There’s a very simple and, in my view, correct answer. Technology has a simple job: to eliminate scarcity. That’s our moral calling. It’s not just to “write good code” or make processes more efficient. It’s to solve problems and chip away, almost always incrementally, at the face-eating demons of economic scarcity and class subordination. Otherwise, all this work that we do in order to understand computers and how to solve difficult technical problems would just be pointless mental masturbation.

Of course, technology can be used toward opposite effects: to create new scarcities and enforce class subordination in ways that weren’t possible, decades ago. When I Google myself, it’s not hard to come upon the various events that have occurred in my life specifically because certain entrenched interests now see me as a threat, and have attempted to destroy my reputation. The social and technical changes brought about by the Internet are mostly good, for sure, but the bad guys know how to use these tools as well.

We need to disabuse ourselves of the notion that we can code or “merit” our way out of this. If we continue to allow our efforts to be directed and deployed by a ruling class with bad intentions, we’ll continue to suffer the consequences on the micro level as well. They’re not going to treat us better than they treat the rest of society; it just doesn’t work that way. Consequently, if we want to rid ourselves of the open-plan offices and the “Agile Scrotum” micromanagement, we can’t just focus on the micro-scale battles about programming languages and source code style. It’s not that those issues aren’t important to how we do our work; they are. That said, the one thing that becomes increasingly clear as I get older is how interconnected everything is. We can’t expect “a good culture” in an industry so willing to deploy our talents toward the creation of artificial scarcities. If we work for evil, our work will be evil and we will experience evil for every day that we do so.

Of programmers and scrubs

I’m currently working through the book, Playing to Win, by David Sirlin. It’s excellent. I’ll probably buy a copy, and I’d recommend this book even to people who aren’t especially interested in competitive gaming, because it’s not really about the (tabletop and video) games themselves so much as the cultures that they create, and the lessons from those cultures are more generally applicable to human organizations.

One of the archetypes that Sirlin describes is the Scrub, which is a player who insists on rules that aren’t in the game, and views certain effective strategies as “cheap”. Scrubs aren’t always unskilled players, but they’re rarely the most effective ones, because they insist that only certain types of wins are acceptable. This prevents them from exploring all possible strategies, and it also leads them to become unduly emotional when they face opponents using strategies that they view as “cheap”.

Sirlin doesn’t advocate cheating or bad sportsmanship, but he argues that competitive games are best played to win, not toward other ends that, at high levels, will leave one unable to compete. That is, your goal in a game of Magic: the Gathering isn’t to build up a 23/23 creature with enchantments, or to avoid taking any damage, or necessarily to launch a 30-point Fireball. It’s to bring your opponent to a losing condition (0 life or no cards to draw) while preventing her from doing the same to you. That’s it. You do not cheat, and you should not be a poor sport, but you play the rules as they are, not as they might be in some idealization of the game (which is, quite often, actually an inferior game).

I’ll confess it: I used to be a Magic scrub. When I was a teenager, I loved creatures and fast mana and beefy monsters (Force of Nature! Scaled Wurm! Shivan Dragon!), and this was in the 1990s, when creatures were so under-powered that top tournament decks used few or none. I did all sorts of flashy scrub stuff like use Channel to bring out a second-turn Scaled Wurm, which is terrible because you’re giving up 7 life for something that can be killed with two black mana. I did have a blue deck (blue, with its manipulative focus and being the most “meta” of the colors, was the most powerful color at the time, and probably is still) but I rarely played it. I was all about the green and red: 10-point fireballs, 8/8 tramplers, and so on. I viewed reactive strategies, such as counterspell decks, land destruction decks, and hand destruction decks, as “cheap” and borderline unethical. (Stop fucking with my cool shit! Build up your own cool shit!) I lost frequently to better deck-constructors using “boring” strategies and it always made me angry.

I was in the wrong, of course. Land destruction (i.e. resource denial, by which I mean rendering your opponent unable to cast spells by demolishing the most important resources) is a perfectly acceptable way to win the game. Think it’s broken? Then find a way to defend against it, or build your own deck and exploit it. There are some things not to like about Magic (well, only one big one: the tie between in-game capability and real-life finances) but the existence of reactive decks isn’t one of them.

What makes a scrub? In my view, scrubs often want to play multiplayer solitaire. They want to build beautiful edifices, and not have interference from the opponent, except perhaps at the grand end of the game. They want a game of skill (as they define it) rather than a messier game of strategic interactions. They aren’t much for the competition aspect of the game; they want to build a 13-foot tower and have their opponents come up to 12 feet, not face an opponent who (legally according to the game, and therefore ethically) reduces theirs to zero and builds a 1-foot tower. Strategies they find unacceptable infuriate them, and they’ll often complain about being beaten by “weaker” players. If they get left out of Monopoly negotiations and are too late to get their share of the game’s 12 hotels, they get very angry and accuse the other players of having “ganged up on” them, when that’s almost never what actually happened. What usually happened is that the scrub failed to play the negotiation game that is far more important than the roll-and-move dynamic that merely sets that negotiation’s initial conditions.

The scrub, at his core, takes a simplified version of the game, embedded within the actual game that is described by the rules, and declares it better. To be clear about it, sometimes (but rarely) the scrub is right. Some games have such bad designs that they fail on their own terms. However, many more games (e.g. Monopoly) succeed on their own terms but might not deliver the kind or quality of play that is desired. That doesn’t mean that they’re awful games, but only that they fall short of a specific aesthetic. It’d be hard to say that Monopoly is a terrible game, given its success and longevity, but it doesn’t satisfy the “German style” aesthetic to which most tabletop gamers (including myself, to a large degree) subscribe. All of this said, in most cases, the “scrubbed” version of the game, in which otherwise legal moves are banned, is less interesting than the original one.

So what do scrubs have to do with programming and the workplace? Well, the first question is: what is the game? The answer is perhaps not a satisfying one: it depends. The objective might be:

  • to write a program that solves a problem using as little CPU time as possible.
  • to write a program that solves a problem, and with maintainable source code.
  • to produce a solution quickly in order to fend off political adversity.
  • to choose a problem that is worth solving, and solvable, then solve it.

It’s rarely spelled out what “winning” is, but there are truths that make programmers unhappy. Foremost is the fact that good code is only a small factor in determining which program or programmer wins. Most of us live in the real world and have to play the coarse, uncouth, political game in which the objectives don’t have much to do with code quality. The ultimate goal is usually (for an individual or a firm) to make money, making the proximate goal success in the workplace. And when we get trounced by “inferior” players who write shitty code or who can’t even code at all, it makes us angry.

Now, I will argue that, from a game-design perspective, the modern workplace game is poorly designed. One notable and common flaw in a game is a king-maker scenario, in which players incapable of winning can, nonetheless, choose who the winner is. (This is why many house rules for Monopoly forbid selling properties below the printed price: to prevent losing players from selling everything at $1 to the person who angered them least.) Whether this is a flaw or an additional social element of play is a matter of opinion, but it’s generally considered to be a sign of poor design. The workplace game is one where the inferior players have a major, game-killing effect, and those who dominate it are those who are most skilled at manipulating and weaponizing the inferior players. This is an ugly process, and when thousands to millions of dollars and health insurance are at stake, people have a justified dislike of it. The game design is shitty and the game is not fun for most people, and the people who find it fun tend to be the worst sorts of people… but these are technical notes. The sin of the scrub is seeing the rules as one thinks they should be rather than as they are. In this way, most of us are scrubs, and it often blinds us to what is happening. When we get beaten by “inferior” programmers (“enterprise Java” programmers and management aspirants who spend ten times as much time on politics as on coding) we’re nonetheless getting beaten by superior game-players. We out-skill them in the part of the game that we wish mattered most– writing high-quality software– but they’re superior at turning something that we view as an ugly, disrupting landscape feature– the hordes of stupid, emotional assholes out there who utter phrases like “synergize our timeboxed priorities” and use “deliver” intransitively in between grunts and slobbering– into a tailwind for them and a headwind for us.

I catch myself in this, quite often. Let’s take programming languages. Java and C++ are ugly, uninspiring, depressing languages in which, not only does code take a long time to write, but the code is often so verbose that it’s usually unmaintainable. I would like to be able to say that the person who copy-pastes his way to a 20,000-line Java solution, as opposed to my elegant 400-line Haskell program, is “cheating”. For sure, I can make a case that my solution is better: easier to maintain, more likely to scale, less prone to undesirable entanglements and undocumented dependencies. On the other hand, to say that the “enterprise Java guy” is unethical (and, yes, I’ve heard language warriors make those kinds of claims) is to let my Inner Scrub talk. It’s possible that my beautiful Haskell program is an 8/8, trampling Force of Nature and the Java program is a Stone Rain (land destruction). Well, sometimes land destruction wins. Does it “deserve” to, in spite of my insistence that it’s an “ugly” way to win? Well, my opinion on that doesn’t matter. The rules of the game are what they are, and as long as people don’t cheat, what right do I have to be angry when others use different strategies?

Of course, some might find my depiction of office politics as “a game” to be sociopathic. The stakes are much higher, and people get killed by software failures, so it seems evident that the anti-scrub “if the rules don’t say it’s illegal, you can do it” attitude is cavalier at best and reckless at worst. I don’t have an easy way to address this, and here’s why. Not all things in life are, or should be, competitive games, yet the majority of our society’s most important institutions are economic game-players. Is that morally right? I don’t know. I don’t know how to make it something different without risk of making it worse. It’s also unclear what “the rules” are and how we define cheating. Is lying on one’s CV simply another strategy, with attendant risks and penalties, or should people who do it get out-of-game punishments, as card cheaters do? I think most would agree that quack doctors deserve jail time, while those who harmlessly upgrade titles to improve social status (e.g. Director to Senior Director, not Associate to CEO, the latter being genuine dishonesty) or massage dates (to cover gaps) probably don’t even deserve to be fired. It gets murky and complicated, because business ethics are so poorly defined and especially because the most successful players tend to be those who bend the rules as far as they can without formally breaking them. We can argue “outside of the system” that many of these people are unethical, and therefore indict the system. We can say that a game like office politics doesn’t deserve to have its outsized impact on a person’s income and professional credibility– and I, for one, would strongly agree with that. What we can’t do is deny that office politics is a game and (to rip off The Wire) “the game is the game”.

One common flash-point of scrub rage is the “performance” review. Politically unsuccessful high performers often get negative reviews, to their chagrin. Perhaps this is because they’re used to school, which is far more meritocratic than the office. I’ve definitely gotten poor grades from teachers who liked me and good grades from those who didn’t, because academic grades are mostly based on actual performance. Young professionals often expect this to continue into adulthood, and become furious when their “performance” reviews reflect their political standing rather than what they actually did. (“You can say you don’t like me, you can even fire me, but don’t you fucking dare call me a low performer!”) They take it as a hit to their pride and an injustice rather than as what the performance review actually is: a feature of the game that was used against them. Workplace scrubs, to their credit, tend to be the best workers. They “perform” very well and expect “the meritocracy” to declare them the winners, and when the prizes are given to these other people who often “performed” poorly but were superior at manipulating others, they get angry.

Programmers, often, are the worst kinds of scrubs. Like Magic scrubs who prefer to play multiplayer solitaire (look at my 8/8 trampling Force of Nature!) over directly competing (and facing an opponent who directly competes with them, by countering key spells and denying resources) they often just want to be left alone and kept away from “politics”. They want to play multiplayer solitaire, not deal with adversity. This is understandable, but the adversity doesn’t go away.

One example of this is the extreme professional omerta that afflicts the tech industry. Programmers are terrified of speaking up about bad practices by employers, and not without reason: those who have blown whistles often find it harder to get jobs. But who enforces the blacklists? Sadly, it’s often other programmers: the Uncle Toms who call the whistleblowers “not team players” or “rabble rousers”. And why? Employers and employees aren’t always in adversity, but often they are (and it’s usually the employer’s fault, but that’s another debate). To many, this truth is deeply unpleasant. The multiplayer solitaire players hate being reminded of it. They hate being told that they’d make three times as much working as Java programmers (because they could change jobs every year and create bidding wars) as they would writing objectively better code in Haskell (where the community is small and changing jobs every 12-18 months isn’t an option). They hate being told that the quality of projects they get has more to do with political favor than with any definition of “merit”, and that they ultimately work for people who are about as fit to evaluate their work as I am to perform surgery. They want to ignore all of the unpleasantness and focus on “the work” as they’ve defined it, and on the game under the rules that they wish were the actual rules. When others blow whistles, they’re angered by the interruption and grab the wrenches that they’ll beat the messengers with.

Finally, we can’t cover this topic without discussing collective bargaining. Scrubs don’t enjoy losing, and they’re not always inept players, but the motto of the scrub is, “I don’t want to win that way.” Of course, if “that way” is cheating, then this is the only morally correct position; but often, “that way” includes strategies that are well within the rules. As a teenage Magic player, I didn’t want to win through land destruction or counterspells, so I deemed those to be “unacceptable” ways to win (and consequently lost against players who knew what they were doing). Collective bargaining admits that the relationship between employers and employees is often adversarial and almost always at risk of becoming such. Since our work is viewed by those outside of our craft as a commodity, it mandates that commoditization happen in a way that is fair to all parties. Further, it renders us more resistant to divide-and-conquer tactics. However, a great number of Silicon Valley programmers bristle at the thought of creating any organization for ourselves whatsoever. Their response is, in essence, “I don’t want to win that way.”

I, too, would rather win in the classical “clean” way, which is by becoming a great programmer and writing excellent code that solves important problems. I’d love to live in a world in which employers and management had the best interests of everyone at heart. Unfortunately, we don’t live in a world where any of that is true. In the world that actually exists, with the rules as they actually are, we are losing. Technology is objectively run by the wrong people. We render obscene amounts of value to a class of overpaid, under-skilled ingrates that treats us as less than human, and our aversion to “getting political” keeps us from doing anything about it. Our aversion to what we view as a less-glorious path to victory leads us to a completely inglorious loss.

If I were to write a New Year’s Resolution for the 20 million programmers out there, it would be this: let’s stop being scrubs, and let’s start actually winning. Once we’re on top, our opinions about beautiful code and technical culture will start to matter, and we can fix the world then.

Things that failed: the individual mandate for health insurance

The Affordable Care Act (“Obamacare”) quite possibly did more good than harm, and it was enacted with good intentions, but it hasn’t caused health insurance premiums to decline. Instead, they’re going up far faster than inflation. This probably isn’t surprising to anyone: the Satanic Trinity (healthcare, housing, and tuition expenses) has been an exponentially growing choke pear in middle-class orifices for decades. However, it wasn’t supposed to happen.

It’s time to point out a failure: the individual mandate, or the requirement that everyone purchase health insurance. I predicted that it would fail. The problem with health insurance isn’t just one thing– a litigious society, lobbying and corruption and corporate greed, poor coverage, an artificially low supply of doctors due to AMA chicanery, overspending by drug companies and (increasingly) hospitals on marketing– but an “all of the above” problem. Shit’s complicated, and a legal requirement that people buy a shitty product doesn’t, all else being equal, make the product less shitty. It makes it more shitty and far more expensive. Add a $1,000 per year penalty for being uninsured and, guess what, health insurance premiums are going to increase by $1,000. This is what happens in practice, even though the theory says otherwise. According to theory, individual mandates bring premiums down by removing adverse selection from the insurance pool. Without an individual mandate, the young and healthy opt out (especially with the ridiculous premiums that exist now) and the sick buy insurance. This makes premiums high. The goal of an individual mandate is to make the insurance pool national and, because healthy people expected to have low health costs are now included, reduce the overall premium.

Despite the high premiums, health insurers offer shitty products. The “cover everything” plans are gone, and people who get seriously sick are going to be paying a large percentage of the costs out of pocket. Why does this exist? The truth about our medical system is that it bears a strong resemblance to the American and European witch hunts of the 15th to 18th centuries. The primary motivation for witch hunting was economics: if a person (usually an elderly woman) was judged to be a witch, her property was forfeited and divided between the clergy and the successful prosecutor. Who tended to have considerable stored wealth without the strength or power to defend it? Old people, often women, who lived alone. So who were the favored targets of witch hunters? That same set of people. Witch hunting no longer exists, but that economic source (middle-class retirees who’ve amassed nest eggs in the hundreds of thousands of dollars) remains, and there are plenty who wish to get at it. The least respectable elder-poachers are the telemarketers and matchstick men who prey on the lonely; more respectable, but targeted toward the same end, is our “healthcare-industrial complex”: the hospitals, insurance bureaucrats, and lobbyists who use “medical billing” as an excuse to get at that last $250,000 from a person who (often being at death’s door) is completely unable to defend it (and before the children are aware that it exists and that their inheritance is being swiped).

The concept of an individual mandate is telling, though. It shows that many young people choose not to buy health insurance. And why? It’s not just that it’s a shitty product; the choice is indicative of something else. Medical bankruptcies have become a “yeah, that happens” phenomenon, and this points to something more severe that has happened in the past 20 years: we’ve de-moralized personal finance. By this, I mean that it’s no longer embarrassing to be in debt, and that it’s no longer viewed as shameful or unethical when people take on debt that they can’t possibly repay, or take on debt with no intention of paying it back. I actually see this as a severe long-term threat to the fabric of society. Don’t get me wrong: I’m glad that there are forgiving bankruptcy laws in order to give second (and third) chances to people who fuck up… and business bankruptcy is a different affair altogether, since good-faith business failure is fairly common. What I find disturbing is that the system has become so unfair and capricious– people start out with large “student” debts because of the protection racket run by organizations that have given themselves airtight access to the middle-class job market, and can end up with unpayable medical bills for the “crime” of having bodies in which cells occasionally divide the wrong way– that a large number of people under 35 no longer see financial failure as a failure to keep their word, but as mere bad luck. That’s not because there is anything wrong with them as people– there isn’t– but because it accurately reflects the world that previous generations have left for them. The Boomers have created such an anti-meritocracy that this sort of de-moralization of personal finance, as brutal as its effects on the economy (see: 2008) may be, makes sense.

There actually is one way to solve the adverse-selection problem in the health insurance pool, and a host of other issues as well. The right way to impose an individual mandate is to (a) tax people and (b) give them “free” healthcare that is paid for by those taxes. I prefer the single-payer strategy, but a public option that competes these shit-house private insurers into oblivion would also work. It may have the strictly rhetorical weakness of being equated to “socialism”, but it has what, at least, deserves to be an advantage in debate: it would actually work.

Malfragmentation

Paul Graham has been saying a lot of dumb things of late, and since he’s rich, those things get taken more seriously than they deserve. I’ve decided that his recent essay on “Refragmentation” is worth some kind of comment. He documents, quite accurately, some notable historical changes that have been to his personal benefit, then poses reasons for those changes that are self-serving and bizarre.

What is “refragmentation”? As far as I can tell, it’s the unwinding of the organizational high era of the mid-20th century. In 1950, organizations were strong and respected. Large corporations were beloved, the U.S. government was held in high regard– it had just beaten the Nazis, after all– and people could expect lifelong employment at their companies. There was, in some way, a sense of national unity (unless you were black, or gay, or a woman who wanted more than the cookie-cutter housewife life) that some people miss in the present day. Economic inequality was low, and so was social inequality. Top students from public schools in Michigan actually could go to Harvard without getting recommendations from U.S. Senators. There are some things to miss about this era, and there’s quite a lot not to miss. I’d rather live in the present, but how that period is viewed may be somewhat irrelevant, because there’s no hope of going back to it.

In 2016, organizations are viewed as feeble, narrow-minded, and corrupt. We don’t really trust schools to teach, or our politicians to serve our interests rather than their own, and we certainly don’t trust our employers to look out for our economic interests. Unions and the middle-class jobs they’ve created have been on the wane for decades, and most of our Democrats are right-wingers compared to Eisenhower and even Nixon. In a time of organizational malaise, people burrow into small cultural islands that are mostly expressions of discontent. When I was a teenager, we had the “goths” and the “skaters” and the “nerds” and the “emo kids”. In adulthood, we have apocalyptic religious movements (the percentage of Americans who believe the End Times will occur in their lives is shockingly high) and anti-vaccine crusaders and wing-nuts from the left and the right. There’s good and bad in this. To the good, we’ve abandoned the notion that conformity is a virtue. To the bad, we’ve lost all sense of unity and all hope in our ability to solve collective problems, like healthcare. Rather than build a strong national system like Britain’s NHS, we’ve kept this hodge-podge of muck alive and, with an individual mandate to buy the lousy product on offer, made it worse and far more expensive.

Is this a refragmentation? Culturally, it appears that we’ve experienced one, and culturally, it’s arguably a good thing. Who wants to listen to the same 40 pop songs over and over? I don’t. The self-serve, find-your-own-tribe culture of the 2010s is certainly an improvement over the centralized one of, say, 1950s television.

There are benefits to “fragmentation”, which is why we see strength in systems that enable it. The United States began as a federalist country with a weak national government, most powers left to the states, in order to allow experimentation and local sensibility rather than central mandate. There are also use cases that demand unity and coordination, and ultimately some compromise gets found. Case in point: time zones. Before the railroads were built, time was a local affair, with major cities defining “12:00” as solar noon and smaller cities using the time of the nearest metropolis. This was, needless to say, a mess. It meant that 11:00 in New York would be 10:37 in Pittsburgh and 11:12 in Boston, and who wants to keep track of all that? Time zones allowed each location to keep a locally relevant time, one usually within 30 minutes of what would be accurate, while imposing enough conformity to prevent the total chaos that would exist under the most fragmented policy toward time.

What I see in the corporate world, on the other hand, is malfragmentation. By this, I mean that there is a system that preserves the negatives of the old organizational high era, while losing its benefits.

This “worst of both worlds” dynamic shouldn’t be surprising, when one considers what post-1980 corporate capitalism really is. It’s neither capitalism nor socialism, but a hybrid that gives the benefits of both systems to a well-connected elite– a social elite often called “the 1 percent”, but arguably even smaller– and the negatives of each to the rest. Take air travel, for one example of this hybridization: we get Soviet quality of service and reliability, but capitalism’s price volatility and nickel-and-diming. Corporate life is much the same. Every corporation has a politburo called “management” whose job is to extract as much work as possible (“from each, according to his ability”) from the workers while paying as little as possible (“to each, according to his need”). Internally, these companies run themselves like command economies, with centrally-planned headcount allowances and top-down initiatives. Yet, the workers (unlike executives) are left completely exposed to the vicissitudes of the market, being laid off (or, in many of these sleazy tech companies that refuse to admit to a layoff, fired “for performance”) as soon as the organization judges it to be convenient. The global elite has managed to give itself capitalism’s upside and socialism’s protection against failure (with their connections, they will always be protected from their own incompetence) while leaving the rest of society with the detriments of the two systems. This chimeric merging of two ideologies or cultural movements is something that they’re good at. They’ve done it before. And so it is with malfragmentation.

Under malfragmentation, the working people are constantly divided. Sometimes the divisions are based on age or social class or (in the lower social classes) race, and sometimes they’re based on internal organizational factors, like the decision that one set of workers is “the star team” and that the rest are underperformers. To be blunt about it, workers are often at fault for their own fragmentation, since they’ll often create the separations on their own, without external help. Let’s use programming, since it’s what I’m most familiar with. You have open-plan jockeys and “brogrammers” who want to drive out the “old fogeys” who insist on doing things properly, you have flamewars on Twitter about whether technology is hostile or is not hostile toward minorities, you have lifelong learners complaining about philistines who don’t have side projects and stopped learning at 22 and the philistines whining about the lifelong learners attending too many conferences, you have Java programmers bashing Rubyists and vice versa, and so on. All of these tiresome battles distract us from fighting our real enemy: the colonizers who decided, at some point, that programmers should be treated as business subordinates. I’m not a fan of Java (on its technical merits) or brogrammers (ick) but let’s put that stuff aside and focus on the war that actually matters.

The fragmentation within programming culture suits the needs of our colonizers, because it prevents us from banding together to overthrow them. Ultimately, we don’t need the executive types; we could do their jobs easily, and better than they do, while they’re not smart enough to do ours. Yet, with all of our cultural divisions and bike-shedding conflicts, we end up pulling each other down. Rather than face the enemy head-on, we cling to designations that make us superior (“I’m an $XLANG programmer, unlike those stupid $YLANG-ists”) and tacitly assert that the rest of us deserve to be lowly business subordinates. In the Philippines, this is given the name “crab mentality”, referring to the tendency of trapped crabs in a bucket to be unable to escape because, as soon as one seems to be getting out, the others pull it back in. It’s absurd, self-defeating, and keeps us mired in mediocrity.

So what makes this malfragmentation rather than simply fragmentation? Our class enemies aren’t divided. They’re quite united. They share notes, constantly, whether about wages or about which individuals to put on “union risk” blacklists that can make employment in Silicon Valley very difficult. Venture capitalists have created such a strong culture of co-funding, social proof, and note-sharing (“let’s make a list of the senior boys with the cutest butts!”) that each entrepreneur gets only one real shot at making his entree into the founder class. I’ve experienced much of this nonsense first-hand, such as when an investor in Quora (almost certainly associated with Y Combinator) threatened that Quora would become “unfundable” unless they banned me immediately from the site. (This was in retaliation for my tongue-in-cheek invitation of Paul Graham to a rap battle.) The bad guys work together, and fragmentation is for the proles. That’s how malfragmentation works.

You see the malfragmentatory tendency within organizations, too, which suggests that it might be some natural pattern of human behavior. The “protect our own” impulse is very strong, and many groups in authority prefer to “handle matters internally” (which sometimes means, “not at all”). The mid-2010s protests against police brutality have been sparked, in large part, by a public that is fed up with police departments that seem willing to protect their worst officers. Corporate management is similar, both within and between companies. A negative reference from a manager is often fatal to one’s job candidacy, not because what was said in that reference is believed to be true (a rational person knows that it’s usually not) but because a person who scuffled with a manager is likely to be viewed negatively by other managers, even in different companies. Managers protect their own, and programmers are the opposite– almost too eager to rat each other out to management over tabs-versus-spaces nonsense– and that’s why programmers end up on the bottom.

Elites coalesce, and the proles fragment, and when this matter is brought up, skeptics accuse the person making this sort of statement of harboring a “Conspiracy Theory”. Now, here’s the thing about conspiracy: it exists. No, there’s no Illuminati and there’s no “room where they all meet”. That’s a fantasy. If there were a “room where they all meet”, one could plant a bomb in said room and liberate humanity from its shadowy overlords and icy manipulators… and that’s obviously not the case. The upper-case-C “Conspiracy” doesn’t exist, while lower-case-c conspiracies form and dissolve all the time. Of course, the people forming these don’t think of themselves as “conspirators”, because most of them don’t have any sense of right or wrong in the first place; to them, they’re just trading favors and working together. What we call “abuses of power”, they just call “power”. Although they are individually far too selfish to pull off the grand Conspiracies of folklore, and there’s plenty of in-fighting within any elite, they’re more than capable of working together when circumstances require them to put down the proles.

It’s easy to understand why elites coalesce: they have something to defend, and their episodes of cooperation don’t require absurd loyalty to a “Conspiracy” when mere selfishness (the desire to stay within, or get further into, an in-crowd) suffices. Why, on the other hand, are proles driven toward fragmentation? Do their overseers deliberately encourage it? To some degree, that happens, but that sort of influence doesn’t seem to be needed. They’ll fragment on their own. This seems to happen because of a culture whose individualism is borne of a sort of social defeatism. We’ve given up on making the collective lot better, so we’ve accepted the low status of the worker as a permanent affair, and we fragment ourselves out of a desire for differentiation. We might accept that programmers in some other language “deserve” to be Scrum-drones on 0.05%, because we’re $XLANG programmers and so much smarter. We’ve given up on the idea that programming might deserve to be a genuine profession where Scrum-drones don’t even get in.

I’ve written before about how Paul Graham is bad for the world. The irony of his piece on “Refragmentation” is that he’s an ultimate source of malfragmentation. He has coalesced the startup world’s founder class, making it far easier for those included in the Y Combinator in-crowd to share social resources (such as contacts into the investor class) and how-to advice on pulling off the unethical business practices for which VC-funded startups are so well known. He decries the old establishment because he wasn’t a part of it, while proposing that the even-worse proto-establishment that has emerged in Silicon Valley is somehow superior because, on paper, it “looks” distributed and “fragmented”. There is an appearance of competition between sub-sectors of the elite that keeps the worker class from figuring out what’s really going on.

This ties in, ultimately, to something much larger than Paul Graham: the Silicon Valley brand. There’s a bill of pretty rotten goods being sold, in order to exploit the middle-class myth that achieving wealth requires starting one’s own business. Of course, achieving extreme wealth almost certainly does require that, but (a) most people would like to strive for reasonable comfort, first, and worry about wealth later, and (b) very few people (meaning, less than 1 percent) who start businesses achieve that status. Thus, a “tech founder” career is sold to people who don’t know any better. In the two-class (investors vs. everyone else) Silicon Valley, it was many founders who were the marks; but in the post-2000 three-class Valley (investors, founders, workers) the game is being played against the workers, with the founders’ assistance. Founders, even if they fail, are permitted to achieve moderate wealth (through acqui-hires and executive sinecures at their investors’ portfolio companies) as a “performance” bonus if they keep up the ruse; they are, in the post-modern meta-company of Silicon Valley, its middle managers. It’s the employees who are being conned, being told that they’re 2-4 years from entree into the founder class when, realistically, they’re about as likely to become “founder material” as they are to win the Powerball. Not only will writing great code never get a programmer introduced to investors, but it will encourage the founders to make sure such an introduction never happens, lest they lose him to another company, or to his own.

The malfragmented Silicon Valley has its worker class laboring under the illusion that they’re working for independent, emerging small businesses when, in fact, they’re working for one of the worst big companies– the Sand Hill Road investor class and the puppet-leader founder class– to have come along in quite a long time. It’s one that carries the negatives of old-style corporate oligopoly, but abandons the positives (such as the employer-side obligations of the old “social contract”). It’s unclear to me whether this ruse is sustainable, and I hope that it isn’t.

Corporate ageism is collective depression

This article crossed my transom recently. It’s about the difficulties that older (here, over 50) women face in finding work. Older men don’t have it easy, either, and in the startup world, it’s common for the ageism to start much earlier. Influential goofball Paul Graham famously said that 38 is “too old” to start a company, despite ample evidence to the contrary.

I’m 32, so I’m not “old” yet, by most definitions, but it’s time to speak bluntly about the age prejudice, especially in software. It’s stupid. To the extent that the ageism is intentional, it’s malevolent. The benefit that Silicon Valley’s powerbrokers gain from age discrimination is two-fold: the young are easier to take advantage of, and the artificial time pressure that the ageist culture puts on them makes them doubly so. For the rest of us, it’s a raw deal. We know that people don’t lose the ability to program– and, much more importantly, to write good code– as they age. The ageism doesn’t come from technologists; it comes from our colonizing culture. It’s time to kill it. The first step is to recognize corporate ageism for what it is, and to understand why it exists.

I had a phase of my life, like many people of talent, where I spent far too much time studying “IQ” and intelligence testing. There’s a whole wad of information I could put here, but the relevant topic is age, and the truth is that no one really knows when humans peak. Barring dementia and related health problems, which I’ll get back to, the argument can be made for a “peak age” as early as 20 or as late as 70. That’s not so much because “intelligence” is subjective (it’s more objective than people want to admit) but because the curve in healthy individuals is just very flat in adulthood, meaning that measurement will be dominated by random “noise”. Of course, some individuals peak early and some peak late; in the arts and mathematics, there are those who did their best work before 25 and others who did their best work in old age, but the overall average shows a rather flat profile for intellectual capability in adulthood.

In that case, why is there ageism in Corporate America? If intellectual ability isn’t supposed to decline, and experience only increases, shouldn’t older workers be the most desirable ones? This analysis discounts one factor, which I find to be a common but under-acknowledged player: depression. And depression certainly can (if temporarily) impede creativity and reduce observable intelligence.

Midlife depression (possibly subclinical) seems to be a natural byproduct of the corporate game. The winners are exhausted and (excluding the born psychopaths, who might be immune to this effect) disgusted by the moral compromise required to gain their victories. The losers are demoralized and bitter. This is utterly predictable, because the harem-queen game, as played for millennia, is largely driven by the objective of making one’s opponents too depressed to continue in the competition. Even in the 21st century, when there’s no rational reason why office work should be stressful (it doesn’t improve the quality of the work) we see human nature driving toward this effect. The end result is that corporate midlifers tend, as a group, to be bitter, burned-out, defensive and miserable.

This isn’t immutable or natural. I can find absolutely no evidence of a natural reason why midlife, viewed positively in other cultures, would have such a high rate of burnout and depression. Yet, that such a thing exists, I would argue, is observably true. Most people over 40 in tech, excluding those in executive roles and those in elite programmer positions (which are more like R&D jobs, and usually entail VP/Director-equivalent titles) are miserable to be there. Does this merit not hiring such people? I doubt it. People can change radically according to context, and negative experiences seem more likely to strengthen people in the long run than to deplete them (even if the short-term effect is otherwise). Having dealt with mood disorders myself, I don’t stigmatize negative moods or view people as inferior for being unhappy sometimes. (In fact, the American social requirement to be, or to seem, constantly happy is one that I find utterly obnoxious. Fuck that shit in its eye.) I’d rather hire a 45-year-old who’s been burned out and miserable, and gotten through it, than a happy, wealthy 22-year-old who’s never felt adversity… but, alas, I’m not most people.

Corporate America is sometimes decried for “worshipping youth”. I don’t agree. Well-connected, rich kids get a different experience, but an average 22-year-old is not going to be tapping the benefits of fresh-faced youth. Instead, he’s going to be assigned the lowliest of the grunt work and given the benefit of the doubt most rarely. Ageism hurts the young and the old, and arguably has the flavor (if not the direct influence) of a divide-and-conquer dynamic encouraged by the corporate owners– to keep the workers slugging each other over something as meaningless as when one was born, instead of working together to overthrow the owning class. Corporate America despises youth and age in different ways. Excluding the progeny of the well-connected “1 percent”, who get shunted into protege roles, the young are distrusted and their motives are constantly questioned. Pushing back against an unreasonable request is taken as an expression of “entitlement”, and a young worker who arrives late is assumed to have been drinking the night before, rather than having an acceptable, adult reason (kids, commute, elder care, illness) for the lateness. If one of the two groups is more despised in the corporate world, it’s clearly the young. The ageism of the corporate world toward older workers, instead, is more an acknowledgement of what that world does to people. It burns them out, leading to midlife depression (again, often subclinical) being common enough that even highly talented older workers struggle to overcome the stigma associated with their age. The corporate world knows that 20 years of residence within it causes depression and (almost certainly temporary) cognitive decline. While this decline would probably be completely and quickly reversible by improving the context (that is, by investing in a better culture), that is a change that very few companies are willing to make.

The corporate world has decided to view “too much” experience negatively. That doesn’t apply only to chronological age. It can also apply to “too many jobs” or “having seen too much” or having failed before. Why is that? Why do negative and even average experiences (which might, in a less fucked-up culture, be viewed as a source of wisdom) carry a stigma? I can’t answer that for sure, but I think a major part of it is that the problem isn’t merely that some individuals are depressed in midlife. It’s deeper than that. We’re depressed about midlife, and about aging, and therefore about the future. We’re depressed because we’ve accepted a system that inflicts needless depression and anxiety on people, and that probably wouldn’t be any less economically productive without those side effects. We’re depressed because our extremist individualism leaves us seeing a future of near-term demise, and our nihilism leaves us convinced (despite scant evidence either way) that there can be nothing after one’s own physical death. This leads us to tolerate a corporate miasma that depletes people, purposelessly, because we view emotional and intellectual decline in midlife as “normal”, when it very much isn’t.

Amid the shuffling stupidity of private-sector bureaucracy, there are flashes of insight. While I find corporate ageism morally reprehensible on many levels, there is a small degree of correctness in it. Corporate residency harms and depletes people, often delivering no benefit to company or person, because that is the nature of humans when locked in a certain type of competition for resources. Ageism is the corporate system’s rejection of the experiences it provides, and therefore an acknowledgment by Corporate America that it is parasitic and detrimental. Entities with pride tend to value (and sometimes over-value, but that’s another debate) the experiences that they’ve produced for people, and the corporate world’s tendency toward the opposite amounts to an admission that it has no such pride. One could argue that it, itself, lives under the fog of status anxiety, nihilism, and depression that it creates for those who live within it.

Insights into why a contravariant type can’t be a Haskell Functor

When learning Haskell’s core types, type classes, and concepts, one often finds counterexamples useful for understanding what these abstractions “really are”. Perhaps the most well-understood type class hierarchy is Functor-Applicative-Monad. We encounter Functor in the context of things that are “mappable”. A Functor is a producer, and using fmap we can “post-transform” all of the things that it produces in a way that preserves any underlying structure.
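
As a minimal illustration (nothing beyond the standard Prelude functors), fmap transforms the produced values while leaving the shape of the container alone; results are shown in the comments:

fmap (+ 1) (Just 2)                       -- Just 3
fmap (+ 1) (Nothing :: Maybe Int)         -- Nothing: the shape is preserved
fmap show [1, 2, 3]                       -- ["1","2","3"]: still a 3-element list
fmap (* 2) (Right 5 :: Either String Int) -- Right 10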

We quickly become familiar with a number of useful functors, like Maybe, [], (->) r, and IO. Functors seem to be everywhere. So, then, one asks: what might be a parameterized type that’s not a functor? The go-to example is something that’s contravariant in its type parameter, like a -> Int, viewed as a type-level function of the parameter a. There doesn’t seem to be any useful way to make it a Functor in a. That said, seemingly “useless” type-level artifacts are often quite useful, like the Const Functor, defined as Const r a ~= r (the a is a phantom type, and functions being fmapped are ignored). So usefulness in this world isn’t always intuitive. Still, it remains true that something like a -> Int is not a Functor. Why?
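
A brief aside to make the Const point concrete: because the second type parameter is phantom, fmap has no value to apply its function to, so it leaves the wrapped value untouched. (Const and getConst here are the ones exported by base.)

-- in GHCi, after: import Control.Applicative (Const(..))
getConst (fmap (+ 1) (Const "hello" :: Const String Int))   -- "hello"; the (+ 1) is ignored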

Let’s work with the type a -> Bool, just because it’s the simplest example of what goes wrong. Is there a way to make a Functor instance for it? To me, it’s not intuitive that that thing isn’t a Functor. It’s “almost” a set (stepping around, for a moment, debates about what a set is), and Set, meaning the actual collection type in Data.Set, is “almost” a Functor. It’s not one, because a Functor demands that the mapping be structure-preserving, which a set’s notion of mapping is not (for example, if you map the squaring function over the set {2, -2}, you get a smaller set, {4}). There are many ways to get at the point that Set is not a Functor, most relying on the fact that Set a‘s methods almost invariably require Eq a and Ord a. But what about the type-agnostic a -> Bool, written “(<-) Bool” in the notation I’ll use below? It places no such constraints on a.
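
As a quick check of that squaring example (assuming only Data.Set from the containers package): mapping a non-injective function merges elements and shrinks the set, so the “shape” isn’t preserved.

-- in GHCi, after: import qualified Data.Set as Set
Set.map (^ 2) (Set.fromList [2, -2 :: Int])               -- fromList [4]
Set.size (Set.map (^ 2) (Set.fromList [2, -2 :: Int]))    -- 1, down from 2 elements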

To answer the question about a -> Bool, let’s try to write the method, fmap, on which Functor relies.


-- pretend Haskell supports (<-) b a as a syntax for (->) a b = a -> b

instance Functor ((<-) Bool) where
  -- fmap :: (a -> b) -> f a -> f b
  -- in this case, (a -> b) -> (a -> Bool) -> (b -> Bool)
  fmap f x = ??

The type signature that we’re demanding of our fmap is an interesting one: (a -> b) -> (a -> Bool) -> (b -> Bool). Notice that such a function doesn’t have any a to work with. One might not even exist: the Functor needs to be valid over empty types (e.g. Void; note that the type [Void] is not empty but has exactly one inhabitant, []). In typical Functor cases, the a = Void case is handled soundly. For example, consider lists: [] :: [Void] maps to [] :: [b] for any b, and any non-empty list has a’s to work with. In our case, fmap‘s type signature gives us two things that consume a’s but no a to feed them. This means that we have to ignore both arguments; we can’t use them.
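
As an aside, the a = Void case for lists can be checked directly; the names noVoids and stillMapped below are just illustrative, and absurd :: Void -> a comes from base’s Data.Void:

import Data.Void (Void, absurd)

noVoids :: [Void]
noVoids = []                        -- the sole value of type [Void]

stillMapped :: [Int]
stillMapped = fmap absurd noVoids   -- == []; fmap works even with no a values to feed the function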

instance Functor ((<-) Bool) where
  -- fmap :: (a -> b) -> f a -> f b
  -- in this case, (a -> b) -> (a -> Bool) -> (b -> Bool)
  fmap _ _ = (?? :: b -> Bool)

This rules out any useful Functors. In fact, our need for a term that inhabits b -> Bool for any b limits us to two non-pathological possibilities: the constant functions. Since we don’t have any b to work with, either, we have to commit to one of them. Without loss of generality, I’m going to use the definition fmap _ _ = const True. With a few tweaks to what I’ve done, Haskell will actually allow such a “Functor” instance to be created. But will it be right? It turns out that it won’t be. To show this, we consult the laws for the Functor type class; in fact, we only need the simplest one (identity), fmap id === id, to refute it. That law is violated by the constant “Functor” above:

fmap id (const True) = const True -- OK!
fmap id (const False) = const True -- whoops!

So it turns out that (<-) Bool (my notation) has no lawful Functor instance at all (the only properly polymorphic candidates fail the type class’s most basic law), and the more complicated contravariant cases fail similarly.
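
For completeness, here’s a minimal, runnable sketch of the same argument, using a hypothetical newtype wrapper Pred in place of the invented (<-) Bool syntax (base’s Data.Functor.Contravariant calls essentially the same wrapper Predicate). The Functor instance below is the deliberately law-breaking constant one discussed above; the lawful way to map over such a type is contravariant, by pre-composing.

-- A stand-in for "(<-) Bool": a consumer of a's rather than a producer.
newtype Pred a = Pred { runPred :: a -> Bool }

-- The only fully polymorphic candidates ignore both arguments and return a
-- constant predicate. This type-checks, but it breaks the Functor laws:
instance Functor Pred where
  fmap _ _ = Pred (const True)

-- Identity-law check: fmap id should behave exactly like id, but it doesn't.
agreesWithId, disagreesWithId :: Bool
agreesWithId    = runPred (fmap id (Pred (const True)))  ()   -- True, same as id would give
disagreesWithId = runPred (fmap id (Pred (const False))) ()   -- True, but id would give False

-- The lawful way to "map" a predicate: transform the input, not the output.
contramapPred :: (a -> b) -> Pred b -> Pred a
contramapPred f (Pred p) = Pred (p . f)

That last definition is exactly the shape of contramap from the Contravariant type class, which is the abstraction that actually fits consumers like a -> Bool.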