Java is Magic: the Gathering (or Poker) and Haskell is Go (the game)

It may be apocryphal, but there’s a parable in the Go (for this essay, I will never refer to Google’s programming language, so I’m talking about the ancient board game) community in which a strong player boasts about his victory over a well-known professional player, considered one of the best in the world. He says, “Last month I finally beat him– by two points!” His conversation partner, also a Go player, is unimpressed. She says, “I’ve also played him, and I beat him by one point.” Both acknowledge that her accomplishment is superior. The best victory is a victory with control, and a one-point margin demonstrates the most control.

Poker, on the other hand, is a game in which ending a night $1 up is not worthy of mention, unless the betting increment is measured in pennies. The noise in the game is much greater. The goal in Poker is to win a lot of money, not to come out slightly ahead. Go values an artful, subtle victory in which a decision made fifty moves back suffices to bring the one-point advantage that delivers the game. Poker encourages obliterating the opponents. Go is a philosophical debate where one side wins but both learn from the conversation. Poker is a game where the winner fairly, ethically, and legally picks the loser’s pocket.

Better yet, I could invoke Magic: the Gathering, an even better example of this difference in what kinds of victories are valued. Magic is a duel in which there are an enormous number of ways to humiliate your opponent: “burn decks” that enable you to do 20 points of damage (typically, a fatal sum) in one turn, “weenie decks” that overrun him with annoying creatures that prick him to death, land and hand destruction decks that deprive him of resources, and counterspell decks that put everything the opponent does at risk of failure. There are even “decking decks” that kill the opponent slowly by removing his cards from the game. (A rarely-triggered rule in Magic is that a player who must draw a card but cannot, because his active deck or “library” has been exhausted, loses.) If you’re familiar with Magic, then think of Magic throughout this essay; otherwise, just understand that (like Poker) it’s a very competitive game that usually ends with one side getting obliterated.

If it sounds like I’m making an argument that Go is good or civilized and that Magic or Poker are barbaric or bad, that’s not my intention; I don’t believe that comparison makes sense. The fun of brutal games is that they humiliate the loser in a way that is (usually) fundamentally harmless. The winner gets to be boastful and flashy; the loser will probably forget about it, and will certainly live to play again. Go is subtle and abstract and, to the uninitiated, impenetrable. Poker and Magic are direct and clear. Losing a large pot on a kicker, or having one’s 9/9 creature sent to an early grave with a 2-mana-cost Terror spell, hurts in a way that even a non-player, unfamiliar with the details of the rules, can observe. People play different games for different reasons, and I certainly don’t consider myself qualified to call one set of reasons superior to any other.


Ok, so let’s talk about programming. Object-oriented programming is much like Magic: there are so many optional rules and modifications available, many of them contradictory. There are far too many strategies for me to list them here and do them justice. Magic, just because its game world is so large, has inevitable failures of composition: cards that are balanced on their own but so broken in combination that one or the other must be banned by Magic’s central authority. Almost no one alive knows “the whole game” when it comes to Magic, because there are about twenty thousand different cards, many introducing new rules that didn’t exist when the original game came out, and some pertaining to rules that exist only on cards written in a specific window of time. People know local regions of the game space, and play in those, but the whole game is too massive to comprehend. Access to game resources is also limited: not everyone can have a Black Lotus, just as not everyone can convince the boss to pay them to learn and use a coveted and highly-compensated but niche technology.

In Magic, people often play to obliterate their opponents. That’s not because they’re uncivilized or mean. The game is so random and uncontrollable (as opposed to Go, with perfect information) that choosing to play artfully rather than ruthlessly is volunteering to lose.

Likewise, object-oriented programmers often try to obliterate the problem being solved. They aren’t looking for the minimal sufficient solution. It’s not enough to write a 40-line script that does the job. You need to pull out the big guns: design patterns that only five people alive actually understand (and four of those five have since decided that they were huge mistakes). You need to have Factories generating Factories, like Serpent Generators popping out 1/1 Snake tokens. You need to use Big Products like Spring and Hibernate and Mahout and Hadoop and Lucene regardless of whether they’re really necessary to solve the problem at hand. You need to smash code reviews with “-1; does not use synchronized” on code that will probably never be multi-threaded, and you need to build up object hierarchies that would make Lord Kefka, the God of Magic from Final Fantasy VI, proud. If your object universe isn’t “fun”, with ZombieMaster classes whose constructors immediately increment fields in all Zombies in the heap and whose finalizers decrement those same fields, then you’re not doing OOP– at least, as it is practiced in the business world– right, because you’re not using any of the “fun” stuff.

Object-oriented programmers play for the 60-point Fireballs and for complex machinery. The goal isn’t to solve the problem. It’s to annihilate it and leave a smoldering crater where that problem once stood, and to do it with such impressive complexity that future programmers can only stand in awe of the titanic brain that built such a powerful war machine, one that has become incomprehensible even to its creator.

Of course, all of this that I am slinging at OOP is directed at a culture. Is object-oriented programming innately that way? Not necessarily. In fact, I think that it’s pretty clear Alan Kay’s vision (“IQ is a lead weight”) was the opposite of that. His point was that, when complexity occurs, it should be encapsulated behind a simpler interface. That idea, now uncontroversial and realized within functional programming, was right on. Files and sockets, for example, are complex beasts in implementation, but manageable specifically because they tend to conform to simpler and well-understood interfaces: you can read without having to care whether you’re manipulating a robot arm in physical space (i.e. reading a hard drive) or pulling data out of RAM (memory file) or taking user input from the “file” called “standard input”. Alan Kay was not encouraging the proliferation of complex objects; he was simply looking to build a toolset that enables people to work with complexity when it occurs. One should note that major object-oriented victories (concepts like “file” and “server”) are no longer considered “object-oriented programming”, just as “alternative medicine” that works is recognized as just “medicine”.

In opposition to the object-oriented enterprise fad that’s losing air but not fast enough, we have functional programming. I’m talking about Haskell and Clojure and ML and Erlang. In them, there are two recommended design patterns: noun (immutable data) and verb (referentially transparent function); and because functions are first-class citizens, one is a subcase of the other. Generally, these languages are simple (so simple that Java programmers presume that you can’t do “real programming” in them) and light on syntax. State is not eliminated, but the language expects a person to actively manage what state exists, and to eliminate it when it’s unnecessary or counterproductive. Erlang’s main form of state is communication between actors; it’s shared-nothing concurrency. Haskell uses a simple type class (Monad) to tackle head-on the question of “What is a computational effect?”, one that most languages ignore. (The applications of Monad can be hard to tackle at first, but the type class itself is dead-boring simple, with two core methods, one of which is almost always trivial.) While the implementations may be very complex (the Haskell compiler is not a trivial piece of work) the computational model is simple, by design and intention. Lisp and Haskell are languages where, as with Go or Chess, it’s relatively easy to teach the rules while it takes time to master good play.
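The type class really is that small. Here’s a sketch of its core (names are primed to avoid clashing with the real Prelude definitions, which differ slightly and carry a few extra derived methods):

```haskell
-- A sketch of the Monad type class, simplified from GHC's Prelude.
-- Two core methods: return' wraps a pure value (almost always
-- trivial), and >>>= ("bind") sequences effectful computations.
infixl 1 >>>=

class Applicative m => Monad' m where
  return' :: a -> m a
  (>>>=)  :: m a -> (a -> m b) -> m b

-- Maybe as an instance: the "effect" is the possibility of failure.
instance Monad' Maybe where
  return' = Just
  Nothing >>>= _ = Nothing
  Just x  >>>= f = f x

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Failure short-circuits; no explicit null checks anywhere.
example :: Maybe Int
example = safeDiv 10 2 >>>= \a -> safeDiv a 0 >>>= return'
-- example == Nothing, because the second division fails
```

The same two methods, instantiated at IO instead of Maybe, are what sequence side effects in a Haskell program; that one interface covers failure, state, nondeterminism, and I/O.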

While the typical enterprise Java programmer looks for an excuse to obliterate a simple ETL process with a MetaModelFactory, the typical functional programmer tries to solve almost everything with “pure” (referentially transparent) functions. Of course, the actual world is stateful and most of us are, contrary to the stereotype of functional programmers, quite mature about acknowledging that. Working with this “radioactive” stuff called “state” is our job. We’re not trying to shy away from it. We’re trying to do it right, and that means keeping it simple. The $200/hour Java engineer says, “Hey, I bet I could use this problem as an excuse to build a MetaModelVisitorSingletonFactory, bring my inheritance-hierarchy-record into the double-digits, and use Hibernate and Hadoop because if I get those on my CV, I can double my rate.” The Haskell engineer thinks hard for a couple hours, probably gets some shit during that time for not seeming to write a lot of code, but just keeps thinking… and then realizes, “that’s just a Functor”, fmaps out a solution, and the problem is solved.
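To make the “that’s just a Functor” moment concrete, here’s a hypothetical mini-example (normalize is an invented helper, not anything from a real codebase): one pure function, mapped uniformly over a list, an optional value, and a lookup table, with no visitor machinery in sight.

```haskell
import qualified Data.Map as Map

-- Hypothetical helper: rescale a 0-100 score into [-1, 1].
normalize :: Double -> Double
normalize x = (x - 50) / 50

-- One fmap replaces what might otherwise be three visitor classes:
overList :: [Double]
overList = fmap normalize [0, 50, 100]

overMaybe :: Maybe Double
overMaybe = fmap normalize (Just 75)

overMap :: Map.Map Int Double
overMap = fmap normalize (Map.fromList [(1, 100)])
```

The point isn’t the arithmetic; it’s that recognizing the shared structure (a Functor) collapses what looks like several problems into one.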

While not every programmer lives up to this expectation at all times, functional programming values simple, elegant solutions that build on a small number of core concepts that, once learned, are useful forever. We don’t need pre-initializers and post-initializers; tuples and records and functions are enough for us. When we need big guns, we’ve got ’em. We have six-parameter hyper-general types (like Proxy in the pipes library) and Rank-N types and Template Haskell and even the potential for metaprogramming. (Haskell requires the program designer to decide how much dynamism to include, but a Haskell program can be as dynamic as is needed. A working Lisp can be implemented in a few hundred lines of Haskell.) We even have Data.Dynamic in case one absolutely needs dynamic typing within Haskell. If we want what object-oriented programming has to offer, we’ll build it using existential types (as is done to make Haskell’s exception types hierarchical, with SomeException encompassing all of them) and Template Haskell and be off to the races. We rarely do, because we almost never need it, and because using so much raw power usually suggests a bad design– a design that won’t compose well or, in more blunt terms, won’t play well with others.
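The existential-type trick looks roughly like this sketch, with a hypothetical SomeError standing in for the real SomeException (the real Control.Exception machinery also involves the Typeable and Exception classes, which I’m eliding):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- SomeError wraps any type with a Show instance, hiding the concrete
-- type behind an existential. This is (roughly) how SomeException
-- sits at the root of Haskell's exception hierarchy.
data SomeError = forall e. Show e => SomeError e

data DiskError = DiskFull     deriving Show
data NetError  = Timeout Int  deriving Show

-- A heterogeneous collection: distinct error types, one container.
errors :: [SomeError]
errors = [SomeError DiskFull, SomeError (Timeout 30)]

-- Pattern matching recovers only what the interface promises (Show).
describe :: SomeError -> String
describe (SomeError e) = show e
```

That’s subtype-polymorphism-on-demand: you pay for the type erasure only in the one place that needs it, instead of everywhere.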

The difference between games and programming

Every game has rules, but Games (as a concept) has no rules. There’s no single principle that unifies games, one that each game must have. There are pure-luck games and pure-skill games; there are competitive games and cooperative games (where players win or lose as a group). There are games without well-defined objective functions. There are even games where some players have objective functions and some don’t, as with 2 Rooms and a Boom’s “Drunk” role. Thus, there isn’t an element of general gameplay that I can single out and say, “That’s bad.” Sometimes, compositional failures and broken strategies are a feature, not a bug. I might not like Magic’s “mana screw” (most people consider it a design flaw) but I could also argue that the intermittency of deck performance is part of what makes that game addictive (see: variable-schedule reinforcement, and slot machines) and that it’s conceivable that the game wouldn’t have achieved a community of such size had it not featured that trait.

Programming, on the other hand, isn’t a game. Programs exist to do a job, and if they can’t do that job, or if they do that job marginally well but can never be improved because the code is incomprehensible, that’s failure.

In fact, we generally want industrial programs to be as un-game-like as possible. (That is not to say that software architects and game designers can’t learn from each other. They can, but that’s another topic for another time.) The things that make games fun make programs infuriating. Let me give an example: NP-complete problems are those where checking a solution can be done efficiently but finding a solution, even at moderate problem size, is (probably) intractable. Yet, NP-complete (and harder) problems often make great games! Go is PSPACE-complete, meaning that it’s (probably) harder than NP-complete, so exhaustive search will most likely never be an option. Microsoft’s addictive puzzle game Minesweeper is NP-complete. Tetris and Sudoku are likewise computationally hard. (Chess is harder, in this way, to analyze, because computational hardness is defined in terms of asymptotic behavior and there’s no incontrovertibly obvious way to generalize it beyond the standard-issue 8-by-8 board.) It doesn’t have to be this way, because human brains are very different from computers, and there’s no solid reason why a game’s NP-completeness (or lack thereof) would bear on its enjoyability to humans; yet the puzzle games that are most successful tend to be the ones that computers find difficult. Games are about challenges like computational difficulty, imperfect information (network partitions), timing-related quirks (“race conditions” in computing), unpredictable agents, unexpected strategic interactions and global effects (e.g. compositional failures), and various other things that make a human social process fun, but often make a computing system dangerously unreliable. We generally want games to have traits that would be intolerable imperfections in any other field of life. The sport of soccer is one where, for ninety minutes, everything depends on the interactions between two teams and a tiny ball. Fantasy role-playing games are about fighting creatures like dragons and beholders and liches that would cause us to shit our pants if we encountered them on the subway because, in real life, even a Level 1 idiot with a 6-inch knife is terrifying.

When we encounter code, we often want to reason about it. While this sounds like a subjective goal, it actually has a formal definition. The bad news: reasoning about code is mathematically impossible. Or, more accurately, to ask even the simplest questions (“does it terminate?” “is this function’s value ever zero?”) about an arbitrary program in any Turing-complete language (as all modern programming languages are) is impossible. We can write programs for which it is impossible to know what they do, except empirically, and that’s deeply unsatisfying. If a program has run for 100 years without producing a useful result, we still cannot necessarily differentiate between one that will produce a useful result after 100.1 years and one that loops forever.
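The Collatz function is a classic illustration of how quickly this gets hard: it’s a few lines of code, it has halted on every input anyone has ever tried, and yet whether it halts on all positive inputs is an open problem.

```haskell
-- Count the steps the Collatz iteration takes to reach 1.
-- Nobody has proved that this terminates for every positive input,
-- even though it always has in practice.
collatz :: Integer -> Integer
collatz n = go n 0
  where
    go 1 steps = steps
    go m steps
      | even m    = go (m `div` 2) (steps + 1)
      | otherwise = go (3 * m + 1) (steps + 1)
-- collatz 27 == 111: the sequence climbs past 9000 before collapsing
```

If termination is this murky for an eight-line function, asking it of arbitrary programs is hopeless; the halting problem just formalizes that hopelessness.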

If the bad news is that reasoning about arbitrary code is impossible, the good news is that humans don’t write arbitrary code. We write code to solve specific problems. Out of the entire space of possible programs on a modern machine, far less than 0.000000001 percent (with many more zeros than that) are useful to us. Most syntactically correct programs generate random garbage, and the tiny subspace of “all code” that we actually use is much more well-behaved. We can create simple functions and effects that we understand quite well, and compose them according to rules that are likewise well-behaved, and achieve very high reliability in systems. That’s not how most code is actually written, especially not in the business world, the latter being dominated by emotional deadlines and hasty programming. It is, however, possible to write specific code that isn’t hard to reason about. Reasoning about the code we actually care about is potentially possible. Reasoning about randomly-generated syntactically correct programs is a fool’s errand and mathematically impossible to achieve in all cases, but we’re not likely to need to do that if we’re reading small programs written with a clear intention.
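A tiny sketch of what that well-behaved subspace looks like (cleanWords is a hypothetical helper): each stage is small enough to understand completely, and because every stage is a pure function, the composition inherits that clarity.

```haskell
import Data.Char (isAlpha, toLower)

-- Tokenize free text: keep letters and spaces, lowercase everything,
-- then split on whitespace. Each stage is a pure function that we
-- can reason about in isolation; so is the composition.
cleanWords :: String -> [String]
cleanWords = words . map toLower . filter (\c -> isAlpha c || c == ' ')
```

Nothing here could loop forever or corrupt state elsewhere; the questions that are undecidable for arbitrary programs are trivially decidable for code shaped like this.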

So, we have bad news (reasoning about arbitrary code is formally impossible) and good news (we don’t write “arbitrary code”) but there’s more bad news. As software evolves, and more programmers get involved, all carrying different biases about how to do things, code has a tendency to creep toward “arbitrary code”. The typical 40-year-old legacy program doesn’t have a single author, but tens or hundreds of contributors. This is why Edsger Dijkstra declared the goto statement to be harmful. There’s nothing mathematically or philosophically wrong with it. In fact, computers use it in machine code all the time, because that’s what branching is, from a CPU’s perspective. The issue is the dangerous compositional behavior of goto— you can drop program control into a place where it doesn’t belong and get nonsensical behavior– combined with the tendency of long-lived, multi-developer programs using goto to “spaghettify” and reach a state where it is incomprehensible, reminiscent of a randomly-generated (or, worse yet, “arbitrary” under the mathematician’s definition) program. When Dijkstra came out against goto, his doing so was as controversial as anything that I might say about the enterprise version of object-oriented programming today– and yet, he’s now considered to have been right.

Comefrom 10

Where is this whole argument leading? First, there’s a concept in game design of “dryness”. A game that is dry is abstract, subtle, generally avoiding or limiting the role of random chance, and while the game may be strategically deep, it doesn’t have immediate thematic appeal. Go is a great game, and it’s also very dry. It has white stones and black stones and a board, but that’s it. No wizards, no teleportation effects, not even castling. You put a stone on the board and it sits there forever (unless the colony is surrounded and it dies). Go also values control and elegance, as programmers should. We want our programs to be “dry” and boring. We want the problems that we solve to be interesting and complex, but the code itself should be so elegant as to be “obvious”, and elegant/obvious things are (in this way) “boring”. We don’t want that occurrence where a ZombieMaster comes into play (or the heap) and causes all the Zombies to have different values in otherwise immutable fields. That’s “fun” in a game, where little is at stake and injections of random chance (unless we want a very-dry game like Go) are welcome. It’s not something that we want in our programs. The real world will throw complexity and unpredictability at us: nodes in our networks will fail, traffic will spike, and bugs will occur in spite of our best intentions. The goal of our programs should be to manage that, not to create more of it. The real world is so damn chaotic that programming is fun even when we use the simplest, most comprehensible, “driest” tools like immutable records and referentially transparent functions.

So, go forth and write more functions and no more SerpentGeneratorGenerators or VibratorVisitorFactory patterns.

Academia, the Prisoner’s Dilemma, and the fate of Silicon Valley

In 2015, the moral and cultural failure of American academia is viewed as a fait accompli. The job market for professors is terrible and will remain so. The academy has sold out two generations already, and shows no sign of changing course. At this point, the most prominent function of academia (as far as the social mainstream is concerned) isn’t to educate people but to sort them so the corporate world knows who to hire. For our society, this loss of academia is a catastrophe. Academia has its faults, but it’s too important for us to just let it die.

To me, the self-inflicted death of academia underscores the importance of social skills. Now, I’m one of those people who came up late in terms of social interaction. I didn’t prioritize it, when I was younger. I focused more on knowledge and demonstration of intelligence than on building up my social abilities. I was a nerd, and I’m sure that many of my readers can relate to that. What I’ve learned, as an adult, is that social skills matter. (Well, duh?) If you look at the impaired state that academia has found itself in, you see how much they matter.

I’m not talking about manipulative social skills, nor about becoming popular. That stuff helps an individual in zero-sum games, but it doesn’t benefit the collective or society at large. What really matters is a certain organizational (or, to use a term I’ll define later, coordinative) subset of social skills that, sadly, isn’t valued by people like academics or software engineers, and both categories suffer for it.


How did academia melt down? And why is it reasonable to argue that academics are themselves at fault? To be clear, I don’t think that this generation of dominant academics is to blame. I’d say that academia’s original sin is the tenure system. To be fair, I understand why tenure is valuable. At heart, it’s a good idea: academics shouldn’t lose their jobs (and, in a reputation-obsessed industry, such a loss often ends their careers) because their work pulls them in a direction disfavored by shifting political winds. The problem is that tenure allowed the dominant, entrenched academics to adopt an attitude– research über alles– that hurt the young, especially in the humanities. Academic research is genuinely useful, whether we’re talking about particle physics or medieval history. It has value, and far more value than society believes it has. The problem? During the favorable climate of the Cold War, a generation of academics decided that research was the only part of the job that mattered, and that teaching was grunt work to be handed off to graduate students or minimized. Eventually, we ended up with a system that presumed that academics were mainly interested in research, and that therefore devalued teaching in the evaluation of academics, so that even the young rising academics (graduate students and pre-tenure professors) who might not share this attitude still had to act according to it, because the “real work” that determined their careers was research.

The sciences could get away with the “research über alles” attitude, because intelligent people understand that scientific research is important and worth paying for. If someone blew off Calculus II but advanced the state of nuclear physics, that was tolerated. The humanities? Well, I’d argue that the entire point of humanities departments is the transmission of culture: teaching and outreach. So, while the science departments could get away with a certain attitude toward their teaching and research and the relative importance of each– a “1000x” researcher really is worth his keep even if he’s a terrible teacher– there was no possible way for humanities departments to pull it off.

To be fair, not every academic individually feels negatively about teaching. Many understand its importance and find it upsetting that teaching is so undervalued, but they’re stuck in a system where the only thing that matters, from a career perspective, is where they can get their papers published. And this is the crime of tenure: the young who are trying to enter academia are suffering for the sins of their (tenured, safe) predecessors.

Society responded to the negative attitude taken toward teaching. The thinking was: if professors are so willing to treat teaching as commodity grunt work, maybe they’re right and it is commodity grunt work. Then, maybe we should have 300 students in a class and we should replace these solidly middle-class professorships with adjunct positions. It’s worth pointing out that adjunct teaching jobs were never intended to be career jobs for academics. The purpose of adjunct teaching positions was to allow experienced non-academic practitioners to promote their professional field and to share experience. (The low salaries reflect this. These jobs were intended for successful, wealthy professionals for whom the pay was a non-concern.) They were never intended to facilitate the creation of an academic underclass. But, with academia in such a degraded state, they’re now filled with people who intended to be career academics.

Academia’s devolution is a textbook case of a prisoner’s dilemma. The individual’s best career option is to put 100% of his focus on research, and to do the bare minimum when it comes to teaching. Yet, if every academic does that, academia becomes increasingly disliked and irrelevant, and the academic job market will be even worse for the next cohort. The health of the academy requires a society in which the decision-makers are educated and cultured (which we don’t have). People won’t continue to pay for things that seem unimportant to them, because they’ve never been taught them. So, in a world where even most Silicon Valley billionaires can’t name seven of Shakespeare’s plays and many leading politicians couldn’t even spell the playwright’s name, what should we expect other than academia’s devolution?

Academia still exists, but in an emasculated form that plays by the rules of the corporate mainstream. Combining this with the loss of vision and long-term thinking in the corporate world (the “next quarter” affliction) we have a bad result for academia and society as a whole. Those first academics who created the “research über alles” culture doomed their young to a public that doesn’t understand their value, declining public funding, adjunct hell and second and third post-docs. With the job market in tatters, professors became increasingly beholden to corporations and governments for grant money, and intellectual conformism increased.

I am, on a high level, on the side of the academics. There should be more jobs for them, and they should get more respect, and they’re suffering for an attitude that was copped by their privileged antecedents in a different time, with different rules. A tenured professor in the 1970s had a certain cozy life that might have left him feeling entitled to blow off his teaching duties. He could throw 200 students into an auditorium, show up 10 minutes late, make it obvious that he felt he had better things to do than to teach undergraduates… and it really didn’t matter to him that one of those students was a future state senator who’d defund his university 40 years later. In 2015, hasty teaching is more of an effect of desperation than arrogance, so I don’t hold it against the individual academic. I also believe that it is better to fix academia than to write it off. What exists that can replace it? I don’t see any alternatives. And these colleges and universities (at least, the top 100 or so most prestigious ones) aren’t going to go away– they’re too rich, and Corporate America is too stingy to train or to sort people– so we might as well make them more useful.

The need for coordinative action

Individuals cannot beat a Prisoner’s Dilemma. Coordination and trust are required in order to get a positive outcome. Plenty of academics would love to put more work into their teaching, and into community outreach and other activities that can increase the relevance and value assigned to their work, but they don’t feel like they’ll be able to compete with those who put a 100% focus on research and publication (regardless of the quality of the research, because getting published is what matters). And they’re probably right. They’re in a broken system, and they know it, but opposing it is individually so damaging, and the job market is so competitive, that almost no one can do anything but the individually beneficial action.

Academic teaching suffers from the current state of affairs, but the quality of research is impaired as well. It might have made sense, for individual benefit, for a tenured academic in the 1970s to blow off teaching. But this, as I’ve discussed, only led society to undervalue what was supposed to be taught. Academia’s state has now become so bad that researchers spend an ungodly amount of time begging for money. Professors spend so much time in fundraising that many of them no longer perform research themselves; they’ve become professional managers who raise money and take credit for their graduate students’ work. To be truthful, I don’t think that this dynamic is malicious on the professors’ part. It’s pretty much impossible to put yourself through the degrading task of raising money and to do creative work at the same time. It’s not that they want to step back and have graduate students do the hard work; it’s that most of them can’t, due to external circumstances that they’d gladly be rid of.

If “professors” were a bloc that could be ascribed a meaningful will, it’s possible that this whole process wouldn’t have happened. If they’d perceived that devaluing teaching in the 1970s would lead to an imploded job market and funding climate two decades later, perhaps they wouldn’t have made the decisions that they did. Teach now, or beg later. Given that pair of choices, I’ll teach now. Who wouldn’t? In fact, I’m sure that many academics would love to put all the time and emotional energy wasted on fundraising into their teaching, instead, if it would solve the money problem now instead of 30 years from now. But the tenure system allowed a senior generation of academics to run up a social debt and hand their juniors the bill, and academia’s stuck in a shitty situation that it can’t work its way out of. So what can be done about it?

Coordinative vs. manipulative social skills

It’s well-understood that academics have poor social skills. By “well understood”, I don’t mean that it’s necessarily true, but it’s the prevailing stereotype. Do academics lack social skills? In order to answer this question, I’m going to split “social skills” up into three categories. (There are certainly more, and these categories aren’t necessarily mutually exclusive.) The categories are:

  • interpersonal: the ability to get along with others, be well-liked, make and keep friends. This is what most people think of when they judge another person’s “social skills”.
  • coordinative: the ability to resolve conflicts and direct a large group of people toward a shared interest.
  • manipulative: the ability to exploit others’ emotions and get them to unwittingly do one’s dirty work.

How do academics stack up in each category? I think that, in terms of interpersonal social skills, academics follow the standard trajectory of highly intelligent people: severe social difficulty when young that is worst in the late teens, and resolves (mostly) in the mid- to late 20s. Why is this so common a pattern? There’s a lot that I could say about it. (For example, I suspect that the social awkwardness of highly intelligent people is more likely to be a subclinical analogue of a bipolar spectrum disorder than a subclinical variety of autism/Asperger’s.) Mainly, it’s the 20% Time (named in honor of Google) Effect. That 10 or 20 percent social deficit (whether you attribute it to altered consciousness, via a subclinical bipolar or autism-spectrum disorder, or whether you attribute it to just having other interests) that is typical in highly intelligent people is catastrophic in adolescence but a non-issue in adulthood. A 20-year-old whose social maturity is that of a 17-year-old is a fuckup; a 40-year-old with the social maturity of a 34-year-old would fit in just fine. Thus, I think that, by the time they’re on the tenure track (age 27-30+) most professors are relatively normal when it comes to interpersonal social abilities. They’re able to marry, start families, hold down jobs, and create their own social circles. While it’s possible that an individual-level lack of interpersonal ability (microstate) is the current cause for the continuing dreadful macrostate that academia is in, I doubt it.

What about manipulative social skills? Interpersonal skills probably follow a bell curve, whereas manipulative social skill seems to have a binary distribution: you have civilians, who lack it completely, and you have psychopaths, who are murderously good at turning others into shrapnel. Psychopaths exist in academia, as they do everywhere, and they are probably not appreciably more or less common there than in other industries. Since academia’s failure is the result of a war waged on it by external forces (politicians devaluing and defunding it, and corporations turning it toward their own coarser purposes), I think it’s unlikely that academia is suffering from an excess of psychopaths within its walls.

What academia is missing is coordinative social skill. It has been more than 30 years since academia decided to sell out its young, and the ivory tower has not managed to fix its horrendous situation and reverse the decline of its relevance. Academia has the talent, and it has the people, but it doesn’t have what it takes to get academics working together to fight for their cause, and to reward the outreach activities (and especially teaching) that will be necessary if academia wants to be treated as relevant, ever again.

I think I can attribute this lack of coordinative social skill to at least two sources. The first is an artifact of having poor interpersonal skills in adolescence, which is when coordinative skills are typically learned. This can be overcome, even in middle or late adulthood, but it generally requires that a person reach out of his comfort zone. Interpersonal social skills are necessary for basic survival, but coordinative social skills are only mandatory for people who want to effect change or lead others, and not everyone wants that. So, one would expect that some number of people who were bad-to-mediocre, interpersonally, in high school and college, would maintain a lasting deficit in coordinative social skill– and be perfectly fine with that.

The second is social isolation. Academia is cult-like. It’s assumed that the top 5% of undergraduate students will go on to graduate school; except for the outlier case in which one is recruited for a high-level role at the next Facebook, smart undergraduates are expected to go straight there. Then, to leave graduate school (which about half do, before the PhD) is seen as a mark of failure. Few students actually fail out for a lack of ability (if you’re smart enough to get in, you can probably do the work) but a much larger number lose motivation and give up. Leaving after the PhD for, say, finance is also viewed as distasteful. Moreover, while it’s possible to resume a graduate program after a leave of absence or to join a graduate program after a couple years of post-college life, those who leave the academic track at any time after the PhD are seen as damaged goods, unhireable in the academic job market. They’ve committed a cardinal sin: they left. (“How could they?”) Those who leave academia are regarded as apostates, and people outside of academia are seen as intellectual lightweights. With an attitude like that, social isolation is to be expected. People who have started businesses and formed unions and organized communities could help academics get out of their self-created sand trap of irrelevance. The problem is that the ivory tower has such a culture of arrogance that it will never listen to such people.

Seem familiar?

Now, we focus on Silicon Valley and the VC virus that’s been infecting the software industry. If we view the future as linear, Silicon Valley seems to be headed not for irrelevance or failure but for the worst kind of success. Of course, history isn’t linear and no one can predict its future. I know what I want to happen. As for what will, and when? Some people thought I made a fool of myself when I challenged a certain bloviating, spoiled asshat to a rap duel– few people caught on to the logic of what I was doing– and I’m not going to risk making a fool of myself, again, by making predictions.

Software engineers, like academics, have a dreadful lack of coordinative social skill. Not only that, but the Silicon Valley system, as it currently exists, requires it. If software engineers had the collective will to fight for themselves, they’d be far better treated and be running the place, and it would be a much better world overall, but the current VC kingmakers wouldn’t be happy. Unfortunately, the Silicon Valley elite has done a great job of dividing makers on all sorts of issues: gender, programming languages, the H-1B program, and so on… all the while, the well-connected investors and their shitty paradrop executive friends make tons of money while engineers get abused– and respond by abusing each other over bike-shed debates like code indentation. When someone with no qualifications other than going to high school with a lead investor is getting a $400k-per-year VP/Eng job and 1% of the equity, and engineers are getting 0.02%, who fucking cares about tabs versus spaces?

Is Silicon Valley headed down the same road as academia? I don’t know. The analogue of “research über alles” seems to be a strange attitude that mixes young male quixotry, open-source obsession– and I think that open-source software is a good thing, but less than 5% of software engineers will ever be paid to work on it, and not everyone without a Github profile is a loser– and crass commercialism couched in allusions to mythical creatures. (“Billion-dollar company” sounds bureaucratic, old, and lame; “unicorn” sounds… well, incredibly fucking immature if you ask me, but I’m not the target market.) If that culture seems at odds with itself, that’s an accurate perception. It’s intentionally muddled, self-contradictory, and needlessly divisive. The culture of Silicon Valley engineering is one created by the colonial overseers, and not by the engineers. Programmers never liked open-plan offices and still don’t like them, and “Scrum” (at least, Scrum in practice) is just a way to make micromanagement sound “youthy”.

Academia in the 1970s faced no external force that tried to ruin it or (as has been done with Silicon Valley) turn it into an emasculated colonial outpost for the mainstream business elite. Academia created its own destruction, and the tenure system allowed it by enabling the arrogance of the established (which ruined the job prospects of the next generation). It was, I would argue, purely a lack of coordinative social skill, brought on by a cult-like social isolation, that did this. I would argue, though, that Silicon Valley was destroyed (and so far, the destruction is moral but not yet financial, insofar as money is still being made, just by the wrong people) intentionally. We need only examine one dimension of social skill– a lack of coordinative skill– to understand academia’s decline. In Silicon Valley, there are two at play: the lack of coordinative social skill among the makers who actually build things, and the manipulative social skills deployed by psychopaths, brought in by the mainstream business culture, to keep the makers divided over minutiae and petty drama. What this means, I am just starting to figure out.

Academia is a closed system and largely wants to be so. Professors, in general, want to be isolated from the ugliness of the mainstream corporate world. Otherwise, they’d be in it, making three times as much money on half the effort. However, the character of Silicon Valley makers (as opposed to the colonial overseers) tends to be ex-academic. Most of us makers are people who were attracted to science and discovery and the concept of a “life of the mind”, but left the academy upon realizing its general irrelevance and decline. As ex-academics, we simultaneously have an attitude of rebellion against it and a nostalgic attraction to its better traits, including its “coziness”. What I’ve realized is that the colonial overseers of Silicon Valley are very adept at exploiting this. Take the infantilizing Google Culture, which provides ball pits and free massage (one per year) but has closed allocation and Enron-style performance reviews. Google, knowing that many of its best employees are ex-academics– I consider grad-school dropouts to be ex-academic– wants to create the cult-like, superficially cozy world that enables people to stop asking the hard questions or putting themselves outside of their comfort zones (which seems to be a prerequisite for developing or deploying coordinative social skills).

In contrast to academia, Silicon Valley makers don’t want to be in a closed system. Most of these engineers want to have a large impact on the world, but a corporation can easily hack them (regardless of the value of the work they’re actually doing) by simply telling them that they’re having an effect on “millions of users”. This enables it to get a lot of grunt work done by people who’d otherwise demand far more respect and compensation. This ruse is similar to a cult that tells its members that large donations will “send out positive energy waves” and cure cancer. It can be appealing (and, again, cozy) to hand one’s own moral decision-making over to an organization, but it rarely turns out well.


I’ve already said that I’m not going to try to predict the future, because while there is finitude in foolishness, it’s very hard to predict exactly when a system runs out of greater fools. I don’t think that anyone can do that reliably. What I will do is identify points of strain. First, I don’t think that the Silicon Valley model is robust or sustainable. Once its software engineers realize on a deep level just how stacked the odds are against them– that they’re not going to be CEOs inside of 3 years– it’s likely either to collapse or to be forced to evolve into something that has an entirely different class of people in charge of it.

Right now, Silicon Valley prevents engineer awakening through aggressive age discrimination. Now, ageism is yet another trait of software culture that comes entirely from the colonial overseers. Programmers don’t think of their elders as somehow defective. Rather, we venerate them. We love taking opportunities to learn from them. No decent programmer seriously believes that our more experienced counterparts are somehow “not with it”. Sure, they’re more expensive, but they’re also fucking worth it. Why does the investor class need such a culture of ageism to exist? It’s simple. If there were too many 50-year-old engineers– who, despite being highly talented, never became “unicorn” CEOs, either because of a lack of interest or because CEO slots are still quite rare– kicking around the Valley, then the young’uns would start to realize that they, also, weren’t likely enough to become billionaires from their startup jobs to justify the 90-hour weeks. Age discrimination is about hiding the 50th-percentile future from the quixotic young males that Silicon Valley depends on for its grunt work.

The problem, of course, with such an ageist culture is that it tends to produce bad technology. If there aren’t senior programmers around to mentor the juniors and review the code, and if there’s a deadline culture (which is usually the case), then the result will be a brittle product, because the code quality will be so poor. Business people tend to assume that this is fixable later on, but often it’s not. First, a lot of software is totaled, by which I mean it would take more time and effort to fix it than to rewrite it from scratch. Of course, the latter option (even when it is the sensible one) is so politically hairy as to be impractical. What often happens, when a total rewrite (embarrassing the original architects) is called for, is that the team that built the original system throws so much political firepower (justification requests, legacy requirements that the new system must obey, morale sabotage) at it that the new-system team is under even tighter deadlines and suffers from more communication failures than the original team did. The likely result is that the new system won’t be any good either. As for maintaining totaled software for as long as it lives, those become the projects that no one wants to do. Most companies toss legacy maintenance to their least successful engineers, who are rarely people with the skills to improve it. With these approaches blocked, external consultants might be hired. The problem there is that, while some of these consultants are worth ten times their hourly rate, many expensive software consultants are no good at all. Worse yet, business people are horrible at judging external consultants, while the people who have the ability to judge them (senior engineers) have a political stake and therefore, in evaluating and selecting external code fixers, will be affected by the political pressures on them.

The sum result of all of this is that many technology companies built under the VC model are extremely brittle, and “technical debt” is often impossible to repay. In fact, “technical debt” is one of the worst metaphors I’ve encountered in this field. Debt has a known interest rate that is usually between 0 and 30 percent per year; technical debt has a usurious and unpredictable interest rate.
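To make the difference concrete, here’s a toy sketch (purely illustrative: the rates, periods, and the shortcut’s “principal” are made-up numbers, not a model of any real codebase) contrasting debt that compounds at a known rate with debt whose per-period rate can’t be known in advance:

```python
# Toy contrast: financial debt compounds at a fixed, known rate, while
# "technical debt" compounds at a rate that is both high and unpredictable.
import random

def financial_debt(principal, rate, periods):
    """Cost of servicing ordinary debt at a fixed, known rate."""
    return principal * (1 + rate) ** periods

def technical_debt(principal, periods, seed=0):
    """Cost of a shortcut whose 'interest rate' varies wildly each period."""
    rng = random.Random(seed)
    cost = principal
    for _ in range(periods):
        # Made-up assumption: anywhere from 0% to 200% per period.
        cost *= 1 + rng.uniform(0.0, 2.0)
    return cost

# Ten "periods" (say, release cycles) on a 10-unit shortcut:
print(round(financial_debt(10, 0.30, 10)))  # 138 -- bounded and predictable
print(round(technical_debt(10, 10)))        # varies by seed; often far worse
```

Even at a usurious 30% per period, the financial debt is at least budgetable; the point of the metaphor’s failure is that no one can quote you a rate on the technical kind.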

So what are we seeing, as the mainstream business culture completes its colonization of Silicon Valley? We’ve seen makers get marginalized, we’ve seen an ageism that is especially cruel because it takes so many years to become any good at programming, and we’ve seen increasing brittleness in the products and businesses created, due to the colonizers’ willful ignorance of the threat posed by technical debt.

Where is this going? I’m not sure. I think it behooves everyone who is involved in that game, however, to have a plan should that whole mess go into a fiery collapse.

Employees at Google, Yahoo, and Amazon lose nothing if they unionize. Here’s why.

Google, Yahoo, and Amazon have one thing in common with, probably, the majority of large, ethically-challenged software companies. They use stack-ranking, also known as top-grading, also known as rank-and-yank. By top-level mandate, some pre-ordained percentage of employees must fail. A much larger contingent of employees face the stigma of being labelled below-average or average, which not only blocks promotion but makes internal mobility difficult. Stack ranking is a nasty game that executives play against their own employees, forcing them to stab each other in the back. It ought to be ended. Sadly, software engineers do not seem to have the ability to get it abolished. They largely agree that it’s toxic, but nothing’s been done about it, and nothing will be done about it so long as most software engineers remain apolitical cowards who refuse to fight for themselves.
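The arithmetic of a pre-ordained quota is worth spelling out. Here’s a minimal sketch (hypothetical names and scores; not any company’s actual review system) showing that the number of “failures” is fixed before anyone’s work is even looked at:

```python
# Toy model of stack ranking: the failure count is set by the quota,
# not by how well anyone actually performed.
def stack_rank(scores, fail_fraction=0.10):
    """Label the bottom fail_fraction of employees as 'failing',
    regardless of their absolute performance."""
    n_fail = max(1, int(len(scores) * fail_fraction))
    ranked = sorted(scores.items(), key=lambda kv: kv[1])
    return {name for name, _ in ranked[:n_fail]}

# Every engineer here is objectively strong (scores 85-94), yet one must "fail":
team = {f"eng{i}": 85 + i for i in range(10)}
print(stack_rank(team))  # {'eng0'} -- the quota, not the work, decides
```

The mechanism is the whole problem: since someone must land in the bottom bucket no matter what, the rational move for each employee is to make sure it’s a colleague, which is exactly the back-stabbing the essay describes.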

I’ve spent years studying the question of whether it is good or bad for software engineers in the Valley to unionize. The answer is: it depends. There are different kinds of unions, and different situations call for different kinds of collective action. In general, I think the way to go is to create guilds like Hollywood’s actors’ and writers’ guilds, which don’t interfere with meritocracy through seniority systems or compensation ceilings, but do establish minimum terms of work, and provide representation and support in case of unfair treatment by management. Stack ranking, binding mandatory arbitration clauses, non-competes, and the mandatory inclusion of performance reviews in a candidate’s transfer packet for internal mobility could be abolished if unions were brought in. So what stands to be lost? A couple hundred dollars per year in dues? Compared to the regular abuse that software engineers suffer in stack-ranked companies, that has got to be the cheapest insurance plan that there is.

To be clear, I’m not arguing that every software company should be unionized. I don’t think, for example, that a 7-person startup needs to bring in a union. Nor is it entirely about size. It’s about the relationship between the workers and management. The major objections to unionization come down to the claim that they commoditize labor; what once could have had warm-fuzzy associations about creative exertion and love of the work is now something where people are disallowed from doing it more than 10 hours per day without overtime pay. However, once the executives have decided to commoditize the workers’ labor, what’s lost in bringing in a union? At bulk levels, labor just seems to become a commodity. Perhaps that’s a sad realization to have, and those who wish it were otherwise should consider going independent or starting their own companies. Once a company sees a worker as an atom of “headcount” instead of an individual, or a piece of machinery to be “assigned” to a specific spot in the system, it’s time to call in the unions. Unions generally don’t decelerate the commoditization of labor; instead, they accept it as a fait accompli and try to make sure that the commoditization happens on fair terms for the workers. You want to play stack-ranking, divide-and-conquer, “tough culture” games against our engineers? Fine, but we’re mandating a 6-month minimum severance for those pushed out, retroactively striking all binding mandatory arbitration clauses in employment contracts should any wrongful termination suits occur, offering to pay legal expenses of exiting employees, and (while we’re at it) raising salaries to a minimum of $220,000 per year. Eat it, biscuit-cutters.

If unions come roaring into Silicon Valley, we can expect a massive fight from its established malefactors. And since they can’t win in numbers (engineers outnumber them) they will try to fight culturally, claiming that unions threaten to create an adversarial climate between engineers and management. Sadly, many young engineers will be likely to fall for this line, since they tend to believe that they’re going to be management inside of 30 months. To that, I have two counterpoints. First, unions don’t necessarily create an adversarial climate; they create a negotiatory one. They give engineers a chance to fight back against bad behaviors, and also provide a way for them to negotiate terms that would be embarrassing for the individual to negotiate. For example, no engineer, while he’s negotiating a job offer, can talk about ripping out the binding mandatory arbitration clause (it signals, “I’m considering the possibility, however remote, that I might have to sue you”) or fight against over-broad IP assignments (“I plan on having side projects which won’t directly compete with you, but that may compete for my time, attention and affection”) or non-competes (“I haven’t ruled out the possibility of working for a competing firm”). Right now, the balance of power between employers and employees in Silicon Valley is so demonically horrible that simply insisting on having one’s natural and legal rights makes that prospective employee, in HR terms, a “PITA” risk and that will end the discussion right there. Instead, we need a collective organization that can strike these onerous employment terms for everyone.

When a company’s management plays stack-ranking games against its employees, an adversarial climate between management and labor already exists. Bringing in a union won’t create such an environment; it will only make the one that exists more fair. You absolutely want a union whenever it becomes time to say, “Look, we know that you view our labor as a commodity– we get it, we’re not special snowflakes in your eyes, and we’re fine with that– so let’s talk about setting fair terms of exchange”.

Am I claiming that all of Silicon Valley should be unionized? Perhaps an employer-independent and relatively lightweight union like Hollywood’s actors’ and writers’ guilds would be useful. With the stack-rank companies in particular, however, I think that it’s time to take the discussion even further. While I don’t support absolutely everything that people have come to associate with unions, the threat needs to be there. You want to stack-rank our engineers? Well, then we’re putting in a seniority system and making you unable to fire people without our say-so.

At Google, for example, engineers live in perennial fear of “Perf” and “the Perf Room”. (There actually is no “Perf Room”, so when a Google manager threatens to “take you into the Perf Room” or to “Perf you”, it’s strictly metaphorical. The place doesn’t actually exist, and while the terminology often gets a bit rapey– an employee assigned a sub-3.0 score is said to be “biting the pillow”– all that actually happens is that a number is inserted into a computerized form.) Perf scores, which are often hidden from the employee, follow her forever. They make internal mobility difficult, because even average scores make an engineer less desirable as a transfer candidate than a new hire– why take a 50th- or even 75th-percentile internal hire and risk angering the candidate’s current manager, when you can fill the spot with a politically unentangled external candidate? The whole process exists to deprive the employee of the right to state her own case for her capability, and to represent her performance history on her terms. And it’s the sort of abusive behavior that will never end until the executives of the stack-ranked companies are opposed with collective action. It’s time to take them, and their shitty behaviors, into the Perf Room for good.

Anger’s paradoxical value, and the closing of the Middle Path in Silicon Valley


Anger is a strange emotion. I’ve made no efforts to conceal that I have a lot of it, and toward such vile targets (such as those who have destroyed the culture of Silicon Valley and, by extension due to that region’s assigned status of leadership, the technology industry) that most would call it “justified”. Anger is, however, one of those emotions that humans prefer to ignore. It produces (in roughly increasing order of severity) foul language, outbursts, threats, retaliations and destroyed relationships, and frank physical violence. The fruits of anger are disliked, and not for bad reasons, because most of those byproducts are horrible. Most anger is, additionally, a passing and somewhat errant emotion; the target of the anger might not be deserving of violence, retaliation, or even insults. In fact, some anger is completely unjustified; so it’s best not to act on anger until we’ve had a chance to process and examine it. The bad kind of anger tends to be short-lived but, if humans acted on it when it emerged, we wouldn’t have made it this far as a species. Still, most of us agree that much anger, especially the long-lived kind that doesn’t go away, is justified in some moral sense. To be angry, three years later, at an incompetent driver is deemed silly. To be angry over a traumatic incident or a life-altering injustice is held as understandable.

However, is justified anger good? The answer, I would say, is paradoxical. For the individual, anger isn’t good. I’m not saying that the emotion should be ignored or “bottled in”. It should be acknowledged and allowed to pass. Holding on to it forever is, however, counterproductive. It’s stressful and unpleasant and sometimes harmful. As the Buddha said, “holding on to anger is like grasping a hot coal with the intent of throwing it at someone else; you are the one who gets burned.” Anger, held too long, is a toxic and dreadful emotion that seems to be devoid of value– to the individual. This isn’t news. So what’s the issue? Why am I interested in talking about it? Because anger is extremely useful for the human collective.

Only anger, it often seems, can muster the force that is needed to overthrow evil. Let’s be honest: the problem has its act together. We aren’t going to overthrow the global corporate elite by beaming love waves at them. No one is going to liberate the technology industry from its Damaso overlords with a message of hope and joy alone. We can probably get them to vacate without forcibly removing them, but it’s not going to happen without a threatening storm headed their way. Any solution to any social problem will involve some people getting hurt, if only because the people who run the world now are willing to hurt other people, by the millions, in order to protect their positions.

Anger is, I’m afraid, the emotion that spreads most quickly throughout a group, and sometimes the only thing that can hold it together. Of course, this can be a force for good or for evil. Many of history’s most noted ragemongers were people who did harm to the world. I would, however, say that this fact supports the argument that, if good people shy away from the job of spreading indignation and resentment, then only evil people will be doing it. For me, that’s an upsetting realization.

Whether we’re talking about “yellow journalism” or bloggers or anyone else who fights for social change, spreading anger is a major part of what they do. It’s something that I do, often consciously. The reason, when I discuss Silicon Valley’s cultural problems, for me to mention Evan Spiegel or Lucas Duplan (for the uninitiated, they are two well-connected, rich, unlikeable and unqualified people who were made startup founders) is that they inspire resentment and hatred. Dry discussions of systemic problems don’t lead to social change; they lead to more dry debate and that debate leads to more debate, but nothing ever gets done until someone “condescends” to talk to the public and get them pissed off. For that purpose, a Joffrey figure like Evan Spiegel is just much “catchier”. This is why founder-quality issues like Duplan and Spiegel, and “Google Buses”, are a better vector of attack against Sand Hill Road than the deeper technical reasons (e.g. principal-agent problems that take kilowords to explain in detail) for that ecosystem’s moral failure. It’s hard to get people riled up about investor collusion, and much easier to point to this picture of Lucas Duplan.

This current incarnation of Silicon Valley needs to be pushed aside and discarded, because it’s hurting the world. The whole ecosystem– the shitty corporate cultures with the age discrimination and open-plan fetishism, the juvenile talk about “unicorns” because it’s a cute way of covering up the reality of an industry that only cares about growth for its own sake, the insane short-term greed, the utter lack of concern for ethics, the investor collusion, and the founder-quality issues– needs to be burned to the ground so we can build something new. And I have enough talent that, while I can’t change anything on my own, I can contribute. When I (unintentionally) revealed the existence of stack-ranking at Google to the public, I damaged that company’s reputation. The degree to which I did so is probably not significant, relative to its daily swings on the stock market, but with enough people in the good fight, victory is possible.

Here’s what I don’t like. Clearly, anger is painful for the person experiencing it. As an individual, I would do better to let it pass. I can personally deal with the pain of it, but it leads me to question whether there is social value in disseminating it. And yet, without people like me spreading and multiplying this justified anger at the moral failure of Silicon Valley, no change will occur and evil will win. This is what makes anger paradoxical. As an individual, the prudent thing to do is to let it go. For society, moral justice demands that it spread and amplify. Even if we accept that collective anger can just as easily be a force for bad (and it can), we still have to confront the fact that if good people decline to spread and multiply anger against evil, then the sheer power of collective anger will be wielded only by evil. We need, as a countervailing force, for the good people to comprehend and direct the force of collective anger.

The Middle Path

Why do I detest Silicon Valley? I don’t live there, and I have better options than to take a pay cut in exchange for 0.03% of a post-A startup, so why does that cesspool matter to me at this point? In large part, it’s because the Bay Area wasn’t always a cesspool. It used to be run by lifelong engineers for engineers, and now it’s some shitty outpost of the mainstream business culture, and I find that devolution to be deplorable. The Valley used to be a haven for nerds (here, meaning people who value intellectual fulfillment more than maximizing their wealth and social status) and now it’s become a haven for MBA-culture rejects who go West to take advantage of the nerds. It’s a joke, it’s awful, and it’s very easy to get angry at it. But why? Why is it worth anger? Shouldn’t we divest ourselves, emotionally, and be content to let that cesspool implode?

I don’t care about Silicon Valley, meaning the Bay Area, but I do care about the future of the technology industry. Technology is just too important to the future of humanity for us to ignore it, or to surrender it to barbarians. The technology industry used to represent the Middle Path between the two undesirable options of (a) wholesale subordination to the existing elite and (b) violent revolt. It was founded by people who neither wanted to acquiesce to the Establishment nor to overthrow it with physical force. They just wanted to build cool things, to indulge their intellectual curiosities, and possibly to outperform an existing oligarchy and therefore refute its claims of meritocracy.

Unfortunately, Silicon Valley became a victim of its own success. It outperformed the Establishment and so the Establishment, rather than declining gracefully into lesser relevance, found a way to colonize it through the good-old-boy network of Bay Area venture capital. To be fair, the natives allowed themselves to be conquered. It wasn’t hard for the invaders to do, because software engineers have such a broken tribal identity and such a culture of foolish individualism that divide-and-conquer tactics worked easily. (For a modern example that illustrates how fucked we are as a tribe, look at “Agile”/Scrum, which has evolved into a system where programmers rat each other out to management for free.) Programmers are, not surprisingly, prone to a bit of cerebral narcissism, and the result of this is that they lash out with more anger at unskilled programmers and bad code than against the managerial forces (lack of interest in training, deadline culture) that created the bad programmers and awful legacy code in the first place. It’s remarkably easy for a businessman to turn a group of programmers against itself, so much so that any collective action (either a labor union, or professionalization) by programmers remains a pipe dream. The result is a culture of individualism and arrogance in which almost every programmer believes that most of his colleagues are mouth-breathing idiots (and, to be fair, most of them are severely undertrained). There’s a joke in Silicon Valley about “flat” software teams where every programmer considers himself to be the leader, but it’s not entirely a joke. In the typical venture-funded startup, the engineers each believe that they’ll have investor contact within 6 months and founder/CEO status inside of 3 years. (They wouldn’t throw down 90-hour weeks if it were otherwise.)
By the time programmers are old enough to recognize how rarely that happens (and how even more rarely people actually get rich in this game, unless they were born into the contacts that put them on the VC side or can have themselves inserted into high positions in portfolio companies, allowing diversification) they’re judged as being too old to program in the Valley. That is too convenient for those in power to be a coincidence.

Sand Hill Road needs to be taken down because it has blocked the Middle Path that used to exist in Silicon Valley, and that should exist, if not in that location, in the technology industry somewhere. The old Establishment might have its territory chipped away (harmlessly, most often, because large corporations don’t die unless they do it to themselves) by technology startups, and it was content to have this happen because, so often, the territory it lost was what it didn’t understand well enough to care about. The new Establishment, on Sand Hill Road, is harder to outperform because, if it sees you as a threat, it will fund your competitors, ruin your reputation, and render your company unable to function.

I don’t believe that Silicon Valley’s closing of the Middle Path will be permanent, and it’s best for all of us that it not be. I am obviously not in favor of subordination to the global elite. They are the enemy, and something will have to be done about, or at least around, them in order to reverse the corruption and organizational decay that they’ve inflicted on the world. On the other hand, I view violent revolution as an absolute last resort. Violence is preferable to subordination and defeat, but nonetheless it is usually the absolute worst possible way to achieve something. Disliking the extremes, I want the moderate approach: effective opposition to the enemies of progress, without the violence that so easily leads to chaos and the harm of innocents. So when the mainstream business elite enters a space (like technology) in which it does not belong, colonizes it, and thereby blocks the Middle Path, it’s a scary proposition. Of course I cannot predict the future, but I can perceive risks; and the closing of the Middle Path represents too much of a risk for us to allow it. If the Middle Path has closed in venture-funded technology in the Valley, it’s time to move on to something else.

Do I think that humanity is doomed, simply because a man-child oligarchy in one geographical area (“Silicon Valley”) has closed the Middle Path that once existed there? Of course not. Among those in the know, the VC-engorged monstrosity that now exists in the Valley has ceased to inspire, or even to lead. It seems, then, that it is time to move past it, and to figure out where to open a new Middle Path.

If getting people to do this– to recognize the importance of doing this– requires a bit of emotional appeal along a vector such as anger or resentment, I’ll be around and I know how to pull it off.