A 3-tiered model of trust, and how con men hack people.

Something I’ve observed in a variety of human organizations, including almost all businesses, is that the wrong people are making major decisions. I’m not talking about second-best players or even mediocrities becoming leaders; I’m talking about the rise of people who shouldn’t even be trusted with a bag of rock salt. White-collar social climbers with no more integrity than common con artists are the ones to rise through the ranks, while the most honest people (some deserving, most not) are the ones to stagnate or be pushed out. Why is this happening? It’s not that all successful and powerful people are bad. Some are; most aren’t. The problem is more subtle: it’s that the wrong people are trusted. Good people are probably slightly more likely to succeed than bad people at forming companies, but bad people rise through the ranks and take them over nonetheless. To understand why this happens, it’s important to understand trust, and why it is so easy for a class of people to earn trust they don’t deserve, and to retain that trust in spite of bad actions.

As I work my way through George R. R. Martin’s A Song of Ice and Fire, I’m starting to get a sense of just how well this author understands human nature. Unlike many fantasy novels with clear heroes and cosmic villains, the moral topology of Martin’s world is approached from several dog’s-eye views, without omniscient or prescriptive narration. It’s not clear who the heroes and villains are. Charming characters can be treacherous, while those hardest to love are the most interesting. Martin writes using limited third-person narration, with each chapter told from a different character’s point of view. What is most interesting is how the perception of a character changes once his or her intentions are revealed. In a novel, you actually can understand the motivations of characters– even dangerous and disliked ones like Jaime Lannister and Theon Greyjoy. You can get the whole story. In real life, people only get their own.

Something emerges as I relate the moral questions posed by narrative to the murkier world of human interaction, and it’s why people (myself included) are generally so awful at judging character. I’ve come to the conclusion that, subconsciously, most of us model the question of people’s trustworthiness with a three-tiered approach. The superficial tier is that person’s speech and social skill. What does he say? The middle tier is the person’s actions. What does he do? The deepest tier is that person’s intention. What does he want? For better or worse, our tendency to separate people into “good” and “evil” relies on our assessment of a person’s true intention, rather than that person’s action.

A person who does seemingly bad things for good purposes is a dark hero, like Severus Snape in the Harry Potter series. A person who does good things with bad intentions (consider the Manhattan charity scene, a theater for social climbing more than service) is a disliked phony. This attitude would make a lot of sense if we could reliably read people’s intentions. We develop first-degree trust in a person if we find that person to be socially pleasant. At this level, we’d invite that person to a party, but not share our deepest secrets. We develop second-degree trust in people who do things we like, and who refrain from doing things we dislike. Most people would call a mutual relationship of second-degree trust a friendship, although friendship involves other axes than trust alone. Third-degree trust is reserved for people we believe have the best intentions: people who might commit actions we dislike (potentially having information we don’t) but who we believe will do the right thing.

If the exploit isn’t visible, I’ll spell it out cleanly. In the real world, one really never knows what another person’s intentions are. That’s pure guesswork. Unlike in fiction, we only know our own intentions, and sometimes not even that. We have a desperate desire to know others’ intentions, but we never will. The quality of evidence available to us, even for the most perceptive and socially skilled people, is poor. So, this admits a hack. What tends to happen when knowledge is impossible to have but people desperately want it? People come up with explanations, and those with the most pleasing ones profit. Many religious organizations and movements exist on this principle alone. That which is said in the right way can appear to betray intentions. In other words, a first-level interaction (what the person says) is dressed up as carrying third-degree knowledge (of intimate intention).

This is how con artists work, but it also explains the operation of white-collar social climbers and the shenanigans that corporations use, in the guise of corporate “culture” and “changing the world”, to encourage naive young people to work three times as hard as they need to, for half the reward. They create a ruse of transparency about their intentions, earning some measure of third-degree trust from the naive. What this allows them to do is be malevolent on the second degree (i.e. perform bad actions, including those harming the finances and careers of their victims) and have a surprising number of loyal acolytes (including victims) making excuses for this behavior.

Essentially, this is the first tier of interaction and trust (the superficial one) overriding the second (of actions) by masquerading as the third (of intentions). It’s an exploit that exists because people don’t want to admit to the true nature of the world they live in, which is one where another person’s intentions are almost always opaque. This doesn’t mean most people are “bad” (not true) or have “hidden agendas” (true but irrelevant, in that all “agendas” are equally hidden)– it’s just the structural nature of a world where minds are very difficult, and sometimes impossible, to read. People have a hard time accepting this limitation, especially because the most socially confident seem not to have it, even though all people do. They compensate by developing the notion that they can read others’ intentions, a foolish confidence in their own social skill.

Some people are easy to read. For example, infants usually cry because they’re cold, hot, hungry, thirsty, or in pain. Children are, likewise, often relatively easy to read. The least socially skilled third of adults are generally easy to understand, at least partially, in this way. Moreover, assessments of motivation are often made as a sort of social punishment for undesirable actions: it’s bad enough for this person to be caught, but the insult is the assessment of his motivation. It’s a paternalistic way of calling someone a child. I know what you’re up to. It’s an assertion that often has no basis, but it gives a certain class of people confidence in their paternalistic superiority. People with this attitude tend to grow in their foolish confidence as they become more successful and powerful, to their detriment. As they rise, they need lackeys and lieutenants and advisors. They need to trust people; most of all, they need to believe they can trust people’s intentions. Of course, they’ve also been shaped by experience into people with supreme confidence in their own ability to judge others’ character…

Enter the psychopath. Contrary to popular depiction, most psychopaths are not murderers, rapists, or torturers. The majority of them are not violent, and those with violent intentions are usually able to have others do their dirty work. Most eschew violence, which is dangerous, illegal, and almost never confers any benefit (financial or social) in modern times. They’d rather rob people than kill them– it’s easier, and the rewards are greater. Also, it’s an open question whether psychopathy is a “mental illness”, but there is no connection between psychopathy and psychosis; the latter is rarely associated with mental effectiveness or social skill. Instead, psychopaths’ minds tend to be as clear as anyone else’s. What characterizes the psychopath is a lack of conscience and an infinitely deep selfishness. Also, most of them are exceptionally skilled actors. Although their emotional growth tends to be stunted in childhood or adolescence, they can mimic as wide a range of emotions as anyone else. In fact, they are superior to typical people at having the “right” emotions for various circumstances. Psychopaths have no tell-tale signs, and they don’t seem like “mean” people. They are effectively invisible. Among the upper management of most companies, they are surprisingly common, yet rarely detected until after they’ve done their damage.

Psychopaths could not be more at home than they are in the white-collar social climbing theater of the typical corporation. The outsized rewards for corporate officers feed their narcissism, the intrigues enable their cutthroat tendencies, and their superficial charm enables their effortless rise. They acquire (misplaced) trust quickly, on account of their unusually high skill at emotional mimicry. They are not supernatural, so they cannot read the intentions of those they intend to please. Instead, they dress their intentions in such a way that the people in power will read whatever they want to see. Like “psychics”, they hedge what they say so that those in power, interpreting flexibly, will conclude they were right all along. They seem to have “vision” and character because they can exploit the “just like me” fallacy of their superiors. In reality, they are the worst kind of mercenary turncoat. Their “vision” is of themselves on top of something, but that could be a mountain of gold or of bones. They don’t care, as long as they win and others lose.

After a psychopath has run his course, the company where he worked is usually damaged immensely. Million- or billion-dollar losses can occur, top executives can be jailed, and thousands of jobs can be cut. Psychopaths burn whatever is no longer useful to them. After this, people tend to back-reason their interactions with that person. “I knew he was up to something.” “I never liked him.” In most cases, that’s not accurate. What really happened is this: it was obvious that this person’s actions (second level) were risky, harmful, or even criminal, but the person was so effective at making it seem that he had the right intentions (third level) that people ignored the obvious warning signs. They made excuses. They misinterpreted the person’s superficial charm as a sign of good intentions, and they were burned. Or, perhaps this word is better: they were hacked.

Functional programming is a ghetto

Functional programming is a ghetto.

Before any flamewars can start, let me explain exactly what I mean. I don’t mean “functional programming sucks”. Far from it. The opposite, actually. Not all ghettos are poor, crime-ridden, and miserable. Jewish ghettos existed for centuries in Europe, from the Renaissance to World War II, and many were intellectual centers of the world. Some were quite prosperous. Harlem was, at one time, an upper-middle-class African-American community at the center of some of America’s most important artistic contributions. The same is true of functional programming. It’s the underappreciated intellectual capital of the programming world, in that its ideas (eventually) trickle down into the rest of the industry, but it’s still a ghetto. A ghetto is the urban analog of a geographic enclave: it’s included in the metropolis, but culturally isolated and usually a lot smaller than the surrounding city. It often harbors those who’ve struggled outside of it. Those who become too used to its comforts view the outside world with suspicion, while those on the outside have a similar attitude of distrust toward those within. Ghettos usually imply that there’s something involuntary about being there, but that’s often not the case. Chinatowns are voluntary ghettos, in the non-pejorative sense, as are some religious communities like monasteries. “Functional programming” is, likewise, a voluntary ghetto. We’ve carved out an elite niche in the software industry, and many of us refuse to work outside of it, but we’re all here by choice.

What is functional programming? Oddly enough, what I’m about to talk about is not functional programming in the purist sense, because most “functional programmers” are not averse to using side effects. The cultural issues surrounding functional programming are not about some abstract dislike of computational effects, but rather an understanding of the necessity of managing the complexity they create, and using tools (especially languages) that make sane development possible. Common Lisp, Scala, and Ocaml are not purely functional languages, but they provide native support for the abstractions that make functional programming possible. What real functional programmers do is “multi-paradigm”– mostly functional, but with imperative techniques used when appropriate. What the debate comes down to is the question of what should be the primary, default “building block” of a program. To a functional programmer, it’s a referentially-transparent function (i.e. one that returns the same output for the same input, like a mathematical function). In imperative programming, it’s a stateful action. In object-oriented programming, it’s an object, a more general construct that might be a referentially-transparent function, might represent an action in a hand-rolled domain-specific language (DSL), or might be something else entirely. Of course, most “object-oriented programming” becomes a sloppy mix of multiple styles and ad-hoc DSLs, especially as more than one developer comes to work on an object-oriented project. That’s a rant for later.

In general, functional programming is right. The functional approach is not right for every problem, but there is a right answer regarding the default abstractions for building most high-level programs: immutable data and referentially transparent functions should be the default, except in special cases where something else is clearly more appropriate. Why? A computational action is, without more knowledge, impossible to test or reason about, because one cannot control the environment in which it exists. One needs to be able to know (and usually, to control) the environment in which the action occurs in order to know if it’s being done right. Usually, the tester wants to be able to cover all special cases of this action, in which case knowing the environment isn’t enough; controlling it is also necessary. But at this point, the state of the environment in which the test happens is an implicit parameter to the action, and making it explicit would make it a referentially-transparent function. In many cases, it’s better to do so– when possible. It might not be. For example, the environment (e.g. the state in a computer cluster database) might be too large, complex, or volatile to explicitly thread into a function. Much of real-world functional programming is not about eliminating all state, but about managing what state is intrinsic, necessary, or appropriate.
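Sketched directly in OCaml (with names invented purely for illustration, not taken from any real codebase), the transformation looks roughly like this; the blackboard example below makes the same point in human terms.

```ocaml
(* A stateful action: its result depends on a hidden, mutable environment,
   so testing it means knowing and controlling that environment first. *)
let price_table : (string, float) Hashtbl.t = Hashtbl.create 16
let price_of item = Hashtbl.find_opt price_table item

(* The environment made explicit and immutable: a referentially transparent
   function, testable with nothing but inputs and outputs. *)
let price_of_pure prices item = List.assoc_opt item prices

(* e.g. price_of_pure [("tea", 2.50); ("coffee", 3.00)] "tea" = Some 2.5 *)
```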

For a concrete example, let’s say I have a blackboard, face down, with a number (state) on it, and I ask someone to read that (call it n) and erase the number, then write n+1 on the blackboard. I’m assuming he won’t lie to me, and that he’s physically capable of lifting the board; I want to determine if he can carry out this operation. If I don’t know n, and only see what is written after he is done, I have no hope of knowing whether the person carried out my order correctly. Of course, I could control the testing environment and write 5 on the blackboard before asking him to do this. If he writes 6, then I know he did what I asked him to do. At that point, though, the blackboard isn’t necessary. It’s more lightweight to just ask him, “what is 5 + 1?” I’ve moved from an imperative style of testing to a functional one: I’m determining whether his model of the addition function gives the right answer, rather than putting him through an exercise (action) and checking the state after it is done. The functional alternative is a unit test. I’m not trying to assess whether he knows how to turn over a blackboard, read it, erase it, and write a new number on it, because I only care about whether he can add. If I want to assess all of those as well, then I need to make an integration test of it. Both types of test are necessary in real-world software engineering, but the advantage of unit tests is that they make it easy to determine exactly what went wrong, facilitating faster debugging.
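Here is the blackboard example as a rough OCaml sketch (the names are mine, purely for illustration): the unit test exercises only the arithmetic, while the integration-style test has to set up and then inspect the mutable “blackboard”.

```ocaml
(* Imperative version: the blackboard is mutable state. *)
let blackboard = ref 0
let increment_board () = blackboard := !blackboard + 1

(* Functional core: the arithmetic the exercise is really about. *)
let add_one n = n + 1

(* Unit test: a failure here points directly at the addition. *)
let () = assert (add_one 5 = 6)

(* Integration-style test: set up the state, run the action, then inspect
   the state afterward. *)
let () =
  blackboard := 5;
  increment_board ();
  assert (!blackboard = 6)
```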

Testing, debugging, and maintenance are a major component of real-world software engineering, and functional programming gives us the tools to tackle these problems in a tractable way. Functions should be referentially transparent and, ideally, small (under 20 lines when reasonable). Large functions should be broken up, hierarchically, into smaller ones, noting that often these small components can be used in other systems. Why is this desirable? Because modularity makes code reuse easier, makes debugging and refactoring much simpler (fixes only need to be made in one place), and, in the long term, makes code more comprehensible to those who will have to modify and maintain it. People simply can’t hold a 500-line object method in their heads at one time, so why write these if we can avoid doing so?
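To make the decomposition concrete, here is a small hypothetical OCaml sketch (the function names and the “item,amount” line format are invented for illustration): instead of one long function that parses, filters, and totals a report in a single body, the work is split into small, separately testable pieces.

```ocaml
(* Parse one "item,amount" line into an amount, if it is well-formed. *)
let parse_amount line =
  match String.split_on_char ',' line with
  | [_item; amount] -> float_of_string_opt amount
  | _ -> None

(* Keep only the lines that parsed. *)
let valid_amounts lines = List.filter_map parse_amount lines

(* Sum a list of amounts. *)
let total amounts = List.fold_left ( +. ) 0.0 amounts

(* The top-level function becomes a one-liner over named, reusable parts. *)
let report_total lines = total (valid_amounts lines)
```

Each piece can be unit-tested on its own, so a failure in the report points at one small, named function rather than at a 500-line blob.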

The reality, for those of us who call ourselves functional programmers, is that we don’t always write stateless programs, but we aim for referential transparency or for obvious state effects in interfaces that other programmers (including ourselves, months later) will have to use. When we write C programs, for example, we write imperative code because it’s an imperative language, but we aim to make the behavior of that program as predictable and reasonable as we possibly can.

Functional programming, in the real world, doesn’t eschew mutable state outright. It requires mindfulness about it. So why is functional programming, despite its virtues, a ghetto? The answer is that we tend to insist on good design, to such a degree that we avoid taking jobs where we’re at risk of having to deal with bad designs. This isn’t a vice on our part; it’s a learned necessity not to waste one’s time or risk one’s career trying to “fix” hopeless systems or collapsing companies. Generally, we’ve come to know the signs of necrosis. We like the JVM languages Clojure and Scala, and we might use Java-the-language when needed, but we hate “Java shops” (i.e. Java-the-culture) with a passion, because we know that they generate intractable legacy messes in spite of their best efforts. Say “POJO” or “Visitor pattern”, and you’ve lost us. This seems like arrogance, but for a person with a long-term view, it’s necessary. There are a million “new new things” being developed at any given time, and 995,000 of them are reincarnations of failed old things that are going to crash and burn. If I could characterize the mindset of a functional programmer, it’s that we’re conservative. We don’t trust methodologies or “design patterns” or all-purpose frameworks promising to save the world, we don’t believe in “silver bullets” because we know that software is intrinsically difficult, and we generally don’t believe that the shiniest IDE provides enough to compensate for its shortfalls and false comforts. For example, an IDE is useless for resolving a production crisis occurring on a server 3,000 miles away. We’d rather use vim or emacs, and the command line, because we know they work pretty much everywhere, and because they give us enough power in editing to be productive.

From a functional programmer’s perspective, it’s easy to mistake the rest of the software industry for “the ghetto” (especially considering the pejorative association, which I am trying to disavow, with that word). Our constructions are stable and attractive, and we do such a good job of cleaning up after ourselves that there’s not much horseshit on our streets. Outside our walls are slums with rickety, fifteen-story tenements that are already starting to lean. The city without is sloppy and disease-ridden, and everything built out there will be burned down within ten years, to kill the plague rats. We don’t like to go there, but sometimes there are advantages to doing so– for one thing, it’s fifty times larger. If we lose awareness of size and scale and what this means, we can forget that we are in the ghetto. That’s not to say we shouldn’t live in one, for it’s a prosperous and intellectually rich ghetto we inhabit, but a ghetto it is.

I think most functional programmers only get a full awareness of this when we’re job searching, and thanks to most of us being in the top 5% of programmers, our job searches tend to be short. Still, I’ve lost count of the number of times over the past five years that I’ve found a job listing that looked interesting, except for its choice of language. “5 years of experience in Java, including knowledge of design-pattern best practices.” Nope. It might be a good company writing bad copy, but its technical choices look exactly the same as those of the bad firms, so how can I be sure? The process quickly becomes depressing. It’s not that Java or C++ are “dirty” languages that I would never use. It’s that any job that involves using these languages full-time is so likely to suck that it’s hardly worth investigating. Occasionally, C++ and Java are the right tools for the job, but no one should try to build a company on these languages. Not in 2012. Java isn’t a language that people choose to use, not for primary development. Not if they’ve used three or four languages in their career. It’s a language that people make other people use. Usually, it’s risk-averse and non-technical managers making that call. A Java Shop is almost always a company in which non-engineers call the shots.

What we call functional programming is something of a shibboleth for good-taste programming. We prefer the best programming languages, like Ocaml and Clojure, but we don’t actually restrict ourselves to writing functional programs. Do we use C when it’s the right tool for the job? Hell yeah. Do we put mutable state into a program when it makes it simpler (as is sometimes the case)? Hell yeah. On the other hand, we trust the aesthetic and architectural decisions made by brilliant, experienced, gray-bearded engineers far more than we trust business fads. We have a conservative faith in simplicity and ease-of-use over the shifting tastes of mainstream managerial types and the superficial attractiveness of silver bullets and “methodologies”. We roll our eyes when some fresh-faced MBA tells us that structuring our calendar around two-week “iterations” will solve every software problem known to humankind. Unfortunately, this insistence (often in the face of managerial authority) on good taste makes us somewhat unusual. It stands out, it can be unpopular, and it’s not always good for one’s career. Few stand with us. Most leave our camp, either to become managers (in which case, even a Java Shop is a plausible employer) or to accept defeat and let bad taste win. It’s hard to live in a ghetto.

Now, I have little faith in the stereotypical average programmer, the one who never thinks a technical thought after 5:01 pm, and who doesn’t mind using Java full-time because the inevitable slop is the problem of some “maintenance guy”. That person probably shouldn’t be programming. On the other hand, we’re about 2 percent of the software industry, if that, right now. We can reach out. We can do better. We’re not so brilliant that the other 98% of programmers have no hope of joining us. Not even close. There are many IDE-using, Java-hacking, semi-bored developers who are just as smart as we are but haven’t seen the light yet. It’s our job to show it to them, and if we fail to convince them that they could become 2 to 10 times more productive than they ever dreamed of being, and that programming can become fun again, then we’re the ones to blame. We must reach out, and we can probably bring 10, 20, maybe even 30 percent of programmers over to the light side, bringing about dramatic changes in the software industry.

I think the integrity of our industry depends on our ability and willingness to figure out how to do this.

Java Shop Politics

Once, I was at a company that was considering moving its infrastructure over to Java (and eventually did), and there was a discussion about the danger of “Java Shop Politics”. It would seem strange to any non-programmer that a company’s choice of programming language would alter the political environment– these languages are just tools, right? Well, no. In this case, almost all of us knew exactly what was being talked about. Most software engineers have direct experience with Java Shop Politics, and it has a distinct and unpleasant flavor.

When Unix and C were developed, they were designed by people who had already experienced firsthand the evils of Big Software, or monolithic systems comprising hundreds of thousands of lines of code, written without attention to modularity, that had often swelled to the point where no one understood the whole system. (In the 1970s, these monoliths were often written in assembly language, in which macroscopically incomprehensible code is not hard to create.) The idea behind the Unix environment, as a reaction to this, was to encourage people to write small programs and build larger systems using simple communication structures like pipes and files. Although C is a strictly compiled language and no Lisp-style REPL existed for it, C programs were intended to be small enough that the Unix operating system was an acceptable REPL. This was not so far from the functional programming vision, and far more practical in its time. The idea behind both is to write small programs (functional “building blocks”) that are easy to reason about, and build more complex systems out of them, while retaining the ability to piecewise debug simple components in the event of failure. This style of development works extremely well, because it encourages people to build tools for general use, rather than massive projects that, if they fail, render almost all of the effort put into the project useless. As more code is written, this leads to the growth of generally useful libraries and executable utilities. In the big-program model of development, by contrast, it leads to increasing complexity within one program, which can easily make whole-program comprehension impossible, and make decay (in the absence of forward refactoring) inevitable, regardless of programming language or engineer talent.
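The analogy carries over directly to code. A rough OCaml sketch (the stage names are invented, just to show the shape): each stage is a small, separately testable “program”, and the pipeline operator plays the role of the Unix pipe.

```ocaml
(* Each stage is a small, independently testable "program". *)
let tokenize text = String.split_on_char ' ' text
let keep_nonempty words = List.filter (fun w -> w <> "") words
let count words = List.length words

(* The pipeline: small pieces glued together, debuggable piecewise,
   much as small Unix programs are glued together with pipes. *)
let word_count text = text |> tokenize |> keep_nonempty |> count
```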

I adhere to this small-program mentality. I’m not saying “don’t be ambitious”. Be ambitious, but build large systems while keeping individual modules small. If a file is too big to read (for full comprehension) in one sitting, break it up. If “a program” is too big to read in a week, then it should be respected (even if it runs as one executable) as a system, and systems are harder to manage than single modules. While it is harder, up front, to develop in a modular style, the quality of the product is substantially higher, and this saves time in the long run.

Many software managers, unfortunately, like Big Projects. They like huge systems with cool-sounding names like “Rain Man” and “Dexter” that swell to a hundred thousand lines of code and provide features no one asked for. They like having the name of something Big, something their bosses might have heard of, in their story. Big Projects also provide a lot more in the way of managerial control. A manager can’t control the “chaotic”, often as-needed, growth of small-program development, whereas directing a Big Project is relatively straightforward. Make it Object-Oriented, use this set of analytic tools, complete at least 20 of the 34 feature-request tickets in the queue (it doesn’t matter which 20, just do 20), and have it done by Friday.

Beginning around 1990, there was a pernicious attempt by software managers to commoditize programming talent. Enter C++, designed with unrelated intent, but a language that made big-program development seem palatable in the C world. C’s not high-level enough for most application development in 2012, but C++ is not the solution either (except in very specific cases in numeric programming, where templates allow very fast code to be written once for a range of numeric types, with no runtime performance overhead). For over 90 percent of the applications that use it, it’s the wrong language. Here’s the metaphor I like to use: assembly coding is infantry: as fine-grained as one could want to be, but slow (to write), lumbering along at 3.1 miles per hour. C is a tank. It’s robust, it’s powerful, and it’s extremely impressive and well-designed, but it moves at ground speeds: about 50 miles per hour. That’s what it’s designed to do. Languages like Python and Ocaml are airplanes– very fast, from a development perspective, but not fine-grained at all. C++ exists because someone had a fever dream in which these two classes of vehicles got mixed up and thought, “I’m going to put wings on a fucking tank”. The drag and awkwardness imposed by the wings made it terrible as a tank, but it doesn’t fly well either. Java was invented after a few horrible tank-plane crashes, the realization being that the things are too powerful and fly too fast. It’s a similarly ridiculous “tank-icopter”. It’s not as fast as the tank-plane, and few people enjoy flying it, but it’s less likely to kill people.

It’s not that C++ or Java, as languages, are evil. They’re not. Languages are just tools. Java was designed to run in embedded systems like automatic coffee pots and cable-TV “set top boxes”, so closures were cut for time in the first release, because these use cases don’t require high-level programming features. I haven’t map-reduced a toaster cluster for years. Contrary to popular history, Java wasn’t designed with an ideological, “enterprise” distrust of the programmer. That trend, in the Java community, came later, when the 1990s attempt to commoditize programming talent (a dismal FailureFactory) co-opted the language.

Java and C++ became languages in which “object-oriented programming” was the norm. The problem is that what is currently called OOP is nothing like Alan Kay’s vision. His inspiration was the cell, which hides (encapsulates) immense mechanical complexity behind a simpler interface of chemical and electrical signals. The idea was that, when one needs complexity, simpler interfaces are invaluable, and that complex systems generally should have comprehensible interfaces. Kay was not saying, “go out there and create giant objects” or “use object-oriented programming everywhere”. He was attempting to provide tools for dealing with complexity when it becomes inevitable. Unfortunately, a generation of software managers took “object-oriented” magic and immodularity as virtues. This is similar to the “waterfall” software methodology, named by a person clearly stating it was the worst idea, and yet taken by suits as having been “recommended by some smart guy” once it was given a name.

What’s wrong with Big Project development? First, it encourages reliance on internal vaporware. Important work can be delayed “until Magneto is done”. When Magneto is done, that will solve all our problems. This works for managers seeking to allocate blame for slow progress– that fucking Magneto team can never get their shit done on time– but it’s a really bad way to structure software. In the small-program model, useful software is being continually released, and even an initiative that fails will provide useful tools. In the big-program arena, the project is either delivered in toto or not at all. What if half the Magneto team quits? What if the project fails for other reasons? Then Magneto est perdu. It’s a huge technical risk.

[ETA: when I wrote this essay, I was unaware that a company named Magneto existed. “Magneto”, in this essay, was a potential name for a large project in a hypothetical software company. There is absolutely no connection between these project names used here and real companies. Consider names like “Cindarella” and “Visigoth” to be gensyms.]

Second, under a big-program regime, people are trackable, because most programmers are “on” a single program. This is also something that managerial dinosaurs love, because it provides implicit time-tracking. Mark is on Cindarella, which has a headcount of 3. Sally is on Visigoth, which has a headcount of 5. Alan is on the project that used to be called 4:20 until Corporate said that name wasn’t okay and that now can’t decide if it wants to be called 4:21 or BikeShed.

What this kills, however, is extra-hierarchical collaboration– the lifeblood of a decent company, despite managerial objections. In a small-program software environment, people frequently help out other teams and contribute to a number of efforts. A person might be involved in more than 20 programs, and take ownership of quite a few, in a year. Those programs end up having general company-wide use, and that creates a lot of cross-hierarchical relationships. MBA dinosaurs, alas, hate cross-hierarchical, unmetered work. That kind of work is impossible to measure, and a tightly-connected company makes it hard to fire people. On the other hand, if a programmer only works on one Big Project at a time, and has no other interaction with other teams, it’s much easier to “reduce headcount”.

The third problem with big-program methodology is that it’s inefficient. Six months can be spent “on-boarding” an engineer into the complexities of the massive Big Project he’s been assigned to work on. In light of the average job lasting two to three years, that’s just intolerable. The on-boarding problem also restricts internal mobility. If someone is a bad fit for his first Big Project, moving him to another means he could spend up to a year just on-boarding, accomplishing zilch. Transfers become unacceptable, from a business perspective, so people who don’t fit well with their first projects are Just Fucked.

Why is the on-boarding problem so severe? Big Projects, like large Java programs in general, tend to turn into shitty, ad-hoc domain-specific languages (DSLs). Greenspun’s Tenth Rule sets in, because programmers tend to compensate for underpowered tools by adding power, but in hasty ways, to the ones they have. These projects end up developing a terminology that no one outside the team understands. They become systems in which understanding any part requires understanding the whole. This means that people hired to modify or expand these Big Projects have to spend months understanding the existing system, whose intricacies are only known to a few people in the company, before they can accomplish anything.

The fourth problem with the big-program methodology is the titular Java Shop Politics. In a small-program development environment, engineers write programs. Plural. Several. An engineer can be judged based on whether he or she writes high-quality software that is useful to other people in the organization, and this knowledge (talent discovery) is redundant throughout the organization because the engineer is continually writing good code for a wide array of people. What this means is that technology companies can have the lightweight political environment to which they claim to aspire, in which a person’s clout is a product of (visible) contribution.

On the other hand, in a big-program shop, an engineer only works on one Project, and that project is often a full-time effort for many people. Most people in the company– managers least of all– have no idea whether an individual engineer is contributing appropriately. If John is chugging away at 5 LoC per day on Lorax, is that because he sucks, because the team failed to on-board him, or because Lorax is a badly structured project? In a small-program environment where John could establish himself, such a question could be answered. The good programmers and bad projects (and vice versa) could be objectively discovered. In a big-program world, none of that will ever be known. The person with the most clout on Lorax, usually the technical lead, gets to make that assessment. Naturally, he’s going to choose the theory that benefits him, not John. He’ll never admit that Lorax is badly designed or, worse yet, was a mistake in the first place. So under the bus John goes.

In general, it is this fog of war that creates “office politics”. When no one knows who the good and bad contributors are, there are major incentives toward social manipulation. Eventually, these manipulations take more of people’s emotional energy than the actual work, and the quality of the latter declines. This is not limited to technology; it’s the norm in white-collar environments. Small-program development is an antidote. Large-program development accelerates (but does not necessarily cause) the poison.

The solution to this is simple: Don’t become a Java Shop. I’m not saying that Java or the Java Virtual Machine (JVM) is evil. Far from it: I think Clojure and Scala (which run on the JVM) are excellent languages, and the JVM itself is a great piece of software. Writing an occasional (small) Java or C++ program won’t destroy a company, obviously. On the other hand, fully becoming a Java Shop (where the vast majority of development is done on large Java programs, and where success relies on understanding defective “design patterns” and flawed “best practices” instead of being able to code) will. There is no avoiding this; politically speaking, Java Shops go straight down. The quality of engineering, predictably, follows.

The company I discussed earlier, which was one of this country’s most promising startups before this happened, did become a Java Shop, despite furious protest from existing talent. Within weeks, the politics of the organization became toxic. There was an “old team” that adhered to the Unix philosophy, and a “new team” that was all-Java. (No Scala or Clojure; Scala was flirted with and had serious managerial support at first, but they killed their Scala efforts for being “too close” to the old team.) This old/new cleavage ruptured the company, and led to an epic talent bleed– a small company, it lost several engineers in a month. Java Shop Politics had arrived, and there was no turning back.

Here’s what that looks like. First, it’s not that Java (or C++) code is inherently evil. Any decent software engineer will have a passing competency in half a dozen languages (or more) by my age, and these languages are worth knowing. Some Clojure and Scala programs require classes to be written in Java for performance. That’s fine. Where a company starts to slide is when it becomes clear that “the real code” is to be written in Java (or, as at Google, C++) and when, around the same time, big-program methodologies become the norm. This makes the company “feel” managerially simpler, because headcount and project efforts can be tracked, but it also means that the company has lost touch with the ability to assess or direct individual contribution. Worse yet, because big-program methodologies and immodular projects are usually defective, the usual result is that people with bad tastes thrive. Then the company becomes less like a software enterprise and more like a typical corporation, in which all the important decisions are made by the wrong people and decline becomes inevitable.

“Fail fast” is not an excuse for being a moron, a flake, or a scumbag.

I wrote before on technology’s ethical crisis, a behavioral devolution that’s left me rather disgusted with the society and culture of venture-funded technology startups, also known as “VC-istan”. There are a lot of problems with the venture-funded technology industry, and I only covered a few of them in that post. Barely addressed was the fact that so much of what we do is socially worthless bubble bullshit, like Zynga– which, in my mind, only proves that a company can be taken seriously even when its name sounds like 4th-grade anatomical slang. Most of us in venture-funded technology are merely bankers, except for the distinction that we buy and sell internet ads instead of securities. This world of crappy imitations and bad ideas exists because there’s a class of entrepreneurs (who are well-liked by venture capitalists) who’ve become convinced that “the idea doesn’t matter”. That’s ridiculous! It’s good to pivot, and sometimes one has to change or abandon an idea to survive, but ideas and purposes do matter. When this fast-and-loose attitude is taken toward ideas, the result is that stupid ideas get lots of funding. That’s unpleasant to look at, but it doesn’t have the moral weight of some of VC-istan’s deeper problems, which I’ve already addressed. To dig into those, I think we have to look at a two-word good idea taken too far, and in horribly wrong directions: fail fast.

As a systems engineering term, “fail-fast” is the principle that a failing component should report failure, and stop operation, immediately, rather than attempting to continue in spite of its malfunction. The diametric opposite of this is “silent failure”, which is almost always undesirable. In software engineering, it’s generally understood that an average runtime bug is 10 times as costly as one found in the compilation process, and that a “do-the-wrong-thing” silent bug can be 10 to 1000 times more costly than one that throws a visible error at runtime. Redundant systems are usually preferable because components can fail (and they will, for causes ranging from programming errors to hardware defects to data corruption caused by cosmic rays) without bringing the whole system down, and in such systems, having a dysfunctional component halt quickly is usually the desired behavior.

In the systems case, it’s important to look at what “fail” and “fast” mean. Fail means to stop operation once there is a detected possibility of erroneous behavior. Fast means to report the failure as soon as possible. Whether it’s a bug in software or a defect in a manufacturing process, it’s always astronomically cheaper to fix it earlier rather than later. The idea isn’t to glorify failure. It’s an acknowledgment that failure happens, and it’s a strategy for addressing it. Fail fast doesn’t mean “make things unreliable”. It means “be prepared for unexpected wrongness, and ready to fix it immediately”.
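A minimal OCaml sketch of the contrast (the parser and its error message are invented for illustration): the silent version coerces bad input into a plausible-looking default, while the fail-fast version reports the problem at the point where it is cheapest to diagnose.

```ocaml
(* Silent failure: a bad reading is quietly coerced to a default, and the
   wrong answer propagates downstream, where it is far costlier to trace. *)
let parse_reading_silent s =
  match float_of_string_opt s with
  | Some x -> x
  | None -> 0.0

(* Fail-fast: the component reports the problem immediately. *)
let parse_reading s =
  match float_of_string_opt s with
  | Some x -> x
  | None -> invalid_arg (Printf.sprintf "parse_reading: bad input %S" s)
```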

In VC-istan, “fail fast” is an attitude taken toward business, in which failure becomes almost a badge of honor. I believe this is intended as an antidote for the far more typical and pernicious attitude toward business failure, which is to personalize and stigmatize it, as seen in “middle America” and most of Europe. I’ll agree that I prefer the fail-fast attitude over the paralyzing risk aversion of most of the world. The reason Silicon Valley is able to generate technological innovation at a rate faster than any other place is this lack of stigma against good-faith failure. On the other hand, I find the cavalier attitude toward failure to often veer into frank irresponsibility, and that’s what I want to address.

The typical VC startup founder is rich. Without inherited connections, it takes about twelve months’ worth of work without a salary to produce something that VCs will even look at. (With such connections, VC mentoring comes immediately and a fundable product can be built within about half that time.) Even for the rich and well-connected, it’s dicey. VC acceptance rates are typically below 1 percent, so a lot of good ideas are being rejected, even coming from well-positioned people. Raising money is always hard, but for people who aren’t wealthy, the risk is generally intolerable: twelve months without salary and a high likelihood that it will amount to zilch. Why’s this relevant? Because rich people can afford a cavalier attitude toward failure. Losing a job just means moving vacations around. If one company dies, another can be built.

In an ideal world, everyone would be rich, by which I mean that material limits wouldn’t dominate people’s lives and their work in the way they do now. This would be a world of such abundance as to implicitly provide the safety associated with socialism, without the drawbacks, and in which poverty would be eliminated as thoroughly as smallpox. I believe humanity will reach a state like this, but probably not until the end of my lifetime, if not some time after I’m dead. In this “post-scarcity” world, libertarian capitalism would actually be a great system (and so it’s easy to see why out-of-touch rich people like it so much). Business failure would just be the impersonal death of bad ideas, resources would quickly be allocated to the good ones, and people would rise into and fall out of leadership positions as appropriate but could gracefully decline when not needed, rather than having to fire their help, pull their kids out of college, or move halfway across the country when this happens. If everyone were rich, libertarian capitalism would be a wonderful economic system. However, we don’t live in an ideal world. We have to make do with what we have.

In the real world, failure hurts people, and most of those people aren’t 23-year-olds with $5-million trust funds. Investors (not all of whom are rich) lose large amounts of money, and employees get fired, often without notice or severance. Careers of innocent people can be damaged. This doesn’t mean that failure is morally unacceptable. Good-faith failure must be accepted, because if failure leads to broad-based social rejection, you end up with a society where no one takes real risk and no advancement occurs. This isn’t an abstract danger. It’s something that most people see every single fucking day in the typical corporate workplace: a bland, risk-intolerant environment where people are so afraid of social rejection that they torture themselves in order to seem busy and important, but no one is taking creative risks, and real work isn’t getting done. So my attitude toward those who take risk and fail in good faith is one of empathy and, sometimes, admiration. I’ve been there. It happens to almost everyone who wants to accomplish something in this world.

My issue with “fail fast”, and the more general cavalier attitude toward business failure observed in VC-istan, is that people who espouse this mantra generally step outside the bounds of good-faith failure, responsible risk-taking, and ethical behavior. When you take millions of dollars of someone else’s money, you should try really fucking hard not to fail. It’s a basic ethical responsibility not to let others depend on you unless you will do your best not to let them down. You should put your all into the fight. If you give it your best and don’t make it, you’ve learned a lot on someone else’s dime. That’s fine. The problem with “fail fast” is that it sounds to me a lot like “give up early, when shit gets hard”. People with that attitude will never achieve anything.

Usually, the worst “fail fast” ethical transgressions are against employees rather than investors. Investors have rights. Dilute their equity in an unfair way, and a lawsuit ensues. Throw the business away recklessly, and end up in court– possibly in jail. One can’t easily fire an investor either; at the least, one has to give the money back. On the other hand, a remnant of the flat-out elitist, aristocratic mindset that we have to kill the shit out of every couple hundred years (cf. French Revolution) is the concept that investors, socially speaking, deserve to outrank employees. This is absurd and disgusting because employees are the most important actual investors, by far, in a technology company. Money investors are just putting in funds (and, in the case of VC, money that belongs to other people). They deserve basic respect of their interests for this, but it shouldn’t qualify them (as it does) to make most of the important decisions. Employees, for contrast, are investing their time, careers, creative energy, and raw effort, often for pay that is a small fraction of the value they add. Morally speaking, it means they’re putting a lot more into the venture.

I’ve seen too many sociopaths using “fail fast” rhetoric to justify their irresponsible risk-taking. One example of a fail-fast acolyte is someone in his mid-20s whom I once saw manage the technical organization of an important company. I won’t get into too many details, but it’s an ongoing and catastrophic failure, and although it’s evident to me at least (because I’ve seen this shit before) that he is personally headed toward disaster, it’s not clear whether the company will follow him down the drain. (That company is in serious danger of failing an important deliverable because of decisions he made.) I hope it doesn’t. First, he took a scorched earth policy toward the existing code, which was written under tight deadline pressure. (Despite this twerp’s claims to the contrary about the “old team”, the engineers who wrote it were excellent, and the code quality problems were a direct result of the deadline pressure.) I don’t consider that decision an unusual moral failure on his part. Give a 25-year-old programmer the authority to burn a bunch of difficult legacy code and he usually will. At that age, I probably would have done so as well. That’s one very good reason not to give snot-nosed kids the reins to important companies without close supervision. I remember being 18 and thinking I knew everything. A decade later… turns out I really didn’t. Taken too far, the “fail fast” mentality appeals to impulsive young males who enjoy waving a gun around and shooting at things they can’t see and don’t understand.

My second encounter with this person’s “fail fast” sociopathy was in a discussion of hiring strategy, in which he discussed building “30/60/90 plans” for new hires, which would entail milestones that new employees would be expected to meet. As a way of setting guidelines, this is not a bad idea. Technology workplaces are a bit too dynamic for people to actually know what a person’s priorities should be three months in advance, but it’s always good to have a default plan and baseline expectations. New hires typically come on board, in a chaotic environment, not knowing what’s expected or how to “on-board”, and a bit of structure is useful. This little sociopath wanted to take things a bit further. He thought it would be a good idea to fire people immediately if they missed the targets. New hire takes 35 days to meet the 30-day goal? Gone, after one month. No chance to move to another part of the organization, no opportunity to improve, no notice, no severance, and it’s all made “fair” by putting all new hires on a PIP from the outset. I’m pretty sure, I’ll note, that this young twerp has never been fired himself– and my money’s on him being three to twelve months away from his first experience with it, depending on how fast he can learn that primary executive skill of shifting blame, and how long he can run it. These sorts of terrible ideas emerge when people are permitted to take irresponsible risks with others’ careers. Most of the damaging HR “innovations” companies invent (which become tomorrow’s morale-damaging bureaucratic cruft) occur not because they’re good ideas for the company, but because people within these companies want to propose wacky ideas that affect other people, in the hope that some “greater fool” in upper management will see the half-baked concept as “visionary” and promote the person who invented it, regardless of the idea’s lack of merit. That’s how Google’s douche-tsunami (douchenami?) system of stack-ranking and “calibration scores”, for just one example, was born.

I don’t like people who are cavalier about failure when they haven’t been on the other side of it, either as an investor who lost a large sum of money, or as a laid-off or unjustly-fired employee. To put it plainly and simply: “failing fast” with other people’s risk is not courage. I say this as someone who has taken a lot of risks and failed a few times, who has always accepted the consequences of what he has started, and who has always done everything possible to make sure that anyone taking a risk with me knows what he or she is getting into.

I’m going to advise something altogether different from “fail fast”, because the term “fast” has chronological implications that I don’t find useful. Protracted failures driven by denial are bad, sure. I agree with that aspect of “fast”, but people should try to avoid failure if they can, rather than jumping immediately to declare defeat and move on to a sexier prospect. Fail safely or, at least, smartly. Know what the risks are, disclose them to those who are taking them, and be prepared to address failures that occur. There are cases where chronologically fast failure is appropriate, and there are times when it is not. Largely, the ethics of this come down to what risks the involved parties have agreed to take. People who invest in a startup accept the risk of losing the entire investment in a good-faith business failure, but they don’t accept the risk that the founder will just give up or do something overtly unethical with the money (bad-faith failure). Employees in startups accept the risk of losing their jobs immediately, without severance, if the company goes out of business; but if they’re misled about how much runway the company has, they’ve been wronged.

The ethics of “fail fast” depend largely on the explicit and implicit contracts surrounding failure: how failure is defined, and how it is to be handled. These are conversations people don’t like having, but they’re extremely important. Failures happen. Often these contracts are left implicit. For example, a person who joins a five-person company accepts that if he doesn’t fit well with the project (because a startup of that size only has one project) his employment must end. More than that, being a founder means that one will be (and should be) fired immediately if one doesn’t work well with the rest of the team, just as being an elected official means one accepts the risk of being fired for being unpopular. On the other hand, a person who joins a more stable, large company does so with the expectation of risk mitigation. Specifically, people join large companies with the understanding that being a poor fit for one’s initial project doesn’t mean leaving the company. The additional robustness of career is a primary incentive for people to join huge companies. Therefore, large companies that impede internal mobility, usually under a pretense of objectivity in the performance-review process, are deeply unethical, and their reputations should be tarnished gleefully and often, in order to prevent others in the future from being blown up by undisclosed risks.

The “fail fast” mantra implies that failure is hard, and that it takes a certain fortitude to look failure in the eye and accept the risk. Alone, that’s not hard. Lying down is easy. Quitting on someone else’s risk and dime is not hard. Letting people down is not hard. The hard part is communicating risks as they actually are to people before they get involved, finding people willing to take those risks, working as hard as possible not to let people down, and working even harder to help everyone recover from the loss should failure occur.

An ethical crisis in technology

Something I’ve noticed over the past few years is how outright unethical people are becoming in the technology business. I can imagine the reply. “Bad ethics in business; you mean that’s news?” Sure, people have done bad things for money for as long as there has been money. I get that. The difference that I sense is that there doesn’t seem to be much shame in being unethical. People are becoming proud of it, and a lot of our industry’s perceived leaders are openly scummy, and that’s dangerous.

An example is Mark Pincus, who prides himself on having done sleazy things to establish his career, and who moved to deprive his employees of equity by threatening to fire them if they didn’t give it back. When he was called on this, rather than admit to minority shareholder oppression, he went on a tirade about not wanting to have a “Google chef”, referring to the first cook at Google, who earned over $20 million. In his mind, blue-collar workers don’t deserve to get rich when they take risks.

This is bad for startups. Equity is the only thing pre-funded startups have to attract talent. These types of shenanigans will create an environment where no one is willing to work for equity. That is often the externalized cost of unethical behavior: it doesn’t hurt only the “victim”, but it harms all the honest players out there, who end up trusted less.

I will state that what appears in the news is only the tip of the iceberg. Here’s some shit I’ve either seen, or been credibly informed of, in the past 24 months, most of which was never in the news: no-poach agreements, attempted blacklisting of whistleblowers, a rescinded job offer based on a rumor that suggested PTSD, abuse of process within large companies, extortion of ex-employees, gross breaches of contract, frivolous lawsuits, threats of frivolous lawsuits, price fixing among venture capitalists, bait-and-switch hiring tactics, retaliatory termination, and fraudulent, no-product startups designed to embezzle angel investors. That took me about 60 seconds; two minutes more and the list would be three times as long. None of this was in finance: all tech, with most of these pertaining to reputable companies. I’m not an insider. I’m no one special. If I’m seeing these behaviors, then a lot of people are, and if a lot of people are seeing them, it means that a lot of unethical things are happening in a sector of the economy (technology) known for good behavior and a progressive mindset.

This is just the first act. This is what it looks like when the economy is doing well, as it is in technology. The wronged move on. Their jobs may end and their equity may be stolen, but they move on to better opportunities. Those who’ve entered criminal patterns in order to keep up with expectations can still break out of them, if they do so now, without spiraling straight down. We’re not seeing the lawsuits, the disclosures of misconduct, the bitter fights and the epic crimes yet. At some point, presumably in a worse economic environment than what we have now, that will come. When it does, the results will be terrifying, because the reputation of who we are, as an industry, and what we do is at stake.

People, in the U.S., have developed a reflexive dislike for “finance” and “Wall Street”. The financial industry has certainly earned much of its negative reputation, but finance isn’t innately bad (unless one believes capitalism to be evil, which I don’t). Most of finance is just boring. I would also caution us against believing that “technology”– this brave new world of venture capital and startups and 25-year-old billionaires– is incapable of developing such a negative reputation. A few bad actors will give us all a bad name.

In finance, most of the unethical behaviors that occur have been tried so many times that laws exist to discourage them. There are problems of lax enforcement, and too often there is frank regulatory corruption, but at least there is clarity on a few basic things. One example: you don’t front-run your customers, and you will go to jail if you do. In addition to legal pressure from without, finance has imposed regulations on itself, in part, to regain its reputation. Self-regulatory organizations like the New York Stock Exchange have barred people from the industry for life over the worst offenses.

The ethical failures in technology have a different, and more intimate, character than those in finance. Financial crimes usually cause the loss of money. That’s bad. Sometimes it’s catastrophic. What makes financial crashes especially newsworthy is the sheer number of people they affect. Nearly everyone was affected by the late-2000s property bubble, for example. The recent spate of ethical lapses in technology is of a more focused nature. These lapses don’t inflict their losses on thousands of people, but they damage careers. The most common example that I’ve seen would be bait-and-switch hiring, where a person is brought on board with the promise of one type of project and given another. There is no legal recourse in this case, and there are lots of other ethical lapses that have similar effects. These activities waste the time of highly talented people in fruitless relationships, and often on pointless work.

In technology, we haven’t figured out how to regulate ourselves, and we’re risking the reputation of our industry. Too much depends on us to allow this. With the aging population, the depletion of fossil fuels, and the exigent need to move toward an environmentally sustainable economy, we’re just too important to the world for us to take a dive.

One might argue, in response to that claim, that most of what comes out of VC-istan isn’t “real technology”, and I’d agree. Venture capitalists may love “semantic sheep-throwing coupon social network” build-to-flip startups, but those don’t have much social or scientific value. For what it’s worth, most of the unethical activity I’ve seen comes from the “fake technology” companies, but not all of it. Either way, few people outside the industry make this distinction, and I wouldn’t count on them starting to.

Who has the authority to address this problem? In my opinion, it’s an issue of leadership, and the leaders in technology are those who fund it: the venture capitalists. I’m not going to assert that they’re unethical, because I don’t know enough about them or their industry to make such a claim. I do, however, think they encourage a lot of unethical behavior.

What causes the ethical compromise that occurs commonly in the financial industry? My opinion is that it’s proximity to money, especially unearned money. When working for clients with $250 million in net worth, who often inherited it, people begin to feel that they deserve to get rich as well. It’s human nature. The cooks feel entitled to some of the food. Some people in that industry just take that mentality too far and begin committing crimes. I don’t think the problem with finance is that it attracts scummy people. I think it tempts them to do scummy things.

The sociology of venture-funded startups is similar. The entire funding process, with its obscene duration measured in months, with terms like multiple liquidation preferences and participating preferred, and with the entrepreneur expected to pay the VCs’ legal fees– I am not making that up– is based on the premise that MBA-toting venture capitalists are simply Better Than You. Venture capitalists, in no uncertain terms, outrank entrepreneurs, even though the jobs are entirely different and I would argue that the entrepreneur’s job is a hundred times harder. Among entrepreneurs, there are Those Who Have Completed An Exit, and there are the rest. It’s not good to be among “the rest”; people can dismiss you as having “no track record”, which is a polite way to call someone a born loser. Among that set, within funded startups, are Founders and “founder-track” employees– protégés invited into investor meetings so they might become “Founder material” in the future– and then there’s everyone else, the fools who keep the damn thing going. It seems like a meritocracy, but it’s the same social climbing bullshit found in any other industry. The meritocratic part comes from what one does once one has resources, but to get the resources one usually needs a full-time devotion to social climbing. There are exceptions, and incubators are making this situation better, but not that many of them.

Venture capitalists may not all be unethical, but they’re not ethical leaders either. They demonstrate this lack of leadership through onerous terms, malicious collusion, and a general attitude that the entrepreneur is a desperate huckster, not a real partner. The Better Than You attitude they cop is intended to make people feel hungry, to make them want to get to the point where they “deserve” the company of venture capitalists, but it makes them act desperate instead. Does this lead to unethical behavior? Sometimes yes, sometimes no. Even when it doesn’t, it still produces ethical ruin in the form of hasty, inappropriate promotions, which lead to the same kinds of behavior in the long run.

In other words, this ethics problem is not just limited to “a few bad apples”. Culpability, in my mind, goes straight to the top.

HR’s broken: if Performance Improvement Plans don’t, what does?

I wrote a bit in a previous essay, on how and why companies fire people, about why “Performance Improvement Plans” (PIPs) don’t actually have the titular effect of improving performance. Their well-understood purpose is not that, but to create “documentation” before firing someone. Why do they exist? Because companies prefer them over severance payments. Severance isn’t a legal obligation, but it’s something companies do to reduce the risks associated with firing employees. Despite what is said about “at-will” employment, termination law is so complex (as it needs to be), and has so many special cases, that outside of ironclad situations the employer runs some risk of either losing a lawsuit or of winning it but damaging its reputation in the process. Perhaps more importantly, because lawsuits are expensive and time-consuming but PR attacks are cheap, severance payments exist to prevent disparagement by the employee. (Warning: never explicitly threaten to disparage a company or reveal damaging information in a severance negotiation. That’s illegal. Don’t threaten legal action either, because it will bring the company’s attorneys into the negotiation, and they are better negotiators than the people you’d otherwise be dealing with. Best, for a start, is to list what the company has done wrong without suggesting your course of action, whether it be a lawsuit, disparagement, or a talent raid. If you want to disparage your ex-employer should the negotiation fall through, that’s fine. Threatening to do so in the context of a financial negotiation is illegal. Don’t do it.) Predictably, companies would prefer not to cut severance checks for fired employees, while still mitigating the risk of being pursued afterward. That’s where the PIP comes in. It’s “documentation” that the employee was fired for performance reasons, intended to make him think he has no recourse.

If an employee is fired for objective, performance-based reasons, then he has no legal claim. He couldn’t do the job, which means he’s eligible for unemployment insurance but not a judgment against the employer. This is relatively easy to prove if the work is objectively measurable, as in many blue-collar jobs. However, most jurisdictions also enable an employee to seek recourse if he can establish that a lower performer was retained. If Bob is fired for producing only 135 widgets per hour (compared to a requirement of 150) while Alan, the boss’s son, keeps his job while delivering 130, then Bob can contest the termination and win. But if Bob was the only person below that standard, he can’t. Also, if Bob can establish that his low performance was caused by bad behavior from others, such as his manager, or that he was unfairly evaluated, he has a claim. This defense rarely works in objective, physical labor, but can be played pretty easily in a white-collar context (where work performance is more sensitive to emotional distress) and, even if the employer wins the lawsuit, it comes off looking bad enough that companies would prefer to settle. It is, of course, impossible to objectively define productivity or performance for white-collar work, especially because people are invariably working on totally different projects. What this means is that an “airtight” performance case for a termination is pretty much impossible to create in a white-collar environment. This is what severance contracts, which usually entail the right to represent oneself as employed, a positive reference, and enough money to cover the expected duration of the job search, are for: to give the person what’s necessary to transition to the next job, and to leave the person feeling treated well by the company. PIPs are seen as a “cheaper” way to get rid of the employee. Instead of cutting a 3-month severance check, keep him around on make-work for a month and then cold-fire him.

I’m not a lawyer, but I don’t think that a PIP does much to reduce lawsuit risk, because a PIP applied in bad faith can be challenged just as easily as a bad-faith termination. Most PIPs contain so many factual inaccuracies that I wouldn’t be surprised to learn that they weakened the employer’s case. So why do they really exist? It’s not to improve performance, because by the time the employee and manager are documenting each other’s minor mistakes in anticipation of a lawsuit, the relationship is long past over; nor is the real purpose to strengthen the employer’s case in court. The actual purpose of the PIP is to reduce the likelihood of ever going to court by making the target feel really, really shitty.

People tend to pursue injustices when they perceive a moral asymmetry between themselves and an opponent: the pursuer feels that he was right and that the opponent was wrong. On an emotional level, the purpose of severance payments is to make people feel good about the company, so people look back on their experience and think, “It’s unfortunate that it didn’t work out, but they treated me well up to the end, it didn’t hurt my savings, and I got a better job”. They won’t react, because they don’t feel wronged. The purpose of the PIP is to go the other way and make the employee feel bad about himself. That’s all. Most PIPs are so badly drawn as to be useless in court, but if the employee is made to feel like a genuine loser, he might slink away in shame without raising a challenge– either in court or in the court of public opinion. Regarding the latter, the PIP makes it seem as if the company “has something” on the employee that could be used against him in the future. “Don’t ask for severance and we won’t show this PIP to anyone.” That, by the way, is extortion on the part of the company, but that’s a discussion for another time.

Another function of PIPs is to obscure the real reason for a termination. Some percentage of terminations are for objective performance or ethical reasons where it’s obvious that the person had to be fired. The employee has no legal case, and would embarrass himself even to bring the matter up. Those cases are uncommon in technology, where performance can be very context-sensitive and truly incompetent people are rare. Some other percentage of terminations occur because of discrimination, or in retaliation for legally protected activity. Those are also rare, noting that “retaliation” in the legal context has a specific, conservatively-interpreted meaning. The vast middle (between the “performance” and “retaliatory” extremes) is what we might call political terminations. There’s some disagreement about how people should be working or how priorities should be set, and there’s possibly a personality conflict, and a person either with power or with access to those in power decides to remove a “troublemaker”. In this middling 80 to 90 percent, it’s impossible to pick apart who’s in the wrong. The employee? The manager? The team? The HR department? Possibly all of them, possibly none of them, usually some of them. Sometimes these disagreements are remediable, and sometimes they get to a point where (right or wrong) the employee must be let go. A severance payment allows the company to do this in a way that leaves most parties (except the finance department, annoyed at paying 3 months’ salary to fired employees) satisfied. The alternative is the PIP, which involves pretending the problem is an objective performance issue, and hoping the employee will believe it.

A PIP is unfair to pretty much everyone– except the finance department, which can claim it “saved money” on severance payments. As I’ve said, PIPs are pretty much final: they destroy the relationship. The PIP’d employee has to come to work for a manager who, in his mind, has just fired him. The manager has to go through the motions of this kangaroo court, and his ass is on the line if he makes any mistake that increases the firm’s legal risk (which is not hard to do), so he resents the employee in a major way. The rest of the team has to put up with a disgruntled employee who is now pretty much useless, splitting his effort between the PIP make-work and his job search. In short, someone in HR or finance gets to look good by “saving” a few thousand dollars while externalizing the costs to the target’s team.

A PIP is a threat to fire someone, and threats are almost always counterproductive on either side of a negotiation. By the time a PIP is even on the table, the employee should just be fired. Same day. Write a contract that gives him a severance check in exchange for an agreement not to sue or disparage the company, and let everyone move the fuck on. No CYA “documentation”. You’ve made your decision to separate. Now execute it, but do it well and do it fairly.

I’m going to step away from all this nastiness for a bit, because the vast majority of employees aren’t intentional low-performers, most managers aren’t jerks, and I’d like to believe that most companies aren’t driven by bean counters in HR suites. Let’s put a positive spin on it: what should a manager do if he genuinely wants to improve an employee’s performance, behavior, or impact? Although formal PIPs are toxic, the continual process of improving performance is one in which manager and employee should always have an interest, whether that employee is a 1st- or 7th- or 10th-decile performer. Who doesn’t want to become better at his job? What manager doesn’t want his team to be better? Performance improvement is actually something people should be doing at all times, not just in times of crisis.

First, it’s important to get terminology right. Many technical organizations like to be “lean” or “flat”, which means that a manager has 10 to 50 reports instead of the traditional 3 to 5. If a manager has more than five reports, he can’t possibly evaluate each of them for performance; all he can observe is results. So when someone seems to be struggling, there isn’t a known performance problem. There’s a known impact problem, which might or might not be a problem of individual performance. If it’s not, and the problem is presented as a “performance” issue, the employee is going to hate the manager’s guts for (a) not seeing what is really going on, and (b) throwing him under the bus before understanding the issue.

Managers are typically afraid to investigate the real causes of impact problems. There are two reasons for this. The first is that the cause is almost always unpleasant, once discovered. The employee might simply be unable to do the job, which means he must be fired. Not pleasant. The cause might be something like a health issue. Even more unpleasant, and it legally complicates any termination process. Most often, the cause of the problem– the “blocker”– is either a mistake made by the manager or a problem caused by someone else on the team, usually a person of high influence whom the manager likes– a “young wolf”. That’s extremely unpleasant, because it requires the manager either to admit his own mistake, or to do a lot of work to rectify the problem. For this reason, managers typically don’t want to peer under this rock until their bosses, or the HR department, force their hand. Most managers aren’t malevolent, but they’re as reluctant as anyone else to take on grueling, unpleasant work that consists largely of confronting and repairing failed relationships, and they’d rather the problem just go away.

The second reason why managers rarely investigate low-impact employees is the convexity of the impact curve; in a typical organization, a 10th-decile employee might be a “10x” performer– that is, as valuable as ten average employees– while 8th decile is 3x, 6th is 1.5x, 4th is 0.5x, and 2nd is 0.25x. A manager gains a lot more by encouraging a 7th or 8th-decile performer to move up one decile than by bringing someone from the bottom into the 3rd- or 4th-decile. Of course, it’s possible that uncovering the “blocker” might move someone from the bottom into the upper-middle or even the top, but people are naturally distrustful of sudden moves. Even if removing the blocker puts the employee in the 9th or 10th-decile for personal performance, he’s extremely unlikely to do better than even the 4th-decile for impact, because his sudden change will be distrusted by the rest of the organization. Managers can’t easily mentor or sponsor people in this position either, since the rest of the team will see it as “rewarding failure” for the low-impact employee to receive disproportionate managerial attention or support. Right or wrong, most managers aren’t willing to risk their credibility in order to move someone from low impact to high.
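
To make the arithmetic concrete, here is a minimal sketch in Python. The multipliers for the even deciles come straight from the figures above; the odd-decile values are rough interpolations of my own, and the whole thing is only an illustration of the convexity argument, not a model of any real organization.

    # Toy model of the convex impact curve described above. Even-decile
    # multipliers are taken from the text; odd-decile values are rough,
    # illustrative interpolations.
    impact_by_decile = {
        1: 0.15, 2: 0.25, 3: 0.35, 4: 0.5, 5: 1.0,
        6: 1.5, 7: 2.0, 8: 3.0, 9: 6.0, 10: 10.0,
    }

    def gain(from_decile, to_decile):
        """Marginal impact gained by moving one employee between deciles."""
        return impact_by_decile[to_decile] - impact_by_decile[from_decile]

    # Helping a strong performer climb one decile...
    print(gain(8, 9))   # 3.0 "average employees" of additional impact

    # ...versus rescuing someone from the bottom of the curve:
    print(gain(2, 4))   # 0.25, for considerably more managerial effort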

So what should a manager do if he genuinely wants to improve an employee’s impact or performance? Let’s first address what not to do, aside from what has already been covered. First, written communication about any performance or impact issue is an absolute no-no. It’s an aggressive move, and will be taken as such. Sure, forgoing documentation reduces the employer’s leverage in severance negotiation. Who cares? That leverage is good for the finance department, but bad for the relationship between the employee and his manager, his team, and the company, and those relationships are what actually need repair. If this improvement process is done properly, then the severance conversation might never happen, which is what everyone should be aiming for. HR wants to cut people loose and to do so cheaply, but the manager should, at this point, still be trying to keep that from happening in the first place.

Second, managers should never disparage their employees, and should defend them to others. Any concerns should be addressed one-on-one and verbally. Managers tend (surprisingly) to be insecure, because they can steer the team but don’t drive it, and because they need high credibility with their team in order to be effective. This is precarious and leaves them with less power than one might expect. On the other hand, most managers have this credibility. They should use it to keep the employee’s reputation intact so that, if they do successfully intervene with the troubled employee and bring his performance up to par, his impact can also rise to that level.

There are two kinds of low-impact employees. The first are those whose approach is ineffective, and the second are those who aren’t managing themselves properly (i.e. aren’t getting any work done). For the first, the best words to use are “I wish you were doing this differently.” It’s firm, but objective. The manager isn’t saying, “You’re a fuckup”, which is going to lead to “No, I’m not”. He’s saying, “I understand what is effective and what is not in this organization, and you’d be more effective if you did this”. Since he’s the manager, that statement should (and almost always will) be taken seriously. The second case is harder, because it’s impossible to state the problem without offending the employee. It’s hard to uncover the cause of a motivational crisis when the Rules of Work require that the employee pretend there isn’t one. This requires a “Tell me what you need” discussion. It feels difficult, because it seems like there’s a power inversion. There isn’t. There’s mutuality. The employee’s job is to figure out what he needs from the manager in order to succeed at the company, to deliver if these requests are honored, and to consider finding another job (internally or externally) if they can’t be met. Unlike the PIP and its ceremony, this actually works, but it’s not a rapid process. HR wants to turn “low performers” into mid-grade meat or to ship them out within half a quarter. A “Tell me what you need” discussion occurs over months. Why? Because the real causes of an employee’s low impact are usually too complex to be remedied in 30 or even 90 days. For example, if the cause is a bad technical decision made from above that damages his ability to have an impact, the remedy is shifting the employee to a place where he’s less affected by it or can shine in spite of it. If it’s bad behavior from someone else on the team, I have to paraphrase Linus Torvalds: you now have two managerial problems. It’s fucking hard, but it’s what managers are supposed to do. It’s their job.

The goal of these discussions shouldn’t be to “improve performance” in some abstract, meaningless way. Turning an ineffective employee into a 3rd-decile nobody is wasted effort. You’ve turned someone you were about to fire into someone just effective enough to be hard to fire (without pissing others off). It’d make more sense to release him and hire someone new. So that goal makes no sense. The goal should be a process of discovery. Can this person become a major asset to this organization? If no, terminate employment, even if you really like him. Be nice about it, write a severance check, but fire and move on. If yes, proceed. How do we get there? What are the obstacles? If his reputation on his team is lost, consider a transfer. If he discovers he’d be more motivated doing a different kind of work, get him there.

This said, there are two “big picture” changes that are required to make managerial environments more stable and less prone to inadvertent toxicity and unexpected surprises. The first is that managers need to be given proper incentives. Rather than being rewarded for what they do for the company, managers are typically rewarded or punished based on the performance of their team, and their team alone. What this means is that managers have no incentive to allow outgoing transfers, which are good for the employee and the company but can be costly, in the short term, for the team. With these perverse incentives, it seems better for the manager to hit a high-potential, 3rd-decile performer with an intimidation PIP and capture the short-term “fear factor” bump (into the 4th- or 5th-decile) than it would be to let him find a role where he might hit the 8th decile or higher. Managers should receive bonuses based on the performance of outgoing transfers over the next 12 months, and these bonuses should be substantial, in order to offset the risk that managers take on when they allow reports to transfer.

The second problem is with the “lean” organizational model where a manager has 10 to 50 reports. It’s not that hierarchy is a good thing. It’s not, and a company that doesn’t have a large degree of extra-hierarchical collaboration (another process that most companies fail to reward) is doomed to failure. The problem is that conceptual hierarchy is a cognitive necessity, and a company that is going to attempt to assess individual performance must have processes that allow sane decision-making, which usually requires an organizational hierarchy. A manager with 25 reports can see who the high- and low-impact people are, but rarely has the time to assess causes on an individual basis. He has to delegate assessment of individual performance to people he trusts– his lieutenants, who are usually, for lack of better terminology, brown-nosing shitheads. This is a classic “young wolves” problem. These lieutenants rarely act in the interest of the manager or organization; on the contrary, they’ll often work aggressively to limit the impact of high-potential employees who might, in the future, become their competition. This is what makes “too nice” management fail, and it’s like the problem with right-libertarianism: a limited, hands-off government, managed poorly, allows an unchecked and more vicious corporate power to fill the vacuum. “Flat” organizations encourage unofficial hierarchies in which young wolves thrive. It’s better to have more hierarchy and get it right than to let thugs take over.

Another major problem is that the managerial role is overloaded. The manager is expected to be a mentor, a team-builder, and a boss, and those roles conflict. It would be hard to balance these obligations over a small number of reports, but with a large number, it’s impossible. Paradoxically, managers also have too much power and too little. They can make it impossible for a report to transfer, or destroy his reputation (since they always have more credibility than their reports), so they have unlimited power to make their reports’ work lives hell– in technology, this pattern is called “manager-as-SPOF”, SPOF meaning “single point of failure”, a potentially catastrophic weak point in a system– but they almost never have the power they actually need to get their jobs done. One example is performance reviews. Managers are completely fucked when it comes to reviewing low-impact employees. Writing a negative review is just as bad as a PIP, and makes it incredibly difficult for that employee to transfer or advance, even several years after the review. Writing a good review for a low-impact employee sends the wrong message, which is that his low performance is the norm, or that the manager is inattentive. Since most employees don’t like being low-impact and would rather have management turn its attention toward resolving their blockers, this also breeds resentment, in the same way that incompetent teachers who compensate using grade inflation are disliked. Another example is termination. Managers would rather terminate employees with a few months’ severance than go through the rigamarole of a PIP. It saves them the extra work and their teams the morale costs, and it has the same conclusion (the employee leaves) at, overall, less cost to the company. No sir, says HR: gotta write that PIP.

I haven’t done this matter justice, and I’ll probably have to come back to it at a later time, but I hope I’ve established not only the mechanisms by which managers might actually be able to improve the impact of their reports, but also the organizational problems that make it inevitable that there will be low-impact people, since the ultimate goal is not to “improve low performance” after there’s a problem, but to prevent low impact from happening in the first place.

Creativity and the nautilus

Creativity is one of those undeniably positive words that is reflexively associated with virtue, leadership, intelligence and industry (even though I’ve known some creative people who lack those traits). Yet, 28 years of experience has taught me that people don’t have much patience for creative people. We are like the nautilus, a strange-looking and, some would say, unattractive creature that leaves behind a beautiful shell. Most people are happy to have what we create, but would prefer never to see the marine animal that once lived inside the art. All they want to do is pick up the shell once it has been left on the shore.

Creativity isn’t inborn and immutable. It’s an attribute of which most people end up using (and, in the long term, retaining) less than 1 percent of their natural potential. It grows over time as a result of behaviors that most people find strange and potentially unpleasant. Creative people are curious, which means that they often seek information that others consider inappropriate for them to have. They’re passionate, which means they have strong opinions and will voice them. Neither of these traits is well-received in the typical conservative corporate environment. Worse yet, creative people are deeply anti-authoritarian, because it’s simply not possible to become and stay creative by just following orders, not when those orders impose compromise over a substantial fraction of one’s waking time. It never has been, and it never will be.

This doesn’t mean that creativity is only about free expression and flouting convention. Creativity has divergent and convergent components. An apprentice painter’s more abstract work may seem like “paint splatter”, but there’s a divergent value in this: she’s getting a sense of what randomness and play “look like”. She might do that in January. In February, she might do a still life, or an imitation of an existing classical piece. (Despite negative connotations of the word, imitation was an essential part of an artist’s education for hundreds of years.) It’s in March that she produces art of the highest value: something original, new, and playful (divergence) that leverages truth and value discovered by her predecessors, trimmed using rules (convergence) they teach. Computationally, one can think of the divergent component as free expansion and the convergent aspect as pruning. The convergent aspect of creativity requires a lot of discipline and work. Novelists refer to this process as “killing your darlings”, because it involves removing from a narrative all of the characters or themes that an artist inserts for personal (often emotional) reasons but that add little merit to the completed work. For technology, Steve Jobs summarized it in three words: “Real artists ship”. It’s intellectually taxing, and many people can’t do it.
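
For readers who think in code, the expand-and-prune framing can be made literal with a toy generate-and-prune loop, sketched below in Python. Everything in it (the branching factor, the stand-in scoring rule, the three cycles) is purely illustrative; it is not a model of how art gets made, only of the shape of the divergence/convergence cycle.

    import random

    # Illustrative generate-and-prune loop: divergence freely expands the pool
    # of candidate "ideas"; convergence scores them against a discipline and
    # keeps only the few that survive ("killing your darlings").

    def diverge(ideas, branching=4):
        """Free expansion: each idea spawns several playful variations."""
        return [idea + random.choice("abcdefg") for idea in ideas for _ in range(branching)]

    def converge(ideas, score, keep=3):
        """Pruning: apply a learned rule and discard everything that fails it."""
        return sorted(ideas, key=score, reverse=True)[:keep]

    pool = ["a"]
    for month in range(3):                       # January, February, March
        pool = diverge(pool)                     # play, imitation, expansion
        pool = converge(pool, score=lambda s: s.count("a"))  # a stand-in criterion
    print(pool)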

Convergent creativity is what our culture’s archetype of “the artist”, for the most part, misses. Youthful “experience chasing”, social climbing, extremes either of fame and wealth or of miserable, impoverished obscurity, all have some divergent value. On the other hand, that type of stock-character artist (or real Williamsburg hipster) has almost no hope of producing anything of artistic value. Divergence alone leads to mutation and insanity, not creation. This also explains why the highest-priced “modern art” is so terrible. We have a culture that worships power, money, and social access for their own sake and so the “brand-name artists” whose connections enable them to be paid for divergence (and divergence only) are treated as high priests by our supposed cultural leaders. The result of this is that terrible art sells for astronomical sums, while real artists often work in obscurity.

Most people, when they think of creative workers, whether we’re talking about writers or game designers or computer programmers, only seem to understand the divergent part. That gives them the impression that we’re a coddled bunch with easy jobs. We “get paid to have fun”. To quote Mad Men from the perspective of an “account man”, we’re “creative crybabies”. Bullshit. Creativity is rewarding and it can be a lot of fun, but it also requires an incredible amount of work, often at inconvenient hours and requiring very high levels of sustained effort. Wake up at 5:00 am with a great idea? Get to work, now. If that idea comes at 10:30 pm instead, stay up. Most of us work a thousand times harder than “executives”, the private-sector bureaucrats whose real jobs are white-collar social climbing.

As with the adiabatic cooling of an expanding gas, and the heating of a compressed one, divergence has a cooling effect while convergence is hot. Consider the first: free writing tends to calm the nerves and diminish anger. Improvisational art is deeply anxiolytic and invigorating. But if it is taken too far without its counterpart, divergence leads to softness and ennui. A metaphor that programmers will understand is the concept of writing a 100,000-line program on paper, or in a featureless text editor (e.g. Notepad) without an appropriate compiler or interpreter. In the absence of the exacting, brutal convergence imposed by the compiler (which rejects programs that don’t make sense), the programmer is deprived of the rapid and engaging divergence/convergence cycle on which programmers depend. Over time, the hand-written program will accumulate shortcuts and errors, as the programmer loses touch with the substrate on which the program will run. Related to divergence’s cooling effect is convergence’s heating effect. It’s very taxing to hack branches off of one’s “search tree”. It’s painful. As humans, we’re collectors and we don’t like setting things we’ve found out of view, much less discarding them. About two to three hours of this work, per day, is more than most people can reliably perform. Although extremely important, the convergent aspect of creativity is exhausting and leads to frayed nerves and difficult personalities.

Business executives’ jobs are social climbing, internal and external. Their jobs are intentionally made extremely easy to minimize the probability of social hiccups, for which tolerance is often zero. They’re paid five times as much as any person could possibly need, to eliminate the possibility of financial worry and allow them to purchase ample domestic services. They’re given personal assistants to remove pollution and stress from their communication channels. This makes sense in the context of what companies need from their executives. An executive only needs average intelligence and social skill, but he needs to sustain that level reliably, 24/7, under a wide range of unpredictable circumstances. Rare moments (“outlier events”) are where creative people earn their keep, but where executives set themselves on fire. To be an executive, one needs to be reliable. Executives may have the authority to set working hours, but they have to be on time, all the time, every day.

By contrast, the creative person often “needs a walk” at 3:00 in the afternoon and has no qualms about leaving work for an hour to enjoy an autumn day. After all, he does his best work at 7:00 am (or 11:30 pm) anyway. Creative people earn their keep in their best hours, and are wasting time in the mediocre ones. From an executive perspective, however, this behavior is undesirable. Why’d he up-and-leave at 3:00? There could have been a crisis! What risk! By definition, creative people are full of risk, socially gauche on account of their tendency to mentally exhaust themselves in their desire to produce something great, and therefore not reliable. Clearly “not executive material”, right? I disagree, because I think the discrepancy is more of role than of constitution. Most creative people could become tight-laced executives if they needed to do so. Creatively inclined entrepreneurs often do find themselves in this role, and many play it well. There is, however, an either/or dynamic going on. It’s necessary to choose one role and play it well, rather than attempting both and being very likely to fail. Someone who is exhausted on a daily basis by the demands of high-level creativity (and I don’t mean to say that creative work isn’t rewarding; it is, but it’s hard) can’t be a reliable people-pleaser in the same week, any more than a clay jar that just came out of a 1,000-degree oven can be used, before it cools, to store ice cream.

For this reason, business executives tend to look at their most creative underlings and see them as “not executive material”. Well, I agree this far: not now. That perception alone wouldn’t be such a problem, except for the fact that people tend to judge others’ merit on a “just like me” basis, especially as they acquire power (ego confirmation). They tend to be blind to the possibility that people very different from them can be just as valuable. Executives tend, for example, to favor mediocre programmers who don’t exert themselves at work, and who therefore retain enough mental energy for social polish. The technical mediocrity that sets in over most software companies is the long-term result of this. It’s not that companies go out of their way to “promote idiots”; it’s that they are designed to favor reliable career politicians over “risky” hard workers whose average performance is much higher, but whose lows are lower.

Would a creative person make a good business executive? I believe so, but with a specific variety of mentoring. Creative people given management positions generally find themselves still wanting to do the non-managerial work they did before the promotion. The problem here is that the manager can easily end up hogging the “fun work” and giving the slag to his subordinates. This will make him despised by those below him, and his team will be ineffective. That’s the major obstacle. Creative people tapped for management need to be given a specific ultimatum: you can go back to [software programming | game design | writing] full-time in 6 months and keep the pay increase and job title, but for now your only job is leadership and the success of your team. Good managers often find themselves taking on the least desired work in order to keep the underlings engaged in their work. It’s unpleasant, but leadership isn’t always fun.

I’ll go further and say that creative discipline is good practice for business leadership. Business leaders can’t afford to be creative at the fringe like artists or computer programmers, because the always-on, reliable social acumen they require precludes this kind of intellectual exhaustion, but the processes they need to manage are like an externalized version of the self-management that creative people need to develop in order to succeed. They must encourage their subordinates to grow and test new ideas (divergence), but they also need to direct efforts toward useful deliverables (convergence). Many managers who fail do so because they only understand one half of this cycle. Those who focus only on convergence become the micromanagers who complain that their subordinates “can never do anything right” and who rule by intimidation. They fail because the only people who stay with them are those who actually can’t do anything right. The too-nice bosses who focus solely on divergence will either (a) fail to deliver, and be moved aside, or (b) be unprepared when young-wolf conflicts tear apart their organization. So, although the creative personality appears unsuited for managerial roles in large organizations, I’d argue that creative experience and discipline are highly valuable, so long as the manager is prepared to temporarily sacrifice the creative lifestyle for professional purposes.

The results of creativity are highly valued, although sometimes only after an exhausting campaign of persuasion. The creative process is deeply misunderstood, creative roles are targets of misplaced envy because of their perceived “easiness”, and the people who have the most creativity (the top 0.1%) are rarely well-liked. People want our ideas, but not us and our “attitudes”. The shells, not the gnarly creatures. I invoke Paul Graham’s explanation for why “nerds” are rarely popular in high school. His explanation is that most high schoolers make a full-time job of being popular, while the nerds are too busy learning how to do hard things, such as programming computers or playing musical instruments. We see the same thing (the high-school social dynamic) in the slow-motion cultural and moral terrorist attack that we call “Corporate America”. People can invest themselves into working really hard in pursuit of the chance (and it’s far from a certainty) of producing something truly great. Or they can throw themselves wholesale into office politics and grab the gold. The world is run by those who chose the latter, and the cultural, artistic and moral impoverishment of our society is a direct result of that.

Breaking Men: bookends of the American Era.

WARNING: spoilers (Mad Men, Breaking Bad) ahead.

AMC has developed, over the past five years, two television shows of extremely high quality that seem unrelated: Mad Men and Breaking Bad. The first centers on Madison Avenue in the 1960s, and it depicts the ascendancy of the “WASP” upper middle class into a makeshift upper class, following the power vacuum created by the evisceration of the old elite in the 1930s and ’40s, as social upheaval and their own obsolescence chase them up the stairs. The lions of Mad Men become rich, aggressive businesspeople because they are getting old and there is nothing else for them to do. Although it can be cynical and morose, the show is eros-driven, focused on an elite sector of the economy responsible for building much of American middle-class culture, with still-new and exciting products like air travel, automobiles, and (well…) mass-produced, cheap tobacco cigarettes. What pervades the show is the sense that these people should be happy: they’re rich and important people in the most powerful country on earth in what seems like the most exciting time in history, with passenger air travel a reality and space tourism “inevitable” within 25 years. (About that…)

An interesting exercise regarding Mad Men is to think of its environment and its characters within a modern context. Most young professionals (outside of technology, where their jobs hardly existed that long ago) agree that the “golden age” was decades ago, when hours were short and, though it was harder to get in the door of top firms, career advancement once in them was virtually guaranteed (for a man, at least). Peggy and Don, who are frequently in the office at 7 o’clock, seem remarkably dedicated in comparison to their drunken colleagues. Whether this depiction of young professional life in the 1960s as a “golden age” is accurate is a matter of much controversy, and Mad Men takes both sides. On one hand, Pete Campbell, under 30 and not especially well-compensated at the time, is able to buy (!) a house in Manhattan for $32,000 ($230,000 in today’s dollars). On the other, the work environment is full of fear and authority (Pete is almost fired for pitching his own idea to a client) and social anxieties are so severe that self-medication with alcohol and cigarettes is a necessity. A bit of useful context is that it wasn’t at all normal or socially acceptable for professionals to leave jobs, as it is now, except for obvious promotions like first-day partnership in the new firm. So getting fired or demoted unto departure wasn’t the annoyingly unexpected, unpaid, 6-week vacation it is today: it could spell long-term ruin (cf. Duck Phillips). Every character in Mad Men lives on the brink of career disaster, and they know it.

If there is a more recent comparison for the advertising world of the 1960s, it would be investment banking between 1980 and 2008: a lucrative, selective, cutthroat, and often morally compromised industry whose cachet entitled it to the middle third of Ivy League students– people who were clearly capable, but not so capable as to be walking authority conflicts. By the same token, advertising was far more creative an industry (at least at junior levels, and within the confines of the law) than investment banking could ever be. Also, in the 1960s Silicon Valley wasn’t on the map, and air travel was still prohibitively expensive for many people, so the creative and entrepreneurial East Coasters who might be drawn to California in 2009 would have, instead, been drawn into advertising in New York. This created a very different mix from the cultural homogeneity of modern investment banking. In 2012, these people wouldn’t have been anywhere close to each other. Pushing them forward fifty years, Peggy would be an elite graphic artist with some serious coding chops, but living on a very moderate income and an hour away from Manhattan by train. Pete would be a pudgy private equity associate, far sleazier and less charming than his 1960s counterpart. Roger Sterling probably would have gone into law and, despite being utterly mediocre, attended Harvard Law, entered a white-shoe (“biglaw”) firm, and made partner based on his family connections. Harry Crane would be in Los Angeles, but wise enough to the times not to enter the entertainment industry proper; he’d be the third co-founder (5% equity holding) of a “disruptive” internet startup about to become a thorn in MTV’s side. These four would have nothing to do with each other and probably never meet at all. Joan? It’s hard to say where she’d be or what she’d be doing, but she’d be good at it. Don is even harder to place, being an embodiment of the zeitgeist in the truest sense of the word. His character wouldn’t exist if shifted 10 years in either direction. He’s a pure product of the Great Depression, the Korean War, and the 1950s-60s ascendancy of the U.S. middle class. Dick Whitman would have taken an entirely different path. My assessment of how he would rise (into venture capital rather than advertising, with the first half of his career spent in the developing world in the 1990s) I will save for another post, because this one needs to devote time to a seemingly opposite show: Breaking Bad.

Mad Men opens the American Era on the coast where the sun rises, and Breaking Bad closes it in the rocky, red desert where the sun almost sets. Walter White is, in many ways, the opposite of Don Draper: fifteen years older at the story’s onset, but three decades older in spirit, a failed chemist who squandered his genius, a schoolteacher eventually fired for misconduct, and a man who turns to vicious crime out of a deep hatred of society (rather than the manipulative and cynical dismissal that Draper harbors). The name Draper might be an allusion to the cloak he has drawn over himself; White begins the show naked (almost literally). He’s a feeble, desperate man who has wasted his genius (for reasons left unclear, but that seem connected to his massive ego and a callously discarded relationship) to become a mediocre teacher in a mediocre school, one who works at a car wash on his weekends to support his family. In the first episode, he gets sick with a cancer that his shitty health insurance won’t cover. Out of financial desperation, he teams up with one of his failed students to cook methamphetamine.

Draper seems to glide into advertising, almost by accident. Dick Whitman didn’t assume Draper’s identity because he wanted to become an ad magnate; he did it to escape a war when he found the effort to be pointless. He rose with the rising tide of the American middle class, and was wily enough to come up a bit faster than the rest. On the other hand, Walter White’s simultaneous ascendancy (into the drug underworld) and free-fall (into moral depravity) occur by brute force, although only some of the force is his. The world is collapsing, and he becomes somewhat weightless by falling with it, but in full awareness of the thud at the bottom. That Walter will be dead or ruined and humiliated by the end of the fifth and final season seems obvious at this point; the open question is how far down he will go (morally speaking) and whether he will gain a tragic recognition of the misery he has inflicted upon his family and the world.

The catalyst for Walter White’s turn to crime is a diagnosis of lung cancer, giving him about a year to live. It may be a stretch to connect this with Mad Men, but one of the agency’s primary accounts is Lucky Strike cigarettes, featured prominently in the first episode (“Smoke Gets In Your Eyes”) and throughout the season. Mad Men features an optimistic, future-loving backdrop of industrial ascent and capitalistic triumph. Breaking Bad‘s backdrop is of industrial waste, wreckage, pollution, and toxicity. Most obvious is Walter’s product, an artistically pure form of one of humanity’s most poisonous substances– crystal meth, nicknamed “Blue Sky” for its extremely high quality and blue color. Drug kingpin Gus Fring hides behind a fried-foods chain arguably responsible (in small part) for American obesity, while Walter’s megalab resides in a squalid industrial laundry. Industrial capitalism’s messes are everywhere. Walter’s cancer illustrates the invasiveness of toxicity: the damage lives within the protagonist’s body, and threatens to kill him at any time.

Connecting Breaking Bad to the demise of the American middle class in general is relatively straightforward. Almost all of the major causes of American decline (and the collapse of its middle class) are featured, most of them indirectly, in this show. The international, big-picture causes of American decline are (a) international violence, (b) the coddling of our parasitic upper class (resulting in, among other things, irresponsible deficit spending) and (c) our reliance on polluting, increasingly scarce, 20th-century energy sources; Breaking Bad features two of these three. The first is featured prominently, in the never-ending and international cycles of violence involving Gus Fring, the Mexican Cartel, Tuco, “the Cousins”, Hector, and Hank. The second of these decline factors (the parasitic upper class) is shown indirectly– through Walter White’s turn to parasitism, the callous sleaze of Saul Goodman, the two-faced upper-middle-class respectability of Gus Fring, the illegal financing behind Skyler’s “American Dream” of owning a business, and the depressing failure of small-business owner Ted Beneke and his industrial-era enterprise.

Those are the big-picture causes of American decline, which aren’t terribly relevant to Breaking Bad and its laser focus on one man. The finer-grained, more individual causes of the American middle class’s decline are the “Satanic trinity”: education costs, housing, and healthcare. Each makes an appearance in Breaking Bad, healthcare most prominently, with Walter forced into crime by uncovered expenses associated with his cancer. Albuquerque did not experience the mostly coastal housing bubble, so housing makes its cameo through destruction: Ted Beneke is killed by his house, while Walter and Jesse’s hideout (a home Jesse would inherit if he were better behaved) is ruined in a rather morbid way– a body is to be disposed of using acid, and Jesse uses the bathtub instead of acid-resistant plastic, destroying the tub, bathroom, floor and house in a blow-down of liquefied human remains. The third of these, education, is only featured at the show’s outset: White is a brilliant but uninspired and miserable educator, and Jesse is a failed ex-student of his. Education cannot be featured further in Breaking Bad because education focuses on the future, and all its characters reliably have is the present. This extreme and desperate present-focus is also what makes Walter’s family (aside from his clear desire to protect them) deeply uninteresting (to the viewer). Walter’s son is disabled but of high intelligence and possibly has an interesting future, but for now he’s just an expensive and typically difficult high-school student. His daughter is an infant, and Walter will almost certainly not see her reach adulthood. Walter may wish to give them a future, but for now his focus is only on immediate survival.

If Breaking Bad is about the decline of the American middle-class, it’s also about poison and the impossibility of containing it. Walter White begins the show with cancer– a parasitic clump of useless cells that kills the host by (among other things) releasing poison into the body. Realizing his life’s failure, and that he’ll be leaving his family with nothing, he begins his life of crime, which is also the only way he can afford treatment for his illness. Walter beats the cancer (for now, as of the end of Season 4) but in doing so he becomes the cancer. He’s fired from his job as a schoolteacher, signifying the end of his useful social function. He mutates into a useless “cell” and begins releasing poison, in enormous doses measured in hundreds of pounds per week of crystal meth, into society. What Breaking Bad also shows us is that toxicity can never be contained, at least not permanently. The acid intended to dissolve a corpse destroys a home. Hamlet, the notorious Shakespearean antihero, has a body count of only eight. By the end of Season 4, Walter is indirectly responsible for a mid-air plane crash (one that would have been national news if real) over a major city that killed 168 people, he has cost his brother-in-law his career and physical function, he has murdered several people, and he has placed his family directly in danger.

Mad Men and Breaking Bad may be entirely unconnected in their origins. These dramas exist in entirely different worlds, fifty years, half a continent, and at least one social class apart. The shows appear to have nothing to do with each other, aside from extrinsic aspects such as the cable channel (AMC) that distributes them. In fact, they’re connected only by the broad-but-thin story arc of American ascendancy, calamity, and decline.

There are two riddles posed in quick succession in Gödel, Escher, Bach. The first is to find an eight-letter word whose middle four letters are “ADAC”. The second is to find an eight-letter word whose first two letters, and its last two, are “HE”. Presented separately, both of these are very hard. I can only think of one word that solves each. Presented together, the riddle becomes much easier. I have similar thoughts about the unspoken (and quite possibly unrealized by the shows’ creators) connection between these two television shows. These shows are only companions if one knows the rest of the story, which is that a historical incarnation of the American nation (20th-century, middle-class) was born in the time of one and died in that of the other. Otherwise, they could have been set in entirely different worlds.

As Mad Men advances, the optimism of the era becomes clear. An enormous rocket ship is launching, and the characters’ anxiety comes from the fear that they might not be (or might not deserve to be) on it. What they foresee least of all is what will actually happen. Catastrophic urban decay in the 1970s and ’80s, especially in New York? Yeah right, that’s impossible. Too much money in these cities. Investment banking (a bland WASP safety career) eclipsing advertising as “the” coveted (and far slimier) “prestige” industry? Not possible; those guys are morons. The re-emergence of America’s defeated, obsolete upper class in the 1980s, symbolized by the election of a third-rate, once-leftist actor to the presidency? Impossible; all politics is center-left (cf. Eisenhower, Kennedy, Nixon) these days. The rapid transition of the authoritarian, leftist, communist assholes into authoritarian, right-wing “neoconservatives” who’d get us into unwinnable wars in the Middle East? Just ridiculous. An emergent and militaristic American upper class representing more of a threat to national and world security than the Soviet Union, which will implode in the late 1980s? Insane concept. The world in which the Mad Men live is shaking, and the perception that it’s about to be destroyed is very real, but no one can envision how it will be destroyed, or why. None can foresee the coming misery or spot its sources; all eyes are on that rocket ship. In Breaking Bad, we see the wreckage after that rocket comes back to earth five decades later, slamming into the cold New Mexico desert, and annihilating the middle (class, and geographical center) of an entire nation when it lands.

Breaking Bad is more personal and ahistoric than Mad Men. Walter White doesn't care about "America" or the middle class or what is going to happen to either, because he's not trying to exploit macroscopic social trends; he is just trying to survive, and he exploits human weakness (an ahistorical source) because it's there. Breaking Bad's backdrop is the decay of "Middle America", just as Requiem for a Dream is set against the calamitous end of the New York middle class, but it's a story that could be told in any era. Its relevance is maximized by its currency, and the involvement of health insurance (instead of, say, gambling debt) as a proximate cause of Walter White's turn to crime suggests the current era and an external locus of original fault, but the personal and tragic elements of Walter's story could be given another backdrop.

Breaking Bad does, however, belong in a technological society such as ours, just as Mad Men belongs in a time of colored-pencil creativity and innovation. Both Mad Men and Breaking Bad grapple with morality, but the latter does so more fiercely. Mad Men illustrates the application of mediocre artistic talent to a legitimate purpose of low social value, in the pursuit of vain narcissism and reckless ambition. Breaking Bad features the application of extreme scientific talent to outright evil, in the pursuit of mere survival amid a world that is collapsing on both a personal (Walter's cancer) and a national (health "insurance") scale.

What do these television shows tell us about ourselves and our history? First of all, Mad Men is set amid the birth of an American society, but not the first one. The American nation (as a people, united by ideas, though those ideas have evolved immensely over time) has existed for over three hundred years, and it has been remarkably flexible and mostly progressive (the recent turn toward political corruption, corporate coddling, millenarian religious superstition, xenophobia, and even misogyny by the conservative movement is a blip, taking a historical perspective). Just as "America" (as a nation) existed long before there was a Madison Avenue, it will persist long after the flaming wreckage of "health insurance" (i.e., the business model of taking money from the well and robbing them when sick, because sick people don't fight thieves but the well have the money) and methamphetamine and the storm of underemployed talent (Walter White, the failed chemistry genius; unemployed Ph.D.s in #Occupy) pass, as these calamities inevitably will. Mad Men is anxiously erotic while Breaking Bad is fatally thanatoptic, but they represent the birth and death of just one society on this soil: bookends of a single era, not of the first or last society that will exist in this nation. Walter White will probably die in his 50s, but his daughter may see a better world.

An interesting question is, “What’s between these shows?” Fifty years pass between two television dramas that, aside from historical alignment, have nothing to do with each other and are, in many ways, opposites. What would a 1980s counterpart feature? The best answer that I can come up with is that none should exist. Spring and autumn are the beautiful, unpredictable, gone-too-fast seasons that remind us of our mortality. Mad Men is set in May, and there are a few great days in it, but the May it gets is mostly the sickly humid kind where each day is either damp and rainy or oppressively hot, and in which February’s chill (the Great Depression, the wars, Dick Whitman’s miserable childhood) remains in the elders’ bones and will be there forever. Breaking Bad occurs in November, but not the beautiful, temperate red-leaf kind associated with northeastern forests; this autumn is set in a featureless desert where it’s cold and overcast but will never rain. With that said, to feature this American era’s dog days (the 1980s) seems artistically irresponsible, largely because it was a time of very little historical value. That era, that feeble summer in which America’s old parasitic elite (whose slumber made the middle-class good times of Mad Men possible, who used lingering racist sentiment to re-establish themselves in the 1970s and ’80s, and whose metastasis damaged that middle class and made the misery of Breaking Bad inevitable) re-asserted itself, doesn’t deserve the honor. Just my opinion, of course, and I was born in that time so I cannot say nothing of value came from it. Still, to put a forty-year asterisk between these two eras seems highly appropriate.

If nothing else, no one wants to see the point where the rocket, badly launched and at a speed below escape velocity, reaches its zenith and begins careening toward Earth.

Talent has no manager

For a store clerk, his "manager" is a boss: someone who can fire the clerk. By contrast, an actor's "manager" works for the actor and can be fired by him. In one context, the manager holds power; in the other, he's a subordinate. The word isn't being misused; the overloading comes from the shifting relationships it describes. In both cases the person is accurately described as a "manager", but the relationships are utterly different (opposite, in fact). Why is this? Answering that requires looking at what it means to be a manager.

A manager is a person entrusted with decisions related to a resource that its owner cannot as easily handle. A hotel's manager operates the hotel day-to-day; if the owner is a different person, he passively reaps the benefits. For an entertainment personality's manager, the asset being managed is the person's reputation and career. In both cases, if the owner of the asset decides that the manager is doing a poor job, the manager is replaced. Managers work for owners. That much is clear. In the case of a low-level employee like a store clerk, the clerk doesn't really own anything of value; his labor is replaceable. Although his supervisor is introduced as "his manager", the reality is that this person manages the store, not the clerk. He is the store's manager, and the clerk's boss.

The word "boss" (from the Dutch baas) replaced "master" in the early stages of the Industrial Revolution, because of the latter's association with chattel slavery. In accord with the euphemism treadmill, "boss" eventually went out of favor as well, replaced by "supervisor", which was replaced in turn by "manager". Having a personal manager sounds a lot better than having a boss, but "boss" is the more accurate term. It may be blunt, but it works well for the purpose.

The job of the boss is to represent the interests of the company, not the employee. He or she cannot be expected to serve two masters, just as it would be inappropriate for an attorney to represent both sides of a lawsuit. The result is that employees often feel shafted when their "managers" fail to act as their advocates, instead preferring the company's interests (or, at least, what the manager represents as the corporate interest) over the employee's own. They shouldn't feel this way. The boss is doing his job: the one he gets paid for. What's unfair to the employee is not that his boss prefers the company's interests over his, but that the employee has no advocate (except himself) with any power. Full responsibility for managing his talent falls on the employee.

Who is talent's advocate? Generally, there is none. Talent alone, one might argue, is not very valuable: experience, reputation, and relationships are usually required to unlock it. Because of this disadvantaged initial position, the person with the talent is expected to advocate for himself. Just as it's dangerous to represent oneself in a court of law, it can be hard to negotiate on one's own behalf when it comes to career matters. It helps to have an advocate who isn't risking his own relationships and reputation in the process, and without one, a lot of people don't bother. Most people are underpaid by 10 to 50 percent because they are uncomfortable negotiating better compensation. Their bosses aren't being evil; these people simply have no advocate and fail to represent themselves. For all that, I think compensation is an arena in which employees are actually treated more fairly than they are in intangibles. Companies can't legally renege on promised compensation, and basic negotiating skills are often all it takes to get a fair shake there, but they can (and frequently do) use bait-and-switch tactics to lure the best people with promises of more interesting projects than those people actually end up working on. This is a common way for companies to mislead employees into working for them, protected by the fact that no one wants a 5-month job on his CV.

In the workplace, talent is of high long-term importance. A company that can't retain talent will face a creeping devaluation of its prestige, mission, and ultimately, its ability to succeed as a business. For this reason, a few progressive managers advocate on behalf of talent, at least in the abstract, because they know it matters as much to the general interest of the company as it does to their talented subordinates. This is admirable, but it should be considered an "extracurricular" effort, one that these managers take on at their own risk. When these efforts fail to show short-term (one-quarter) results, the jobs of those who pushed for them end up on the line.

The reality is that this progressive attitude is quite rare. Most managers (who have no advocates of their own, either) are just as worried about keeping their jobs as the people they manage, and aren't comfortable advocating for interests other than the ones they're required to represent. Companies give lip service to "mentorship" and career development, but often these are just ad copy, not real commitments. What looks like a progressive company is usually an adept marketing department. Moreover, most workplace perks are pure vanity. "Catered lunches" are a nice benefit worth a few thousand dollars per year, but they're largely provided to reduce lunch times and portions (people who eat out are served large portions and become measurably less productive for two hours). That's not a bad thing, but it's not given out of altruism. Likewise, perks like an in-office XBox or foosball table are just clumsily applied band-aids. Real professionals go to work for the work, not the diversions.

As I said, the boss cannot (even if he wished to) advocate for subordinate talent, because this would cause a conflict of interest between his professional duty to the company's owners (or their proxies, who are his managers) and this ancillary role. It is also difficult, in a "lean" (euphemism for "we overwork managers") environment where it's typical for a manager to have 15 to 20 reports, for the manager to represent the interests of all the people under him. In practice, these "flat" organizations lead to favoritism made necessary by clogged communication channels, while bosses who take "proteges" usually find that their disfavored subordinates decline in productivity and loyalty, which reduces the team's performance on the whole. The result is that the manager must be disinterested and impersonal with all reports, so career advancement through typical channels is difficult if not impossible. "Extra-hierarchical" work (collaboration with people outside of one's reporting structure) can be far more effective, because people tend to favor those who help them out but aren't required to do so, but this effort also makes many managers feel threatened (it seems disloyal, it creates the appearance of someone attempting to engineer a transfer, and managers whose best reports transfer out lose face with their bosses).

If talent has no advocate, does this mean that the interests of talent are ignored? No, but they're addressed in an often ineffective, far-too-late way. A talented person's best move, in 90 percent of organizations, is to find another job in another company. Of course, people are free to do this, and often should, but constant churn is bad for the organization, and leads to a long-term arrangement in which the needs and desires of talent are ignored: if employees are going to leave after 6 months, why invest in them? Alternatively, a talent revolt often manifests as reduced productivity, which reduces talent's leverage in negotiation and leads an organization to conclude that talented people are "troublemakers" and that hiring the best people isn't worth it in the first place.

The position of talent is especially tenuous because it's a dangerous asset to hold. If every thousand dollars in cash increased a person's risk of mental illness and interpersonal failure by 0.01 percent just by virtue of existing, those who might be billionaires would either give the shit away or burn it. Of course, this isn't the case. Tangible financial assets (real estate, wealth, ownership in productive enterprises) are largely inert in terms of "mana burn" (the tendency to inflict harm if unused). They are at constant risk of being diminished on the market, and this may be a source of anxiety for some people, but the only thing they can lose is their own value. Talent, on the other hand, becomes extremely detrimental if unused. A millionaire "trust fund kid" working jobs below his means (as an underpaid arts worker in Williamsburg, when his father could easily get him a "boring" but cushy and lucrative position as a junior executive) is not going to be especially unhappy, because the situation can be improved at any time. A person of high talent trapped in a mediocre career, by contrast, will only fall farther. Perversely, although it's easier to find an advocate or manager for a building or a business, talent needs one more.

The role of "talent advocate", I believe, is unfulfilled. A boss cannot fill this role without entering into a conflict of interest that endangers his career. Companies' HR departments are useless toward this purpose as well. HR has an "eros" (hiring and advancement) and a "thanatos" (firing and risk mitigation) component. The first of these sub-departments works for the company's management: often they mislead people into joining teams or companies with undeliverable promises of career advancement and work quality, not because they are malicious, but because they lack the resources (or the duty) to investigate the promises made by the managers for whom they work. An in-house recruiter can't be expected to know that a position advertised as "a Clojure job" is 90% Java. The second half of HR works for the firm's attorneys, finance department, and public relations office, and its purpose is (a) to encourage failing employees to leave the company before formal termination, and (b) to prevent disgruntled or terminated employees from suing or disparaging the company in the future. As for the advancement of talented people already in the company, managers are trusted (not always wisely) to handle this on their own. This leaves nothing in a company's HR department that can advocate for talent; it would arguably go against HR's professional duties to do so.

Talent needs an advocate independent of any specific company, since its best move is often to leave a disloyal or detrimental company outright. I believe that requirement of independence is quite clear, since companies' obligations are to shareholders only and managers' obligations are solely to their companies. (That most middle managers, in practice, place their career interests above both those of their subordinates and those of their companies is an issue I will not address for now.) Independent recruiters, one might think, could fulfill this role. Do they? My experience has been that I do better as my own advocate than when using a recruiter. Because recruiters collect a percentage of the first-year salary, they aren't incented to act in the employee's, or even the employer's, long-term interests. They are paid for putting people in roles that last at least 12 months, not for looking out for the employee's career interests (which may mean a 10-year career at one company, or jumping ship almost immediately). Of course, there are good recruiters out there who truly value the long-term interests of the people they place; it's just that my memory (and, to be fair, I haven't used one since I was a 23-year-old nobody) is that there are far more ineffective or just plain bad ones, focused on quantity of placements rather than quality. It's not surprising that it works this way, since job quality (holding a person's level of skill constant) is only loosely correlated with compensation, which is what recruiters' fees are based on. Since it's companies that write recruiters' checks, it shouldn't be surprising where their loyalties lie.

Talent may be more valuable than financial resources, but it’s harder to discover and it’s far more illiquid. A company can write a $25,000 check to a recruiter, while a talented person can’t easily pay the recruiter with “$25,000 worth” of talent. Financial assets can be sliced into pieces of any desired size that are useful to anyone, so recruiters can be paid with those. Talent can’t. A recruiter cannot feed his family with 100 hours’ worth of server software. (“Tonight, we’re having fried Scala with NoSQL for dessert.”)

A possible improvement would be for recruiters to be compensated based on the "delta", or the amount by which they improve their clients' salaries. This would be like the pay-for-performance model by which hedge fund managers are compensated: a small percentage of assets (usually 2%) and a larger percentage of profits (often 20%). In other words, instead of collecting a flat percentage (typically 15%) of the first-year salary, the recruiter could be compensated based on the hire's long-term performance. This might give recruiters an incentive to place people in positions where they are likely to succeed over the long term. Would it encourage recruiters to fill the badly-needed role of talent advocate? I'm not sure. It might just incent recruiters to find high-paying but awful jobs for their clients.
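To make the arithmetic concrete, here is a minimal sketch of how a flat fee compares with a hedge-fund-style "delta" fee. The salary figures and function names are illustrative assumptions, not real industry numbers.

# Comparing the two recruiter fee models discussed above.
# Rates and salaries are illustrative assumptions only.

def flat_fee(first_year_salary: float, rate: float = 0.15) -> float:
    """Standard model: a one-time cut of the hire's first-year salary."""
    return rate * first_year_salary

def delta_fee(old_salary: float, new_salary: float,
              base_rate: float = 0.02, delta_rate: float = 0.20) -> float:
    """Hedge-fund-style model: a small cut of the new salary ("assets")
    plus a larger cut of the improvement delivered ("profits")."""
    delta = max(new_salary - old_salary, 0.0)
    return base_rate * new_salary + delta_rate * delta

if __name__ == "__main__":
    old, new = 90_000, 120_000
    print(f"Flat fee:  ${flat_fee(new):,.0f}")        # 0.15 * 120,000 = $18,000
    print(f"Delta fee: ${delta_fee(old, new):,.0f}")  # 2,400 + 6,000  = $8,400

Under these assumed numbers, the delta model pays less for merely filling a seat and more for genuinely improving the client's position, which is the point; it also shows why such a recruiter might be tempted to chase the highest-paying offer rather than the best job.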

One of the difficulties associated with the talent-advocate role is that it requires the ability to assess talent. Having a talent is generally a necessary, but not sufficient, condition for being able to detect it in others. What this means is that the best talent advocates are going to be people who, themselves, have those skills and abilities. Since currency of technical skills is highly relevant, it’s best that they keep their skills up-to-date as well. Talent advocates, in other words, need to have the talent they intend to represent in order to understand what people with that kind of creativity (a) are and are not capable of, and (b) need from an employer to be motivated and successful. This requirement that the talent advocate be involved in the work for which he advocates makes a full-time recruiting effort unlikely, but without a full-time effort, it’s unlikely that the talent advocate can acquire the connections (to employers) that are necessary to place people in the best positions. In short, this is a very hard role to fill. I can’t see an easy solution.

For the time being, talent must be its own advocate and its own “manager”. This leaves us with what we already know.

Emerging from the silence

I’ve been on Hacker News a lot in the past few weeks, and I’ve shared some personal details about companies at which I’ve worked and what I’ve seen them do. I don’t want to talk any more about that. Turning over the past is, in many cases, nothing but an excuse for neglecting the future. It can very easily become a trap. The desire to “get a firm handle” on the past, to assess the causes of social harms and unexpected evils, can be very strong, but it’s ultimately useless. Those people mean nothing now, so discard them from your mind.

What I find more interesting is the degree to which people will protect unethical behavior in the workplace, as if it were a common affair for people to damage careers, other teams, and even entire companies for their own personal benefit. As if it were somehow OK. As if it weren't even worth talking about when people do obnoxious, harmful, and extremely costly things for their own short-term, personal benefit. As if "office politics" were an invariable fact of life. Perhaps this latter claim is true, but there are major differences in degree. A debate over whether a program's interface should represent IDs as Longs (64-bit integers) or Strings can be "politics", but such debates usually come from well-intentioned disagreements in which both sides are pursuing what they believe to be the project's (or company's) benefit. That is orders of magnitude less severe than the backstabbing and outright criminality that people, thinking they're doing nothing wrong, will engage in when they're at work.

I try to be a Buddhist. I say “try to be” because I’ve been failing for thousands of years and the process of getting better at it is slow. That’s why I’m still here, dealing with all sorts of weird karmic echoes. I also say “try to be” because I find religious identification to be quite shallow, anyway. In many past lives, I’ve had no religious beliefs at all, expecting death to be the end of me. It wasn’t. I’m here. This is getting divergent, so let me put it in two words: karma’s real. Ethics matter. There is no separation between “work” and “life”. People who would not even think of stealing a steak from the grocery store have no problem with actions that can cause unjust terminations, team-wide or even whole-company underperformance, and general unnecessary strife. How can this be accepted as consistent? It can’t be. It’s not. Wrong is wrong, regardless of how much a person is being paid.

I was brought up with a great work ethic, in all senses of the word. The consequence of this is that I’m an extremely hard working person, and I don’t have much patience for the white-collar social climbing we call “office politics”. It’s useless and evil. I find it disgusting to the point of mysophobic reaction. It has nothing to do with work or the creation of value. Why does it persist? Because the good don’t fight. The good just work. Heads down, they create value, while the bad people get into positions of allocative power in order to capture and distribute it. Most people are fatalistic about this being the “natural order” of the world. Bullshit. Throughout most of human history, murdering one’s brother to improve one’s social or economic status was the “natural order” of the world. Now it often leads to lifetime imprisonment. It may not seem so, but the ethics of our species is improving, although progress feels unbearably slow.

Recently, there's been a spate of people exposing unethical behaviors at their companies. Greg Smith called out the ethical bankruptcy he observed at Goldman Sachs, while Reginald Braithwaite (Raganwald) issued a fictional resignation letter on the onerous and increasingly common practice of employers requiring Facebook walk-throughs as a condition of employment, a gross and possibly illegal invasion of privacy. Raganwald's letter wasn't only about privacy; it was also a commentary on the tendency of executives to wait too long to call attention to unethical behaviors. His protagonist doesn't leave the company until it hurts him personally and, as Raganwald said of the piece, "this was one of the ideas I was trying to get across, that by shrugging and going along with stuff like this, we're condoning and supporting it."

Why is there so much silence and fear? Why are people afraid to call attention to bad actors until after they've been burned, can be discredited with the "disgruntled" label, and it's far too late? Is Goldman going to put a stop to bad practices because a resigning employee wrote an essay about them? Probably not. Are people going to quit their jobs at the bank when they realize that unethical behavior happens within it? Doubtful. Very few people leave jobs for ethical reasons until the misbehaviors affect them personally, at which point they are prone to ad hominem attacks: why did he wait to leave until it hurt him?

Human resources (HR) is supposed to handle everything from benign conflict to outright crimes, but the reality is that it works for the company's lawyers and will almost always side with management. HR, in most companies, is the cheerleader who could upset the male dominance hierarchy if she wanted to, simply by refusing to date the brutish men who harm others, but who would rather align herself with power. One should not count on HR to right ethical wrongs in the workplace; it's far too cozy with management.

Half the solution is obvious. As work becomes more technological, companies need technical talent. Slimy people (office politicians) end up steering most corporations, but we're the ones pedaling. We can vote with our feet, if we have the information we need.

There's been an American Spring over the past couple of months: a number of people, largely in the financial and technological sectors (where demand for technical talent is high enough to make speaking out less dangerous than it would usually be), have come out to expose injustices they've observed in the workplace. We need more of that. We need bad actors to be named and shamed. We need these things to happen before people get fired or quit in a huff. We need a corporate-independent HR process that actually works, instead of serving the interests of power and corporate risk mitigation.

Here's a concept to consider. We talk about disruption in technology, and this surely qualifies. This country needs a Work Court: an "unofficial" court in which employees can sue unethical managers and the companies that defend or promote them. This court won't have the authority to collect judgments, and it doesn't need it. Awards would be paid out of advertising revenue, the exposure of bad actors (assuming they are found guilty) would be considered punishment enough, and it would be up to plaintiffs whether they judge it better for their reputations to be identified with the suit or to remain anonymous. What this provides is a venue for people to gain redress for the low- and mid-grade ethical violations that aren't worth a full-fledged lawsuit (which can take years, cost a fortune, and ruin the plaintiff's career). It would also remove some of the incentives that currently reward bad actors and keep them in power.
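The mechanism is small enough to fit in a few fields. Here is a purely hypothetical sketch (every name and field below is invented for illustration) of the record such a court would keep: no enforcement power, an advertising-funded award, and a plaintiff-controlled anonymity switch.

# Hypothetical sketch of the "Work Court" concept described above.
# All names and fields are invented for illustration.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    NOT_GUILTY = "not guilty"
    GUILTY = "guilty"

@dataclass
class WorkCourtCase:
    defendant: str              # the manager or company accused
    allegation: str             # the low- or mid-grade ethical violation
    plaintiff_anonymous: bool   # plaintiff chooses what's better for their reputation
    verdict: Optional[Verdict] = None
    award_usd: float = 0.0      # paid from advertising revenue, never collected from the defendant

def publish(case: WorkCourtCase) -> str:
    """Exposure is the only 'sentence': nothing is published unless guilt is found."""
    if case.verdict is not Verdict.GUILTY:
        return ""
    who = "an anonymous plaintiff" if case.plaintiff_anonymous else "a named plaintiff"
    return f"{case.defendant}, found guilty of: {case.allegation} (brought by {who})"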

That, unlike another goofy “semantic coupon social-gaming” startup, would be an innovation I’d be interested in seeing happen.