Seeking co-founders. [April 23, 2013]

It has rained frogs before and, yes, it occasionally rains dragons. This is one of those times.

I have a startup concept. I think it’s legit. Here are some of its features:

  • validated business model. This is an improvement (using machine learning, user-supplied data, and an innovative market mechanic that has never existed before) on a billion-dollar industry. 
  • a “sexy” problem. I’m not getting into details, but if we pull this off, it’ll be known around the world, and hundreds of thousands of people will love (and some will pay lots of money for) the product.
  • a progressive outlook. One of my major gripes with most of “Social” is that it’s focused on documenting social graphs rather than expanding them and improving people’s lives. This is focused on hard-core expansion.
  • lots of open-source activity. For the goal of rapid prototyping, I’d like to build this thing in Clojure. We’ll be building up Clojure’s machine-learning infrastructure in a major way, and we’ll be putting a lot of our best stuff into the open-source world.
  • “blue ocean” (for now). Right now, there are no competitors. There will be, in time, so we need to get a head start.
  • pivot potential. One of the subproblems is to assess code quality (including documentation, hence NLP) automatically, and that’s something that could become an enterprise product if the riskier, many-moving-parts concept that I have in mind proves impractical.

Here are the downsides:

  • I have about 2 weeks to pull starting parts together. I’ve worked for some terrible startups that have wrecked my finances, so I don’t have the savings that a person of my age should. I can’t afford to burn savings, at all. I’m going to need a “10x hustler” (co-founder #1; see below) who can bring in month-to-month money right away. Failing that, we can’t do the project. I’m married and almost 30 and can no longer afford to pretend that “boring adult stuff”– like health insurance, the need to save for retirement, and saving for future child-raising expenses– doesn’t exist. I don’t expect to be matching the compensation level I’d be getting in finance (I regularly turn away $200k+ hedge fund jobs, and I wouldn’t draw that much out) but I can’t afford to be savings-negative.
  • We’ll be heavily reliant on third-party relationships. This is a problem with three main classes of actors. For two of them, there’s already strong signal that they would participate. The third class (who will provide our starting data) is more uncertain; it has two subclasses, one of which would be thrilled to participate but whose data will require serious NLP to mine, the other of which will have high-signal data but want to protect its interests. It’ll probably take 6 months to get that in order, and I’ll need someone with strong negotiation skills.
  • We’ll need front-end expertise. I’ve worked on back-end projects my whole life, but back-end is hard to sell. That comes down to front-end work. I can learn FE, and I will, but while I come up to speed on the presentation aspects, we’re going to need someone who’s already capable of building kickass demos.

Here are some traits of the company I want to build:

  • “Mid-growth”: after 25 people, no more than 40% headcount growth per year. I don’t want to be one of those horrible VC-istan companies that loses its culture in fast hiring. This isn’t a problem that requires a lot of “meat” but it does require a lot of smarts. 
  • Open allocation after 25 people. In our high-risk startup stage, we’ll make some autonomy compromises that are necessary to make the damn thing work. Laser focus. (We’ll be compromising our own autonomy, as its leaders, so it’s only fair…) However, once we’re de-risked, we simply won’t have jockeying for “plum projects” or “head count” squabbles as managers fight turf wars, because people will be able to transfer freely across the company, making projects compete for engineers rather than the reverse. No corrupt “transfer process” infrastructure that devolves into managerial extortion. I want a “stop-complaining-and-fix-the-damn-thing” culture where, instead of complaining to managers to get things done, people dive in and do it.
  • Outside of a traditional “tech hub”, with tolerance of distributed work. This is a longer-term project than the go-viral-or-die VC-istan nonsense. We’re going to need some very strong people, and we’ll need to pay them well, but we can’t afford to locate ourselves in a city where $150k is “a unit”. If this happens, I think we’ll probably set up in Austin, Madison, Portland, Pittsburgh or possibly Boston– unless business needs require us to be in SF or NYC. Of course, we’ll need to draw talent from everywhere and we’ll allow ourselves to be up to about 25% distributed.
  • Targets of 25+% female technologists, 40+% over-35, and 25+% with-children. Talent comes from all places, and we better be serious about making sure that all kinds of people are able to fit in: not just privileged, young white males like me. Forty-seven-year-old woman who happens to be a top-notch Lisper? Ready to kick ass? Then come in! I want this to be a company that I’d be happy to work for at age 60. This isn’t only because I value diversity (although that’s true) but also because, if no women or older people want to work for you, it’s a sign that you have a fucked-up culture. We need canaries and we need to make sure they thrive.
  • Employees given the same respect as investors. Because they are. In fact, they’re investors of time and of their careers, which (in my view) entitles them to even more respect.
  • Profit-sharing over equity. Tiny equity allotments, typical in VC-istan, will be replaced with much larger profit-shares. My intention is to kick 50% of profits back to employees, with no software engineer getting less than 1/(2*N) of that pool, where N is the number of people (a quick sketch of the math follows this list).
  • Programmer is the highest title. We’ll have CxOs for external purposes as we market the company. Internally, the highest title will be Programmer. Not “Principal Software Developer” or “Chief Architect” or “Senior Staff Software Engineer XVI”. Top title (in pay, and respect) is Programmer (link NSFW; Zed Shaw rant).
  • Non-extortive workplace. Teaching and inspiring others will be the only method of management. To the extent that we may need “people management”, we will never accord managers unilateral termination or credibility-reduction powers. If they are not helpful to the managed, they go. Employees work for the company, not their specific “bosses”, and have the right to change managers or projects as needed.
  • No “performance” reviews. The language of “performance” reviews is fucking insulting and I won’t tolerate it in the company we build. We’ll have impact reviews in which we assess how well an employee’s work is being integrated with the rest of the company. Unless bad-faith non-performance is obvious, low-impact is our fault, not the employee’s, and we will work together to fix that.
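
For concreteness, here’s a minimal sketch of that profit-share floor in Clojure (the language I want to build the back-end in). The dollar figures and function name are made up for illustration, not a spec.

    ;; Assumes: half of profits go to the employee pool, and no engineer receives
    ;; less than 1/(2N) of that pool, where N is headcount.
    (defn min-engineer-share
      "Smallest payout any single engineer can receive under the floor."
      [total-profit headcount]
      (let [pool (* 0.5 total-profit)]
        (/ pool (* 2 headcount))))

    ;; Example: $2,000,000 in profit, 25 people => $1,000,000 pool, $20,000 floor.
    (min-engineer-share 2000000 25)  ;=> 20000.0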

Okay, so here are two co-founders I will absolutely need.

Co-founder #1: a business co-founder.

You respect and understand technology (but know enough to leave it alone) and you know how to sell it. We will be equals but, to the outside world, you’ll be the CEO. You have the contacts and resources to keep this idea funded and make sure it is coherent with the market. You will have authority to change business strategy. If you tell me that we need to “pivot” because the market is not interested in what we are offering, I will trust your judgment and make it happen. This will give you a great deal of authority over what is now “my idea” but, once we are a team, it is no longer my idea. You’ll be responsible for driving what we do to market coherence (i.e. keeping us paid and profitable).

You will raise money and public interest in the product, but your ultimate goal must be to bring us to profitability as quickly as we can (so we can protect ourselves from aggressive investors who might threaten the culture). If we cannot raise capital through typical means, then you will find consulting clients, for me and other programmers we employ, sufficient to fund the firm until we are profitable. (I’m a top-notch functional programmer with machine learning experience; it shouldn’t be hard for you to do that. You’d have to really fucking suck at sales not to be able to sell me, and when we start, all the programmers we hire are going to be roughly as good as I am, if not better.)

Remember: we have about two weeks (starting today) to decide if this can happen. If not, I get a day job and (because I’m sick of “job hopping”) I am going to make a real effort to stay there for 4+ years, meaning that this window closes.

Co-founder #2: top-notch front-end programmer and designer.

You’re a top-notch front-end programmer, fluent in technologies like JavaScript/ClojureScript and HTML5. You’re a designer and teacher who doesn’t consider his or her work done until it’s easy to use. We’ll probably spend a large percentage of the first 6 months teaching each other so that I have a passing competency at what you do, and vice versa. Ultimately, we’re both going to be full-stack. But for the short-term, I need someone who’s already “plug-and-play” when it comes to front-end design work. We will be on terms of equality but your title (for external reference) will be Chief Design Officer.

Co-founder #3: that’s me. 

Who in the eff am I? In 2003, I invented a kick-ass card game called Ambition. In 2004, I earned an Honorable Mention (58 out of 120 points; about 65th place nationally) on the Putnam exam. I was a data scientist “before it was cool” and have been using functional programming languages and working on machine learning projects since about 2006. I’ve used Clojure, which is what we’ll be building our back-end in, off and on since 2008, when it was brand-spanking-new. I run a blog (this one) that gets about 2,000 hits per day, with almost all of that in the technology community.

I will work on the back-end programming, the machine learning research, and also the cultural infrastructure that will make us one of the best companies in the world circa 2018. For external purposes, I will be Chief Technology Officer and Cultural Director. However, I intend to cede the CTO role (remember that titles are only of external importance) in time and take a Chief Research Officer role.

Okay, let’s go.

If you want to talk, now’s the time. Email me at michael.o.church at Google’s email service, and we’ll talk. Please tell me which of the two roles above you believe you’d be able to fill, and I’ll get back to you within a couple of hours, or the next morning if you write at night. I’m also generally available for phone conversations between 6:30 and 22:00 EDT.

The shodan programmer

The belt-color meritocracy

“Nothing under the sun is greater than education. By educating one person and sending him into the society of his generation, we make a contribution extending a hundred generations to come.” — Dr. Kano Jigoro, founder of judo.

Colored belts, in martial arts, are a relatively modern tradition, having begun in the late 19th century. It started informally, with the practice in which the teacher (sensei) would wear a black sash in contrast against the white uniform (gi) in order to identify himself. This was later formalized by Dr. Kano with a series of ranks, and by replacing the black sash (in addition to a white belt, holding the gi together) with a black belt. Beginners were assigned descending kyu ranks (traditionally, 6th to 1st) while advanced ranks were dan (from 1st up to 10th). At a dan rank, you earned the right to wear a black belt that would identify you, anywhere in the world, as a qualified teacher of the art. Contrary to popular opinion, a black belt doesn’t mean that you’ve fully mastered the sport. Shodan is taken, roughly, to mean “beginning master”. It means that, after years of work and training, you’ve arrived. There’s still a lot left to learn.

Over time, intermediate belt colors between white and black were introduced. Brown belts began to signify nearness to black-belt level mastery, and green belts signified strong progress. Later, an upper-division white belt began to be recognized with a yellow belt, while upper-division green belts were recognized with blue or purple. While it’s far from standard, there seems to be a general understanding of belt colors, approximately, as follows:

  • White: beginner.
  • Yellow: beginner, upper division.
  • Green: intermediate.
  • Purple: intermediate, upper division.
  • Brown: advanced. Qualified to be senpai, roughly translated as “highly senior student”.
  • Black: expert. Qualified to be sensei, or teacher.

Are these colored belts, and ranks, good for martial arts? There’s a lot of debate about them. Please note that martial arts are truly considered to be arts, in which knowledge and perfection of practice (rather than mere superiority of force) are core values. An 8th dan in judo doesn’t mean you’re the most vicious fighter out there (since you’re usually in your 60s when you get it; you are, while still formidable, probably not winning Olympic competitions) because that’s not the point. These belts qualify you as a teacher, not only as a fighter. At that level, knowledge, dedication and service to the community are the guidelines of promotion.

Now, back to our regularly scheduled programming (pun intended)

Would colored belts (perhaps as a pure abstraction) make sense for programming? The idea seems nutty. How could we possibly define a rank system for ourselves as software engineers? I don’t know. I consider myself a 1.8-ish ikkyu (1-kyu; brown belt) at my current level of programmer development. At a typical pace, it takes 4-6 years to go from 1.8 to 2.0 (shodan); I’d like to do it in the next two or three. But we’ll see. Is there a scalable and universally applicable metric for programmer expertise assessment? I don’t know. 

To recap the 0.0-to-3.0 scale that I developed for assessing programmers, let me state the most important points:

  • Level 1 represents additive contributions that produce some front-line business value, while level-2 contributions are multiplicative and infrastructural. Level-3 contributions are global multipliers, or multiplicative over multipliers. Lisps, for example, are languages designed to gift the “mere” programmer with full access to multiplicative power. The Lispier languages are radically powerful, to the point that corporate managers dread them. Level-2 programmers, however, love Lisps and languages like Haskell; level-3 programmers create them.
  • X.0 represents 95% competence (the corporate standard for “manager doesn’t need to worry”) at level X. In other words, a 1.0 programmer will be able to complete 95% of additive tasks laid before him. The going assumption is that reliability is a logistic “S-curve” where a person is 5% competent on tasks 1.0 levels higher, 50% at 0.5 above, and 95% at-level (a rough sketch of that curve follows this list). So a 1.8 engineer like me is going to be about 85% competent at Level-2 work, meaning that I’d probably do a good job overall but you’d want light supervision (design review, stability analysis) if you were betting a company on my work.
  • 1.0 is the threshold for typical corporate employability, and 2.0 is what we call a “10x programmer”; but the truth is that the actual difference in value creation is highly variable: 20x to 100x on green-field development, 3x to 5x in an accommodating corporate environment such as Google, and almost no gain in a less accommodating one.
  • About 62% of self-described professional software engineers are above 1.0. Only about 1 percent exceed 2.0, which typically requires 10-20 years of high-quality experience. The median is only 1.1, and 1.4 is the 85th percentile.
  • At least in part, the limiting factor that keeps most software engineers mediocre is the extreme rarity of high-quality work experience. Engineers between 1.5 and 1.9 are manager-equivalent in terms of their potential for impact, and 2.0+ are executive-equivalent (they can make or break a company). Unfortunately, our tendency toward multiplicative contribution leads us into direct conflict with “real” managers, who consider multiplicative effects their “turf”.
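
Since I keep leaning on that S-curve, here’s a rough Clojure sketch of one logistic curve consistent with the three anchor points above (95% at-level, 50% at +0.5, 5% at +1.0). It’s an illustration of the shape, not a calibrated model; the steepness constant is just whatever makes the anchors line up.

    (defn competence
      "Estimated probability of completing a task, given engineer level and task level."
      [engineer-level task-level]
      (let [d (- task-level engineer-level)     ; how far above the engineer the task sits
            k (* 2 (Math/log 19))]              ; steepness chosen to hit the three anchors
        (/ 1.0 (+ 1.0 (Math/exp (* k (- d 0.5)))))))

    (competence 1.8 2.0)  ;=> ~0.85, matching the "85% competent at Level-2 work" claim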

Programming– like a martial art or the board game Go, both being uncommonly introspective on the measurement of skill and progress– is a field in which there’s a vast spectrum of skill. 2.0 is a clear candidate for shodan (1st dan). What does shodan mean? It means you’re excellent, and a beginner. You’re a beginner at being excellent. You’re now also, typically, a teacher, but that doesn’t mean you stop learning. In fact, while you can’t formally admit to this too often (lest they get cocky) you often learn as much from your students as they do from you. Multiplicative (level 2) programming contributions are fundamentally about teaching. When you build a Lisp macro or DSL that teaches people how to think properly about (and therefore solve) a problem, you are a teacher. If you don’t see it this way, you just don’t get the point of programming. It’s about instructing computers while teaching humans how the systems work.

In fact, I think there is a rough correlation between the 0.0 to 3.0 programmer competence scale and appropriate dan/kyu ranks, like so (a small lookup sketch follows the list):

  • 0.0 to 0.4: 8th kyu. Just getting started. Still needs help over minor compilation errors. Can’t do much without supervision.
  • 0.5 to 0.7: 7th kyu. Understands the fundamental ideas behind programming, but still takes a lot of time to implement them.
  • 0.8 to 0.9: 6th kyu. Reaching “professional-grade” competence but only viable in very junior roles with supervision. Typical for an average CS graduate.
  • 1.0 to 1.1: 5th kyu. Genuine “white belt”. Starting to understand engineering rather than programming alone. Knows about production stability, maintenance, and code quality concerns. Can write 500+ line programs without supervision.
  • 1.2 to 1.3: 4th kyu. Solidly good at additive programming tasks, and can learn whatever is needed to do most jobs, but not yet showing leadership or design sense. Capable but rarely efficient without superior leadership.
  • 1.4 to 1.5: 3rd kyu. Developing a mature understanding of computer science, aesthetics, programming and engineering concerns, and the trade-offs involved in each. May or may not have come into functional programming (whose superiority depends on the domain; it is not, in high-performance domains, yet practical) but has a nuanced opinion on when it is appropriate and when not.
  • 1.6 to 1.7: 2nd kyu. Shows consistent technical leadership. Given light supervision and permission to fail, can make multiplier-level contributions of high quality. An asset to pretty much any engineering organization, except for those that inhibit excellence (e.g. corporate rank cultures that enforce subordinacy and disempower engineers by design).
  • 1.8 to 1.9: 1st kyu. Eminently capable. Spends most of his time on multiplier-type contributions and performs them well. Can be given a role equivalent to VP/Engineering in impact and will do it well.
  • 2.0 to 2.1: 1st dan. She is consistently building high-quality assets and teaching others how to use them. These are transformative software engineers who don’t only make other engineers more productive (simple multiplierism) but actually make them better. Hire one, give her autonomy, and she will “10x” your whole company. Can be given a CTO-equivalent role.
  • 2.2 to 2.3+: Higher dan ranks. Having not attained them, I can’t accurately describe them. I would estimate Rich Hickey as being at least a 2.6 for Clojure, as he built one of the best language communities out there, creating a beautiful language on top of an ugly but important/powerful ecosystem (Java), and for the shockingly high code quality of the product. (If you look into the guts of Clojure, you will forget to hate Java. That’s how good the code is!) However, I’m too far away from these levels (as of now) to have a clear vision of how to define them or what they look like.
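
To make the mapping concrete, here’s a tiny Clojure lookup over the cutoffs listed above. The cutoffs are just the ones I wrote, not any kind of standard, and the labels collapse the higher dan ranks together.

    (defn belt-rank
      "Rough kyu/dan label for a score on the 0.0-3.0 scale, per the list above."
      [score]
      (condp <= score
        2.2 "2nd dan or higher"
        2.0 "1st dan (shodan)"
        1.8 "1st kyu"
        1.6 "2nd kyu"
        1.4 "3rd kyu"
        1.2 "4th kyu"
        1.0 "5th kyu"
        0.8 "6th kyu"
        0.5 "7th kyu"
        "8th kyu"))

    (belt-rank 1.8)  ;=> "1st kyu"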

Is formal recognition of programmer achievement through formalized ranks and colored belts necessary? Is it a good idea? Should we build up the infrastructure that can genuinely assess whether someone’s a “green belt engineer”, and direct that person toward purple, brown, and black? I used to think that this was a bad idea. Why? Well, to be blunt about it, I fucking hate the shit out of resume culture, and the reason I fucking hate it is that it’s an attempt to collate job titles, prestige of institutions, recommendations from credible people, and dates of employment into a distributed workplace social status that simply has no fucking right to exist. Personally, I don’t lie on my resume. While I have the career of a 26-year-old at almost 30 (thanks to panic disorder, bad startup choices, and a downright evil manager when I was at Google) I feel like I still have more to lose by lying than to gain. So I don’t. But I have no moral qualms about subverting that system and I encourage other people, in dire circumstances, to engage in “creative career repair” without hesitation. Now, job fraud (feigning a competency one does not have) is unacceptable, unethical, and generally considered to be illegal (it is fraud). That’s different, and it’s not what I’m talking about. Social status inflation, such as “playing with dates” to conceal unemployment, or improving a title, or even having a peer pose as manager during a reference check? Fair game, bitches. I basically consider the prestige-title-references-and-dates attempt to create a distributed workplace social status to be morally wrong, extortionate (insofar as it gives the manager the power to continue to fuck up a subordinate’s life even after they separate) and just plain fucking evil. Subverting it, diluting its credibility, and outright counterfeiting in the effort to destroy it: all of these are, for lack of a better word, fucking awesome.

So I am very cynical about anything that might be used to create a distributed social status, because the idea just disgusts me on a visceral level. Ranking programmers (which is inherently subjective, no matter how good we are at the assessment) seems wrong to me. I have a natural aversion to the concept. I also just don’t want to do the work. I’d rather learn to program at a 2.0+ level, and then go off and do it, than spend years trying to figure out how to assess individuals in a scalable and fair way. Yeah, there might be a machine learning problem in there that I could enjoy; but ultimately, the hero who solves that problem is going to be focused mostly on people stuff. Yet, I am starting to think that there is no alternative but to create an organization-independent ranking system for software engineers. Why? If we don’t rank ourselves in a smart way, then business assholes will step in and rank us anyway, and they’ll do a far shittier job of it. We know this to be true. We can’t deny it. We see it in corporate jobs on a daily basis.

A typical businessman can’t tell the difference between a 2.0 engineer and a 1.2 who’s great at selling his ideas. We tend to be angry at managers over this fact, and over the matter of what is supposed to be a meritocracy (the software industry) being one of the most politicized professional environments on earth; but when we denigrate them for their inability to understand what we do, we’re the ones being assholes. They police and measure us because we can’t police and measure ourselves.

So this may be a problem that we just need to solve. How does one get a black belt in programming? Most professional accreditations are based on churning out commodity professionals. We can’t take that approach, because under the best conditions it takes a decade to become a black belt/2.0+, and some people don’t even have the talent. This is a very hard problem, and I’m going to punt on it for now.

Brawlers and Expert Experts

Let’s peer, for a little while, into why Corporate Programming sucks so much. As far as I’m concerned, there are two categories of degeneracy that merit special attention: Brawlers and Expert Experts.

First I will focus on the Brawlers (also known as “rock stars” or “ninjas”). They write hideous code, and they brag about their long hours and their ability to program fast. There’s no art in what they do. They have only a superficial comprehension of the craft. They can’t be bothered to teach others what they are doing, and don’t have enough insight to be passable at it anyway. What they bring is a superhuman dedication to showing off, slogging through painful tasks, and kludging their way to something that works just enough to support a demo. They have no patience for the martial art of programming, and fight using brute strength.

Brawlers tend, in fact, to be a cut above the typical “5:01” corporate programmers. Combine that with their evident will to be alpha males and you get something that looks like a great programmer to the stereotypical business douche. Brawlers tend to burn themselves out by 30, they’re almost always men, and they share the “deadlines is deadlines” mentality of over-eager 22-year-old investment banking “analysts”. There is no art in what they do, and what they build is brittle, but they can do it fast and they’re impressive to people who don’t understand programming.

Let’s think of corporate competition as a fight that lasts for five seconds, because power destroys a person’s attention span and most executives are like toddlers in that regard. In a three-minute fight, the judoka would defeat the brawler; but, in a 5-second fight, the brawler just looks more impressive. He’s going all out, posturing and spitting and throwing feint punches while the judoka seems passive and conservative with his energy (because he is conserving it, until the brawler makes a mistake, which won’t take long). A good brawler can demolish an untrained fighter in 5 seconds, but the judoka will hold his own for much longer, and the brawler will tire out.

With the beanbag coming in after 5 seconds, no one really lands a blow, as the judoka has avoided getting hit but the brawler hasn’t given enough of an opening for the judoka to execute a throw. Without a conclusive win or loss, victory is assessed by the people in chairs. However, the judges (businessmen, not programmers) don’t have a clue what the fuck they just watched, so they award the match to the brawler who “threw some really good punches” even though he failed to connect and would have been thrown to the ground had the fight lasted 5 seconds more.

Where are Brawlers on the engineer competence scale? It’s hard to say. In terms of exposure and knowledge they can be higher, but they tend to put so much of their energy and time into fights for dominance that the quality of their work is quite low: 1.0 at best. In terms of impressions, though, they seem to be “smart and gets things done” to their superiors. Managers tend to like Brawlers because of their brute-force dedication and unyielding willingness to shift blame, take credit, and kiss ass. Ultimately, the Brawler is the one who no longer wishes to be a programmer and wants to become more like an old-style “do as I say” manager who uses intimidation and extortion to get what he wants.

Brawlers are a real problem in VC-istan. If you don’t have a genuine 1.5+ engineer running your technical organization, they will often end up with all the power. The good news about these bastards (Brawlers) is that they burn themselves out. Unless they can rapidly cross the Effort Thermocline (the point at which jobs become easier and less accountable with increasing rank) by age 30, they lose the ability to put a coherent sentence together, and they just aren’t as good at fighting as they were in their prime.

The second category of toxicity is more long-lived. These are the people called Expert Beginners by Erik Dietrich, but I prefer to call them Expert Experts (“beginner” has too many positive and virtuous connotations, if one either takes a Zen approach, or notes that shodan means “beginner”). No, they’re not actual experts on anything aside from the social role of being an Expert. That’s part of the problem. Mediocrity wants to be something– an expert, a manager, a credible person. Excellence wants to do things– to create, to build, and to improve running operations.

The colored-belt metaphor doesn’t apply well to Brawlers, because even a 1.1 white belt could defeat a Brawler (in terms of doing superior work) were it not for the incompetence of the judges (non-technical businessmen) and the short duration of the fight. That’s more of an issue of attitude than capability; I’ve met some VC-istani Brawlers who would be capable of programming at a 1.4 level if they had the patience and actually cared about the quality of their work. It’s unclear what belt color applies; what is more clear is that they take their belts off because they don’t care.

Expert Experts, however, have a distinct level of competence that they reach, and rarely surpass, and it’s right around the 1.2 level: good enough to retain employment in software, not yet good enough to jeopardize it. They’re career yellow belts at 1.2-1.3. See, the 1.4-1.5 green belts have started exposing themselves to hard-to-master concepts like functional programming, concurrency and parallelism, code maintainability, and machine learning. These are hard; you can be 2.0+ and you’ll still have to do a lot of work to get any good at them. So, the green belts and higher tend to know how little they know. White belts similarly know that they’re beginners, but corporate programming tends to create an environment where yellow belts can perceive themselves to be masters of the craft.

Of course, there’s nothing wrong with being a yellow belt. I was a novice, then a white belt, then yellow and then green, at some point. (I hadn’t invented this metaphor yet, but you know what I mean.) The problem is when people get that yellow belt and assume they’re done. They start calling themselves expert early on and stop learning or questioning themselves; so after a 20-year career, they have 18 years of experience in Being Experts! Worse yet, career yellow belts are so resistant to change that they never get new yellow belts, and time is not flattering to bright colors, so their belts tend to get a bit worn and dirty. Soot accumulates and they mistake it (as their non-technical superiors do, too) for a merit badge. “See! It’s dark-gray in spots! This must be what people mean when they talk about black belts!”

There’s a certain environment that fosters Expert Experts. People tend toward polarization of opinion surrounding IDEs but the truth is that they’re just tools. IDEs don’t kill code; people kill code. The evil is Corporate Programming. It’s not Java or .NET, but what I once called “Java Shop Politics”, and if I were to write that essay now, I’d call it something else, since the evil is large, monolithic software and not a specific programming language. Effectively, it’s what happens when managers get together and decide that (a) programmers can’t be trusted with multiplicative work, so the goal becomes to build a corporate environment tailored toward mediocre adders (1.0 to 1.3) but with no use for superior skill, and (b) because there’s no use for 1.4+ (green-belt and higher) levels of competence, it is useless to train people up to it; in fact, those who show it risk rejection because they are foreign. (Corporate environments don’t intentionally reject 1.4+ engineers, of course, but those tend to be the first targets of Brawlers.) It becomes a world in which software projects are large and staffed by gigantic teams of mediocre developers taking direct orders with low autonomy. It generates sloppy spaghetti code that would be unaffordable in its time cost were it not for the fact that no one is expected, by that point, to get anything done anyway.

Ultimately, someone still has to make architectural decisions, and that’s where the Expert Experts come in. The typical corporate environment is so stifling that 1.4+ engineers leave before they can accumulate the credibility and seniority that would enable them to make decisions. This leaves the Expert Experts to reign over the white-belted novices. “See this yellow belt? This means that I am the architect! I’ve got brown-gray ketchup stains on this thing that are older than you!”

Connecting the Dots

It goes without saying that there are very few shodan-level programmers. I’d be surprised if there are more than 15,000 of them in the United States. Why? What makes advancement to that level so rare? Don’t get me wrong: it takes a lot of talent, but it doesn’t take so much talent as to exclude 99.995% of the population. Partly, it’s the scarcity of high-quality work. In our War on Stupid against the mediocrity of corporate programming, we often find that Stupid has taken a lot of territory. When Stupid wins, multiplicative engineering contributions become impossible, which means that everyone is siloized into get-it-done commodity work explicitly blessed by management, and everything else gets thrown out.

Brawlers, in their own toxic way, rebel against this mediocrity, because they recognize it as a losing arrangement they don’t want; if they continue as average programmers in such an environment, they’ll have mediocre compensation and social status. They want to be alpha males. (They’re almost always men.) Unfortunately, they combat it by taking an approach that involves externalized costs that are catastrophic in the long term. Yes, they work 90 hours per week and generate lots of code, and they quickly convince their bosses that they’re “indispensable”. Superficially, they seem to be outperforming their rivals– even the 1.4+ engineers who are taking their time to do things right.

Unfortunately, Brawlers tend to be the best programmers when it comes to corporate competition, even though their work is shitty. They’re usually promoted away before the externalized costs induced by their own sloppy practices can catch up with them. Over time, they get more and more architectural and multiplier-level responsibilities (at which they fail) and, at some point, they make the leap into real management, about which they complain-brag (“I don’t get to write any code anymore; I’m always in meetings with investors!”) while they secretly prefer it that way. The nice thing, for these sociopaths, about technology’s opacity in quality is that it puts the Effort Thermocline quite low in the people-management tier.

Managers in a large company, however, end up dealing with the legacy of the Brawlers and, even though blame has been shifted away from those who deserve it, they get a sense that engineers have “too much freedom”. It’s not sloppy practices that damaged the infrastructure; it’s engineer freedom in the abstract that did it. Alien technologies (often superior to corporate best practices) often get smeared, and so do branch offices. “The Boston office just had to go and fucking use Clojure. Does that even have IDE support?”

This is where Expert Experts come in. Unlike Brawlers, they aren’t inherently contemptible people– most Expert Experts are good people weakened by corporate mediocrity– but they’re expert at being mediocre. They’ve been yellow belts for decades and just know that green-belt levels of achievement aren’t possible. They’re professional naysayers. They’re actually pretty effective at defusing Brawlers, and that’s the scary bit. Their principled mediocrity and obstructionism (“I am the expert here, and I say it can’t be done”) actually serves a purpose!

Both Brawlers and Expert Experts are an attempt at managerial arrogation over a field (computer programming) that is utterly opaque to non-technical managers. Brawlers are the tough-culture variety who attempt to establish themselves as top performers by externalizing costs to the future and “the maintenance team” (which they intend never to be on). Expert Experts are their rank-culture counterparts who dress their mediocrity and lack of curiosity up as principled risk-aversion. So, we now understand how they differ and what their connection is.

Solve It!

I did not intend to do so when I started this essay, in which I only wanted to focus on programming, but I’ve actually come upon (at least) a better name for the solution to the MacLeod Organizational Problem: shodan culture. It involves the best of the guild and self-executive cultures. Soon, I’ll get to exactly what that means, and how it should work.

Status checks and the stink-when-breaking problem

Most software engineers, being rational people and averse to legacy, hate management– a set of legacy processes that were necessary over the concave, commodity work that existed for 200 years, but counterproductive over the convex stuff we do as software engineers. No, we don’t hate managers. They’re just people. They have this job we dislike because it seems to require them to get in our way. We only covet their jobs insofar as we wish we had more control over tools and the division of labor– we have this impractical dream of being managers-who-code (i.e. exceptionalism) meaning that we’re full-time programmers who have managerial control, even though the real world doesn’t work that way and most actual managers find their time spent mostly in meetings– but mostly, we think what they do is unnecessary. That’s not 100% true. It’s about 97% true. Resolving personnel issues, mentoring new hires, building teams and looking out for the health of the group– all of these are important tasks. The “do what I say or I’ll fire you” extortionist management that exists when bosses prioritize “managing up” and stop giving a shit about genuine leadership and motivation is the problem. I’ve said enough on that to make my opinions well-known.

A typical inefficiency that we perceive, as engineers, is the fact that most of the work that “needs to be done” doesn’t. It’s fourth-quadrant work that exists only because people lack the ability to refuse it. We’re rationalists and careerists, so we doubly hate this. We realize that it’s inefficient, but, more to our dislike, that it’s individually bad for us to work on low-yield projects. Why is there so much nonsensical dues-paying? And why does Corporate America feel more like the Stanford Prison Experiment than a group of people working together to achieve a common goal? I believe I can answer that. We need to understand what “business” is to non-rational people. It’s emotional. Judgments are made quickly, then back-rationalized. Decisions are made without data, or on sparse and poorly audited data, which is why an error in an Excel model can be such a big deal. So let me get to the one question that non-technical “businessmen” ask about people when evaluating them. It is…

Do they stink when they break?

Some people drop to 0.9 (that is, they lose 10% of peak productivity) under managerial adversity. (A few might go up, with a 1-in-1-million effective-asshole manager like Steve Jobs. So rare I’m ignoring it.) Some drop to zero. Some go to -10 as they cause morale problems. Managers want to see what happens ahead of time so they can promote the first category, mark the second as non-advancers, and fire the third. People are sorted, then, based on the rightward side of their decline curve. The reason this is fucked-up and morally wrong is that, if management is at all decent, that extreme territory is never seen. So a lot of what management tries to do is probe people for those tendencies in little ways that don’t disrupt operations.

Rationalists like engineers don’t like this. We look at the affair and say, “Why the fuck are you trying to break people in the first place? That’s fucking abusive and wrong.” We try to build systems and processes that fail harmlessly, do their jobs as well as possible, and certainly never break people. We like things that “just work”. Some of us are glory whores when it comes to macroscopics and “launches”, and we have to be that way for career reasons but, on the whole, the best endorsement of a product is for people to use it and not even know they are using it, because it works so damn well. When we look for weak points in a system, we look for technical issues and try to solve those. Non-rationalists don’t think that way. They look for weak people and, when there are none, they make one up to justify their continued high position (and as an easy blame eater).

When you deal with a non-rationalist, the first thing you need to know about him is that he’s trying to figure out how bad it smells when you break. If you’re loaded up with 80 hours per week of undesirable work for two months on end, will you bear it? Or will you quit? Underperform? Make mistakes? Whine and cause morale problems? Grind to a halt, get fired, and demand severance not to sue? Sabotage systems? Non-rationalists want to find people’s failure modes; engineers want to build systems that are tolerant of human failure.

I think that this stink-when-breaking matter is one of the fundamentals of negotiation, as well. When you stink, you lose all negotiating power. You whine, you make messes, you get people pissed off, and you end up in a position of weakness. You prove that you’ve been broken. You might have worse kinds of stink up your sleeve, but you’ve put your managerial adversary into damage-control mode and lost your element of surprise. The best way to extract value from a non-rationalist is, instead, to leave him not knowing whether you’ve broken yet, which means you have to minimize your smell (of defeat, resentment, hatred, or adversity). Are you intact, or is your stink very mild? You just can’t stink, if you want to negotiate from a position of strength, no matter how badly you’ve been broken. Breaks will heal; stink is permanent.

Okay, we now have a frame for understanding the non-rationalists that we answer to, or that the people we answer to answer to. That’s a start. Now I’m going to talk about the bane of programmers: impromptu status checks. I’m not talking about necessary communication, as would occur during a production crisis, nor about planned standups (which are painful when poorly run, but at least scheduled). I’m talking about the unplanned and involuntary games of 52-Card Pickup that managers inflict on us when they’re bored. Mention status pings to a software engineer, and it’s like you just uttered a racial slur. We fucking hate hate hate hate hate those with a +8 modifier to Anger. I call it “email syndrome”. Email can be checked 74 times per day with no degradation to performance. Making the same assumption about a person is a terrible idea. For an engineer, one impromptu status ping per day is an order of magnitude too much (during normal circumstances). If you really need daily status information (and you shouldn’t; you should trust people to do their jobs) then run a scheduled standup for which people can prepare. It’s just disrespectful to put people on trial, constantly, without their having the right to prepare.

I used to take a Hanlon-esque approach to status pings. (Hanlon’s Razor is “never attribute to malice what can be explained by benign incompetence.”) Now, my attitude is more cynical. Why? Because the first thing you learn in Technology Management 101 is that engineers can’t fucking stand impromptu status pings. Our feeling on this is simple: if you really don’t trust us with hours and days of our own fucking time, then don’t hire us. It’s that simple and, not only that, but most technology managers know that programmers hate these utterly pointless status checks. If we wanted to be interrupted to feed babies, we’d have our own.

So what’s the real point of an impromptu status check? You know that Carlin-esque phenomenon where you look at your watch but don’t read it, then have to look again because you didn’t catch what time it was? That’s exactly how managers’ impromptu status checks work. It’s not actually about project status. No one has any clue how long things “should” take anyway; on that, the world is split between people who know they don’t know and people who don’t know that they don’t know. It’s about making a manager feel secure. You have two jobs. One is to make the manager feel important; the whole reason he is interrupting you is to establish that he can, and your job is that of an actor. The other is to make him feel like you are not slacking, and that aspect of it (lack of trust) is goddamn insulting. Again, I’m not talking about a production crisis that might actually merit such a fast audit-cycle. I’m talking about 2-20 status pings per week during peace time. It’s painful and wrong and the context-switches drop productivity to zero.

Why don’t managers intuitively understand this? I’m going to paint management in the best possible light. They’re salesmen. That’s not a pejorative. Far from it. That’s a valuable function, especially because while trust sparsity is horrible as a matter of policy, it tends to be somewhat inevitable, and salesmen are the best people at breaking down trust sparsity. They sell their superiors on their ability to deliver a project. They (if they’re good) sell their reports on the value of the project to their careers. All day, they’re selling someone on something. So they’re in Sales Mode constantly, and that is their flow. For a manager, a status ping is a welcome break: a chance to have someone else do the selling (i.e. prove that he hasn’t wasted the previous 3.37 hours of working time) to them. For the programmer, though, it’s a sudden and unexpected jerk into Sales Mode, which is a different state of consciousness from Building Mode. Both kinds of flow have their charms and can be a lot of fun, but mixing the two is both ineffective and miserable, especially when not by one’s choice.

If these status pings are so horrible, then why are there so many programming environments in which they’re common? Well, most managers are non-rationalists, and non-rationalists are all about emotion, perception, and superficial loyalty. Again, remember that question, and that test: does this guy stink when he breaks? Is his attitude (that he’s feeding an impulsive toddler) evident? Does his face show a look of enmity or defeat? Does he make eye contact with the boss or with the computer screen? What’s his posture? What’s his tone of voice?

Of course, these status checks have negative value as an assessment of project progress, because the signal is going to be the opposite of the truth. Employees pass these impromptu status checks when they’re prepared, and that means they aren’t doing any real work. They fail (because they’re pissed off about the interruption) when they’re actually getting something accomplished. That, however, is not what they’re really about. The manager will forget everything you told him, 15 minutes later. (That’s part of why status checks are so fucking infuriating; they’re about social status, not project progress.) They’re about not stinking when you break. It doesn’t matter what you got done in the past few hours, but you need to show an ability to communicate progress without getting pissed off about the indignity of a tight audit cycle and hope, for fuck’s sake, that either he’ll start to trust you more or that you’ll be able to get a different manager.

What’s the solution? Well, there are two kinds of status checks. The first kind is the simple kind that doesn’t involve follow-on questions. I think the solution is obvious. Write down a sentence about what you’ve accomplished, every 30 minutes or so. Maybe this could be integrated with the Pomodoro technique. Now you have a story that can be read by rote, and you don’t have to deal with the social and emotional overhead of a sudden switch into Sales Mode. You’re prepared. That will work as long as there aren’t follow-on questions; if there are, then it’s the second kind of status check. The second kind is the cavity search that comes from a manager who just has nothing else to do. I think the only solution in that case (unless it’s rare) is to change companies.
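
If you want the lowest-friction version of that habit, here’s a minimal Clojure sketch: append a timestamped one-liner to a plain-text log at the end of each half hour or Pomodoro. The file name and function name are made up; the point is just that reading the log back gives you the rote story.

    (require '[clojure.java.io :as io])

    (defn log-status!
      "Append a timestamped one-line status note to a plain-text log."
      [note]
      (with-open [w (io/writer "status.log" :append true)]
        (.write w (str (java.time.LocalDateTime/now) "  " note "\n"))))

    (log-status! "Finished the parser refactor; starting on the failing integration test.")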

Why we must shut down the Corporate System

I’ve come to the conclusion that the Corporate System deserves to be shut down. What is the Corporate System? No, it’s not the same thing as “corporations”. Corporations are just legal entities. In fact, they’re good things. Limited liability in good-faith business failure is, if not quite a right, a privilege of extreme value to society. The Corporate System is the authority structure that grew up within it: the internal credibility markets of businesses, the attempt to create distributed social status through resumes and references, and all the other nonsense designed to scare the hell out of people so they compliantly execute the ideas handed to them from on high. Most people experience it as managerial authority while society, as a whole, gets the butt-end of externalized costs (e.g. pollution). These might not seem connected, but they are intimately linked. Externalized costs are permitted to exist in a world where people are terrified of long-term career ramifications at the hands of vindictive executives. 

It’s not business or capitalism that we need to shut down. Far from it. As a syncretist at heart, I don’t believe that we can build a decent society while taking innovations only from Column A (capitalism) or Column B (socialism). We need both. Nor do we need to get rid of corporations as the legal abstraction. Again, limited liability under good-faith business failure is one of the best ideas this society has come up with. Rather, the enemy is the network of extortions, lies maintained due to careerist fear, and social exclusion that enables a small set of well-connected parasites to rob investors and employees both, as well as society at large via externalized costs, at an obscene scale.

At any rate, outlawing capitalistic activity (in addition to being morally wrong) just doesn’t work. The problem with capitalism and communism both is that, when rigidly enforced on people, they tend to generate a shitty version of the other. Corporate capitalism is, for the 99%, the worst of both systems. Look at air travel: capitalistic price volatility, but Soviet-style service. For the well-connected 1%, it’s the best of both systems. What we really need is a hybrid system dedicated toward providing the best of each for everyone, but I’ve gone on for megawords about that.

Still, the Corporate System is an expensive, inefficient, authoritarian, self-contradicting dinosaur. It’s time to kill it. Let me establish why we should tear it down. We ask one simple question: why do we allow private business to exist?

Answers are:

  1. People have a right to trade services and resources with the market, so long as they aren’t hurting others by doing so. I think this is a pretty straightforward moral claim. If I can do something that confers only benefit to the stakeholders, I shouldn’t have to ask anyone for permission.
  2. Central authorities can’t reliably outguess markets. Pricing and exchange rates are a computationally hard problem even in theory and, in practice, harder still, because the information informing fair levels is (a) scarce, and (b) constantly changing. Markets aren’t perfect, but they have a much greater information surface area than a central bureaucracy.
  3. If people are deprived of the right to interact with the market independently, the government owns them. A political authority that outlaws private business is making itself The Boss, and making fair competition impossible. (Competition will still exist, but it will involve personal risk, because it’s illegal.)

Private business has a lot of problems. It generates economic inequality. It tends toward organizational sociopathy (although I believe that problem can be solved, by taking a K-selective approach to process and value reproduction where quality trumps rapid growth). Some regulatory oversight is required. It can’t solve all of our economic problems; we need a socialist infrastructure that protects the unlucky. Yet, for all its flaws, the capitalistic market economy is still a wonderful, necessary thing. (I say this as a 3-sigma leftist, because it’s true.) It funds creation because governments can’t. It’s self-correcting. In many ways, it works– although not perfectly.

The Corporate System, however, is something else. It provides none of the benefits of private business that mandate our acceptance of it. Rather, it’s an attempt to build a corrupt blatnoy bureaucracy within capitalism. It occurs when the entrenched grow tired and no longer want to live in a world where ambitious up-and-comers can compete with them. It’s not entrepreneurial. In fact, it’s the deployment of managerial authority (within a company or, in the VC-istan case, via the reputation economy of the most credible investors) to shut down true innovation.

This is the fundamental reason why Corporate Authority is such a pile of self-contradicting, hypocritical dogshit. Government authority (e.g. to enforce speed limits) is necessary in small doses. There are problems that require authority to solve them. Knowing the abuses that occur when such authority is unaccountable, we demand the right to fire the elected representatives at the top of that structure. However, capitalism, properly practiced, is not about authority. They don’t belong in the same room together. Rather, we allow capitalism to exist (in spite of its flaws, such as inequality) because we know authority is inadequate to solve all problems, and has no right to go into most spheres of human endeavor. Capitalism, done right, is about the removal of authority.

This is why the interfering, self-serving managerial authority that makes Corporate America such hell deserves to end. Now. It’s bad for us and it’s bad for capitalism. We must make it retreat in disgrace.

The Disentitled Generation

Anyone else up for some real rage? I can’t promise that there won’t be profanity in this post. In fact, I promise that there will be, and that it will be awesome. Let’s go.

People don’t usually talk about these things that I talk about, for fear that The Man will tear their fucking faces off if they tell the truth about previous companies and how corporate offices really run themselves, but I am fucking sick of living in fear. One can tell that I have an insubordinate streak. It’s a shame, because I am extremely good at every other fucking thing the workplace cares about except subordination; but that’s one thing I never got down, and while it’s more important (in the office context) than any other social skill, I’m too old to learn it.

Let’s talk about the reputation that my generation, the Millennials (born ca. 1982 to 2000), has for being “entitled”. This is a fun topic.

I’ve written about why so-called “job hopping” doesn’t deserve to be stigmatized. Don’t get me wrong: if someone leaves a generally good job after 9 months only because he seeks a change of scenery, then he’s a fucking idiot. If you have a good thing going, you shouldn’t seek a slightly better thing every year. Eventually, that will blow up in your face and ruin your life. Good jobs are actually kinda rare. I repeat: if you find a job that continues to enhance your career and that doesn’t make you unhappy, and you don’t stick with it for a few years, then you’re an idiot. You should stay when you find something good. A genuine mentor is rare and hard to replace. That’s not what I’m talking about here.

The problem? Most jobs aren’t good, or don’t make sense for the long term. Sometimes, the job shouldn’t exist in the first place, provides no business value, and is terminated by one side or the other, possibly amicably. Sometimes, the boss is a pathological micromanager who prevents his reports from getting anything done, or an extortionist thug who expects 100% dedication to his career goals and gives nothing in return. Sometimes, people are hired under dishonest pretenses. Hell, I’ve seen startups hire three people at the same time for the same leadership position, without each other’s knowledge of course. (In New York, that move is called “pulling a Knewton.”) Sometimes, management changes that occur shortly after a job is taken turn a good job into an awful one. This nonsense sounds very uncommon, right? No. Each of these pathologies is individually uncommon, but there are so many failure modes for an employment relationship that, taken in sum, they are common. All told, I’d say that about 40 percent of jobs manage to make it worthwhile to keep showing up after 12 months. Sometimes, the job ends. It might be a layoff for business reasons. Sometimes it’s a firing that may not even be the person’s fault. Most often, it’s just pigeonholing into low-importance, career-incoherent work, leaving the person to get the hint that she wasn’t picked for better things and to leave voluntarily. Mostly, this political injection is random noise with no correlation to personal quality. Still, I think it’s reasonable to say that 60% of new jobs fail in the first 12 months (even if many go into a “walking dead” state where termination is not a serious risk, but in which it’s still pointless and counterproductive to linger). That means about 13 percent of people (0.6 to the fourth power) are going to draw four duds for reasons that are no fault of their own. One in eight people, should they do the honest and mutually beneficial thing which is to leave a job when it becomes pointless, becomes an unemployable job hopper. Seriously, what the fuck?
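
For anyone who wants to check that arithmetic, here’s a one-line Clojure sketch under the post’s own assumption: each new job independently has a 60% chance of going bad within 12 months.

    (defn p-all-duds
      "Probability of drawing n bad jobs in a row, at a 60% failure rate per job."
      [n]
      (Math/pow 0.6 n))

    (p-all-duds 4)  ;=> ~0.13, i.e. roughly one person in eight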

So let me get one thing out there. Not only is the “job hopping” stigma outdated, it’s wrong and it’s stupid. If you still buy into the “never hire job hoppers” mentality, you should fucking stop using your company as a nursing home and instead, for the good of society, use an actual nursing home as your nursing home. I’m serious. If you really think that a person who’s had a few short-term jobs deserves to be blacklisted over it while the real corporate criminals thrive, then letting you make decisions that affect people’s lives is like letting five-year-olds fly helicopters, and you should get the fuck out of everything important before you do any more damage to people’s lives and the economy. I’m sorry, but if you cling to those old prejudices, then the future has no place for you.

It needed to be said. So I did.

The “job hopping” stigma is one rage point of mine, but let’s move to another: our reputation as an “entitled” Millennial generation. Really? Here are some of the reasons why we’re considered entitled by out-of-touch managers:

  1. We “job hop” often, tending to have 4 to 6 jobs (on average) by age 30.
  2. We expect to be treated as colleagues and proteges rather than subordinates.
  3. After our first jobs, we lose interest in “prestigious” institutions, instead taking a mercenary approach that might favor a new company, or no company. 
  4. We push for non-conventional work arrangements, such as remote work and flex-time. If we put in 8 hours of face time, we expect direct interest in our careers by management because (unlike prior generations who had no choice) we consider an eight-hour block a real sacrifice.
  5. We question authority.
  6. We expect positive feedback and treat the lack of it as a negative signal (“trophy kids”).

Does this sound entitled? I’ll grant that there’s some serious second-strike disloyalty that goes on, with a degree of severe honesty (what is “job hopping” but an honesty about the worthlessness of most work relationships?) that would have been scandalous 30 years ago, but is it entitled? That word has a certain meaning, and the answer is “no”.

To be entitled, as a pejorative rather than a matter-of-fact declaration about an actual contractual agreement, implies one of two things:

  1. to assume a social contract where none exists (i.e. to perceive entitlement falsely).
  2. to expect another party to uphold one side of an existing (genuine) social contract while failing to perform one’s own (i.e. one-sided entitlement).

Type I entitlement is expressed in unreasonable expectations of other people. One example is the “Nice Guy Syndrome”, wherein a man expects sexual access in return for what most people consider to be common courtesy. The “Nice Guy” is assuming a social contract between him and “women” that neither exists nor makes sense. Type II is the “culture of entitlement” sometimes associated with a failed welfare state, wherein generationally jobless people– who, because they have ceased looking for work, are judged to be failing their end of the social contract– continue to expect social services. These are people whose claims are rooted in a genuine social contract– the welfare state’s willingness to provide insurance for those who continually try to make themselves productive, but fail for reasons not their fault– but don’t hold up their end of the deal.

So, do either of these apply to Millennials? Let me assess each of the six charges above.

1. Millennials are “job hoppers”. There’s some truth in that one. The most talented people under 30 are not going to stick around in a job that hurts their careers. We’re happy to take orders and do the less interesting work for a little while, if management assists us in our careers, with an explicit intent to prepare us for more interesting stuff later. Failing that, we treat the job as a simple economic transaction. We’re not going to suffer a dues-paying evaluative period for four years when another company’s offering a faster track. Or, if we’re lucky, we can start our own companies and skip over the just-a-test work entirely and do things that actually matter right away. Most of us have been fired or laid off “at will” at least once, and we have no problem with this new feature (job volatility) of the economy. None of us consider lifelong employment an entitlement or right. We don’t expect long-term loyalty, nor do we give it away lightly.

2. Millennials “expect” to be treated as proteges. Not quite. Being a cosmopolitan, well-studied generation exposed to a massive array of different concepts and behaviors from all over the world, we expect very little of other people. We’ve seen so much that we realize it’s not rational to approach people with any major assumptions. The world is just too damn big and complicated to believe in global social contracts. Getting screwed doesn’t shock or disgust or hurt us. It doesn’t thwart our expectations, because we don’t really have any. We simply leave, and quickly. For us, long-term loyalty is the exception, and yes, we’re only going to stay at a job for 5 years if it continues to be challenging and beneficial to our careers. That’s not because we “expect” certain things, and we aren’t “making a statement” when we change jobs. It’s not personal or an affront or intentional “desertion”. We can do better, that’s all.

3. Millennials don’t have respect for prestige and tradition. Yes and no. We don’t start out that way. The late-2000s saw one of the most competitive college admissions environments in history. Then there’s the race to get into top graduate departments or VC-darling startups or investment banking– the last of these being the Ivy League of the corporate world. Then something happens. Around 27, people realize that that shit doesn’t matter. You can’t eat prestige, and many of the most prestigious companies are horrible places to work. Oh, and we think we’re hot shit until we get our asses handed to us by superior programmers and traders from no-name universities and learn that their educations were quite good as well. We realize that work ethic and creativity and long-term diligence and deliberate practice are the real stuff and we lose interest in slaving away for 90 hours per week just because a company has a goddamn name.

4. Many of us expect non-conventional work/life arrangements. This is true, and there’s a reason for it. What is the social contract of an exempt salaried position, under which hours are defined only by social expectation rather than by contract? As far as I can tell, there are two common models. Model A: worker produces enough work not to get fired, manager signs a check. Model B: worker puts a serious investment of self and emotional energy into the work as a genuine working relationship would involve, and management returns the favor with career support and coherence. Under either model, the 8-hour workday is obsolete. Model A tells us that, if a worker can put in a 2-hour day and stay employed, he’s holding up his end of the deal, and it’s management’s fault for not giving him interesting work that would motivate him to perform beyond the minimum. Model B expects a mutual contract of loyalty to each other’s interests, but does not specify a duration or mode of work. Model B might be held to generally support in-office work with traditional hours, for the sake of collaboration and mentoring, but that opens up a separate discussion, especially in the context of individual differences regarding when and how people work best.

5. Millennials question authority. True, and that’s a virtue. Opposing authority because it is authority is no better than being blindly (or cravenly) loyal to it, but questioning it is essential. People who are so insecure that they can’t stand to be questioned should never be put in leadership positions; they don’t have the cojones for it. I question my own ideas all the time; if you expect me to follow you, then I will question yours. It’s a sign of respect to question someone’s ideas, not a personal challenge. It’s when smart people don’t question your ideas that you should be worried; it means they’ve already decided you’re an idiot and they will ignore or undermine you. 

6. We expect positive feedback and respond negatively to a lack of acknowledgement. That’s true, but not because we believe “everyone’s a winner”. If anything, it’s the opposite. We know that most people lose at work and would prefer to play a different game when that appears likely to happen. No, it’s not about “trophies”. A trophy is a piece of plastic. We get bored unless there’s a real, hard-to-fake signal that we aren’t wasting our time. Not a plastic trophy, but management that takes our career needs seriously, and real autonomy over our direction. We know that most people, in their work lives, end up with incompetent or parasitic bosses who waste years of their time on career-incoherent wild goose chases, and we refuse to be the butt of that joke. Does this mean that we’re not content to be “average”, and that we require being on the upside of zero-sum executive favoritism to stay engaged with our work? Well, in order to have it not be that way, you need to create a currently-atypical work environment where average people don’t end up as total losers. With all the job hopping we do, we don’t care about relative measures of best or better. We want good. Make a job good and people won’t worry about what others around them are getting.

I think, with this exposition, that there’s a clear picture of the Millennial attitude. Yes, we take second-strike disloyalty to a degree that, even ten years ago, would be considered insolent, brazen, and even reckless in the face of the career damage done (even now) to the job-hoppers. We’ve grown bolder, post-2008. Quit us, and we quit. It’s not that we like changing jobs every few months– believe me, we fucking don’t. We’re looking for the symbiotic 5- or 10-year fit, as any rational person would, but we’re not going to lie to ourselves for years– conveniently paying dues on evaluative nonsense work while our bosses spend half-decades pretending to look for a real use for our underutilized talents (only to throw us out in favor of fresher, more clueless, younger versions of ourselves)– after drawing a dud.

Is the Millennial attitude exasperating for older managers, used to a higher tolerance for slack on matters of career coherency? I’m sure it is. I’m sure that the added responsibility imposed by a generation characterized by fast flight is unpleasant. It is not, however, entitled. It’s not Type I entitlement because we don’t assume the existence of a social contract that was never made. We only hold employers to what they actually promise us. If they entice us with promises of career development and interesting work, then we expect that. If they’re honest about the job’s shortcomings, we respect that, too. But we only expect the social contract that we’re explicitly given. I’d also argue that it’s not Type II entitlement because Millennials are, when given proper motivation, very hard-working and creative. We want to work. We want genuine work, not bullshit meetings to make the holder of some sinecure feel important.

What are we, if not “entitled”? We’re the opposite. We’re a disentitled generation. We never believed in the corporate paternalist social contract, and most of us are comfortable with this brave new world that has followed its demise. Yes, we’re mercenary. We respond in kind (in fact, often disproportionately) to genuine loyalty, but we’re far too damn honest to pretend we’re getting a good deal when we’re thrown into a three-year dues-paying period rendered obsolete in a world where fast advancement is possible and fast firing is probable for those who don’t advance. I’m in software, where, by age 35, one must become either a technical expert (you need a national reputation in your specialty if you want to be employable as a programmer on decent terms by that age) or an executive. As this leaves about 13 years to “make a mark”, one simply will not find people worth hiring who are willing to endure a years-long dues-paying period. Asking someone to risk 2 of those 13 years (about 15 percent of her remaining career) on dues-paying that might lead nowhere is like asking a person to throw 15 percent of her net worth into a downside-heavy investment strategy with no potential for diversification– a bad idea. Reasonable dues-paying arrangements may have existed under the old corporate social contract of cradle-to-grave institutional employment, but that’s extinct now. The “job hopper” stigma should be extinct, too, along with the early-stage dementia patients who still believe in it.

Blub vs. engineer empowerment

No, I’m not quitting the Gervais / MacLeod Series. Part 23, which will actually be the final one because I want to get back to spending my spare time on technology, is half-done. However, I am going to take a break from it to write about something else.

I’ve written about my distaste for language and framework wars, at least when held for their own sake. I’m not fading from my position on that. If you go off and tell someone that her favorite language is a U+1F4A9 because it’s (statically|dynamically) typed, then you’re just being a jerk. There are a few terrible languages out there (especially most corporate internal DSLs) but C, Python, Scala, Lisp and Haskell were all designed by very smart people and they all have their places. I’ve seen enough to know that. There isn’t one language to rule them all. Trust me.

Yet, I contend that there is a problem of Blub in our industry. What’s Blub? Well, it’s often used as an epithet for an inferior language, coined in this essay by Paul Graham. As tiring as language wars are, Blubness is real. I contend, however, that it’s not only about the language. There’s much more to Blub.

Let’s start with the original essay and use Graham’s description of Blub:

Programmers get very attached to their favorite languages, and I don’t want to hurt anyone’s feelings, so to explain this point I’m going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.

And in fact, our hypothetical Blub programmer wouldn’t use either of them. Of course he wouldn’t program in machine language. That’s what compilers are for. And as for Cobol, he doesn’t know how anyone can get anything done with it. It doesn’t even have x (Blub feature of your choice).

As long as our hypothetical Blub programmer is looking down the power continuum, he knows he’s looking down. Languages less powerful than Blub are obviously less powerful, because they’re missing some feature he’s used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn’t realize he’s looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn’t even have y.

By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. (This is probably what Eric Raymond meant about Lisp making you a better programmer.) You can’t trust the opinions of the others, because of the Blub paradox: they’re satisfied with whatever language they happen to use, because it dictates the way they think about programs.

So what is Blub? Well, some might read that description and say that it sounds like Java (has garbage collection, but not lambdas). So is Java Blub? Well, not quite. Sometimes (although rarely) Java is the right language to use. As a general-purpose language, Java is a terrible choice; but for high-performance Android development, Java’s the best. It is not James Gosling’s fault that it became the go-to language for clueless corporate managers and a tool-of-choice for mediocre “commodity developers”. That fact may or may not be related to weaknesses of the language, but it doesn’t make the language itself inferior.

Paul Graham looks at languages from a language-designer’s viewpoint, and also with an emphasis on aesthetics. Given that he’s an amateur painter whose original passion was art, that shouldn’t surprise us. And in my opinion, Lisp is the closest thing out there to an aesthetically beautiful language. (You get used to the parentheses. Trust me. You start to like them because they are invisible when you don’t want to see them, but highlight structure when you do.) Does this mean that it’s right for everything? Of course not. If nothing else, there are cases when you don’t want to be working in a garbage-collected language, or when performance requirements make C the only game in town. Paul Graham seems to be focused on level of abstraction, and to equate the middle territory (Java and C# would take that ground, today) with mediocrity. Is that a fair view?

Well, the low and high ends of the language-power spectrum tend to harbor a lot of great programmers, while the mediocre developers tend to be Java (or C#, or VB) monoglots. Good engineers are not afraid to go close to the metal, or far away from it into design-your-own-language land, if the problem calls for it. They’re comfortable in the whole space, so you’re more likely to find great people at the fringes. Those guys who write low-latency trading algorithms that run on GPUs have no time to hear about “POJOs“, and the gals who blow your mind with elegant Lisp macros have no taste for SingletonVisitorFactories. That said, great programmers will also operate at middling levels of abstraction when that is the right thing to do.
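
To make “elegant Lisp macros” a little more concrete, here’s a minimal Clojure sketch (my own illustration; the macro and its name aren’t from any particular codebase): a few lines that wrap any expression with timing, the kind of abstraction that looks like weird hairy stuff from below the power continuum and like an obvious convenience from above.

(defmacro with-timing
  "Evaluates body, prints elapsed milliseconds, and returns body's value."
  [& body]
  `(let [start# (System/nanoTime)
         result# (do ~@body)]
     (println "elapsed ms:" (/ (- (System/nanoTime) start#) 1e6))
     result#))

;; (with-timing (reduce + (range 1000000))) ;=> 499999500000, after printing the elapsed time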

The problem of Blubness isn’t about a single language or level of abstraction. Sometimes, the C++/Java level of abstraction is the right one to work at. So there certainly are good programmers using those languages. Quite a large number of them, in fact. I worked at Google, so I met plenty of good programmers using these generally unloved languages.

IDEs are another hot topic in the 10xers-versus-commodity-engineers flamewar. I have mixed feelings about them. When I see a 22-year-old settling in to his first corporate job and having to use the mouse, that “how the other half programs” instinct flares up and I feel compelled to tell him that, yes, you can still write code using emacs and the command line. My honest appraisal of IDEs? They’re a useful tool, sometimes. With the right configuration, they can be pretty neat. My issue with them is that they tend to be symptomatic. IDEs really shine when you have to read large amounts of other people’s poorly-written code. Now, I would rather have an IDE than not have one (trust me; I’ve gone both ways on that) but I would really prefer a job that didn’t involve trudging through bad legacy code on a daily basis. When someone tells me that “you have to use an IDE around here” I take it as a bad sign, because it means the code quality is devastatingly bad, and the IDE’s benefit will be to reduce Bad Code’s consumption of my time from 98% to 90%– still unacceptable.

What do IDEs have to do with Blub? Well, IDEs seem to be used often to support Blubby development practices. They make XML and Maven slightly less hideous, and code navigation (a valuable feature, no disagreement) can compensate, for a little while, for bad management practices that result in low code quality. I don’t think that IDEs are inherently bad, but I’ve seen them take the most hold in environments of damaged legacy code and low engineer empowerment.

I’ve thought a lot about language design and languages. I’ve used several. I’ve been in a number of corporate environments. I’ve seen good languages turn bad and bad languages become almost tolerable. I’ve seen the whole spectrum of code quality. I’ve concluded that it’s not generally useful to yell at people about their choices of languages. You won’t change, nor will they, and I’d rather work with good code in less-favored languages than bad code in any language. Let’s focus on what’s really at stake. Blub is not a specific language, but it is a common enemy: engineer disempowerment.

As technologists, we’re inclined toward hyperrationality, so we often ignore people problems and mask them as technical ones. Instead of admitting that our company hired a bunch of terrible programmers who refuse to improve, we blame Java, as if the language itself (rather than years of terrible management, shitty projects, and nonexistent mentorship) somehow jammed their brains. Well, that doesn’t make sense, because not every Java programmer is brain damaged. When something goes to shit in production, people jump to the conclusion that it wouldn’t have happened in a statically-typed language. Sorry, but that’s not true. Things break in horrible ways in all kinds of languages. Or, alternatively, when development is so slow that every top-25% engineer quits, people argue that it wouldn’t have happened in a fast-prototyping, dynamically-typed language. Wrong again. Bad management is the problem, not Scala or Python or even Java.

Even terrible code isn’t deserving of the anger that’s directed at it. Hell, I’ve written terrible code, especially early in my career. Who hasn’t? That anger should be directed against the manager who is making the engineer use shitty code (because the person who wrote it is the manager’s favorite) and not at the code itself. Terrible romance novels are written every day, but they don’t anger me because I never read them. But if I were forced to read Danielle Steel novels for 8 hours per day, I would fucking explode.

Ok, that’s enough negativity for a while…

I had a bit of a crisis recently. I enjoy computer science and I love solving hard problems. I enjoy programming. That said, the software industry has been wearing me down, these past couple of years. The bad code, low autonomy, and lack of respect for what we do are appalling. We have the potential to add millions of dollars per year in economic value, but we tend to get stuck with fourth quadrant work that we lack the power to refuse. I’ve seen enough of startups to know that most of them aren’t any better. The majority of those so-called “tech startups” are marketing experiments that happen to involve technology because, in the 21st century, everything does. I recently got to a point where I was considering leaving software for good. Computer science is fine and I have no problem with coding, but the corporate shit (again, just as bad in many startups) fries the brain and weakens the soul.

For some positivity, I went to the New York Clojure Meetup last night. I’ve been to a lot of technology Meetups, but this one had a distinct feel. The energy was more positive than at most technical gatherings. The crowd was very strong, but that’s true of many meetups; here, there was a flavor of “cleaner burning” on top of the high intelligence that’s always present at these things. People weren’t touting one corporate technology at the expense of another, and there was real code– good code, in fact– in a couple of the presentations. The quality of discussion was high, in addition to the quality of the people.

I’d had this observation before about certain language communities: the differences between the communities are much greater than the differences between the languages. People who intend to be lifelong programmers aren’t happy having New Java Despondency Infarction Framework X thrown at them every two years by some process-touting manager. They want more. They want a language that actually improves understanding of deep principles pertaining to how humans solve problems. It’s not that functional programming is inherently and universally superior. Pure functional programming has strong merits, and is often the right approach (and sometimes not), but most of what makes FP great is the community it has generated. It’s a community of engineers who want to be lifelong programmers or scientists, and who are used to firing up a REPL and trying out a new library. It’s a community of people who still use the command line and who still believe that to program is a virtue. The object-oriented world is one in which every programmer wants to be a manager, because object-orientation is how “big picture guys” think.

I’m very impressed with Clojure as a language, and that community has made phenomenally good decisions over the past few years. I started using it in 2008, and the evolution has been very positive. It’s not that I find Clojure (or Lisp) to be inerrant, but the community (and some others, like Haskell’s) stands in stark contrast against the anti-intellectualism of corporate software development. And I admire that immensely. It’s a real sacrifice that we 1.5+ engineers make on an ongoing basis when we demand that we keep learning, do things right, and build on sound principles. It doesn’t come easy. It can demand unusual hours, cost us jobs, and put us in the ghetto, but there it is.

In the mean time, though, I don’t think it’s useful to mistake language choice as the prevailing or most important issue. If we do that, we’re just as guilty of cargo cultism as the stereotypical Java-happy IT managers. No, the real issue that matters is engineer empowerment, and we need to keep up our culture around that.

Gervais / MacLeod 22: Inferno

In Part 21, I wrote a summary of the modern Organizational Problem. To recap the highlights:

  • As machines take over boring, commoditized work, the only stuff left for humans is convex work where enabling excellence is more important than excluding failure, which is not even possible if the work is difficult enough to be interesting. Traditional, risk-reductive approaches to management fail on convex work. 
  • Companies evolve, due to the inevitable corruption attendant to their internal credibility markets, toward a sociological state that is internally stable (due to the Effort Thermocline) but renders them unable to compete on the market, and prone to moral abandon. This either drains them slowly (rank culture) or causes them to lapse into ethical depravity (Enron) that brings down the whole house.

Most of my focus, in Part 21, was on the macroscopic, impersonal forces that act on organizations. I mentioned conflict between lawful evil and chaotic good, as well as the ancient mechanisms (induced depression) through which lawful evil asserts itself, both briefly, but now I’m going to do a deep dive into the micro-scale illnesses of the organization, tunneling through all the layers, in an attempt to find solutions at each level.

I’ve dedicated quite a few chapters to trust. I’m fairly confident that I’ve solved the root financial problems already. I’ve also discussed the toxicity of distrust. We’ve covered a lot of ground, both technical and soft. What I believe I have achieved is to unite the sociological, the economic, the moral, and the financial elements of the organization. We’re almost ready to Solve It: to tackle the Organizational Problem. In Part 21, I summarized the macroscopic details of organizational decay (industrial and moral) but the problems are most tractable at the microscopic scale. We need to descend into the often personal hell of corporate life.

I’ll lay out the structure of our journey through the problem, stratum by stratum, in this part (Part 22). I have, however, needed to split what was intended as a “final post” into two. Part 23 will discuss the solution and follow the same structure.

Let’s waste no time in getting ourselves to the gates of Corporate Hell.

First Circle: Opacity

We come to Limbo, the first circle, where we confront the sin of opacity. Are most people in Corporate America being fairly compensated? I’d argue that they’re not, but the real crime isn’t what they’re being paid. It’s that they’re deprived of so much information that they have almost no insight into their actual value, and no leverage.

Ultimately, no one knows what human labor (or any asset) is “worth”. It’s generally impossible to come up with fair values for everything in a coherent way. That’s why markets exist: because valuation is an extremely difficult computational problem that’s best performed by “selfish” actors (investors and arbitrageurs) equipped to take advantage of distributed knowledge. It’s easy to compute a “fair value” for commoditized material assets. For human labor, it’s much harder. Finding a fair value for it is inherently difficult even under the best conditions.

On the market for human labor, most people can’t tolerate the volatility that would be seen on a highly liquid exchange (e.g. the U.S. equity market) where values shift by 20 to 30 percent per year. Most people would not be able to survive, at current compensation levels, if the labor market had that kind of volatility. So the liquidity that makes commodity markets efficient is not something most workers would desire: it would have their talents reallocated (i.e. job changes) on a monthly basis.

People (even those who love capitalism uncritically) have a hard time believing in markets. Can a $200-billion company really lose or gain $1 billion in true value in a day? Well, there’s a paradox inherent in markets, which is that price volatility and fairness seem to be (surprisingly) positively correlated. A price can absorb lots of signals and exhibit Brownian drift (which is probably harmless, in the long term) that makes it appear inconsistent because, clearly, the true value didn’t change that much in so little time. Or, it can absorb very little signal and have more superficial consistency, but less fairness, insofar as this begets illiquidity, “custom pricing”, and high premiums for middlemen. Markets either have to be pseudo-inconsistent (prices fluctuate based on small margins of supply and demand that are often almost random in their time of emergence) and fair, or consistently unfair. Most people’s salaries are set by the latter type of market, with unfairness layered on by asymmetries in information.

With human labor, people are stuck in a bad position. I am in support of a basic income, but that’s not the world we live in. The need for a monthly income puts them into a state of extreme risk aversion– devastating and pernicious, but so ubiquitous that people fail to recognize it as perverse; it just seems normal. While they’d make more (on average) if they could supply services directly to the market, the income volatility that doing so would involve is much more than most people have the financial means to stomach. They’d rather get a consistent low rate for their labor (paid during sickness and on vacation and, if one excludes at-will termination, regardless of uncontrollable fluctuations in work quality) than deal with the vicissitudes of an impersonal market that only cares about what they produce, even if the latter is much better for them in the long run (and might deliver savings that leave them able to escape the corporate shackles).

What’s the problem? It’s not that wages are “unfairly” low. I can’t even assess whether that’s the case. Employment is an insurance trade, and low wages exist because of the risk premium. What’s a “fair” risk premium? One can’t assess that without building a market. We don’t know, and that’s the problem. The crime is that people really don’t know how much genuine risk reduction (if any) they’re getting. Since they can be fired “for performance”, while most white-collar jobs make performance impossible to measure objectively, I’d say it’s very little. There’s also a very strong argument to be made that labor is unfairly treated because a few major players on the other side control the market. So I have very strong suspicion that most of these trades are unfair but, without a market to appeal to, there’s no proof.

The evil is in opacity. People enter the MacLeod Loser trade– taking a subordinate role in an established organization, rather than engaging with the market directly– in order to get rid of financial risk that most people have too little wealth to tolerate. In exchange, they get low wages that keep them in financial semi-desperation. They don’t know how much risk-reduction they’re actually getting (at-will employment) and, because the market is so tightly locked-down by major players, they don’t know what a risk-neutral fair price for their work is, so they can’t assess what they’re paying their employers for this insurance. Are they getting screwed? Probably, but I can’t prove that because, without a market, there’s no fair price to compute against. I can prove another evil: they have no way of knowing whether they’re getting screwed. If they could evaluate their own deals and judge them fair, I wouldn’t be one to argue with them; but they will never have access to that information. Management keeps a tight-lipped approach to everything important– compensation, personnel policies, promotion guidelines, career planning– and, worse yet, brings brutal punishments onto those who share such information. Although this is technically illegal, many companies make it a fireable offense to disclose one’s own compensation.

With opacity, the severe asymmetry of information leaves one side unable to evaluate the fairness of the deal. The deal might be totally fair, but often it won’t be. This is why I made such a strong argument in favor of transparency in compensation. The poison of opacity must be driven out with force.

Opacity isn’t only about the financial aspect. It’s also about domination (managerial mystique). The manager knows the employee’s salary, but not vice versa. That’s intentional. Most companies bring their people into submission by hiding important information, scaring people into disadvantageous panic-trading, and taking the (highly profitable) other side. For concave labor, this didn’t damage operations too much; for convex work, it does. People need a certain amount of empowerment and information to do convex work well. They rarely get it, because management intervenes. I won’t say that management has no role in the technological-era, convex-labor world; but it will have to become a more dignified, advisory role involving the provision of direction and mentoring, rather than the carrot-and-stick extortion that exists now.

Most corporations evolve a set of rent-seeking high officers who rob investors and employees alike. They plunder investors through misappropriation of capital and dishonest representation of risk, while they use opacity over all important information to scare employees into terrible, panic-driven trades and subordination. Where does this lead? We must look at the winners of this trade, and that takes us right into the Second Circle: parasitism.

Second Circle: Parasitism

Every organization has people in it who have ceased to contribute, but continue to hold important roles and draw high compensation. Purges of “deadwood” (or “low performer” witch hunts) are usually directed at the bottom, but the worst problem employees are always people at the top (who’ve ceased to think of themselves as “employees”; they’re executive royalty!). Sure, there are small-scale subtracters at the bottom; but the worst are usually dividers at the top who suck all life out of the firm. Parasitism and even outright theft occur at all levels, but there’s a point (Effort Thermocline) where their prevalence increases sharply.

Low and opaque wages, fast firing that negates the promised risk reduction, and a general lack of respect for employees, all represent the “unfair” aspects of the corporation that we know and hate. There’s also an assumption that “the assholes at the top” are capturing large amounts of surplus value. That’s often right. Below the Effort Thermocline, value is created; above it, it’s captured.

Why do companies tolerate a class of rent-seeking parasites who add nothing, when it might be better for morale to fire them all and redistribute the proceeds to employees (profit sharing) and investors? Well, it turns out that much of regular economics (going back to Marx) makes a fatal error, which is to conflate labor and management, the latter being a subset of the former. On paper, that’s true. Sociologically, it’s not: managers do not see themselves as labor, and do not act as such, and are not viewed as labor by other workers. By the technical terminology, CEOs are still “labor”, even though their compensation is not set by a fair market (but through self-serving deal-trading with other CEOs on whose boards they sit). In their minds, executives are the real owners. Here’s a breakdown of the corporation, with square brackets ([]) representing terminal nodes:

(Company)-----+
.             |
.     +-------+---------------+
.     |                       |
. [Capital]                   |
.                         ("Labor")-----------+
.                             |               |
.      [Executives]--------(Mgmt.)            |
.      a.k.a. Sociopaths      |               |
.                             |               |
~~~~~~~~~~~~~~~~~~~~~ EFFORT THERMOCLINE~~~~~~~~~~~~~~
.                             |               |
.                     [Middle Managers]       |
.                     a.k.a. Clueless         |
.                                             |
.                                         [Workers]
.                                         a.k.a Losers

The old Marxist way of looking at the company has two tiers: bourgeoisie and proletariat. Capital vs. labor. That made sense when industrial processes were simple enough that anyone who held capital could manage them. If you wanted a good vs. evil narrative, you could equate capital to the rent-seeking slaveowners who had oppressed humankind for millennia, and labor to the slaves, and conditions for workers were so poor that you’d essentially be right. However, as the factories and machines and operations became more complicated, owners had to put professional, non-owning managers on the payroll, and that created the three-tier company: owners vs. managers vs. workers.

That gets us to three tiers, but there’s a distinct change of flavor between upper management (executives) and the floor-holding mere managers in the middle, who are still accountable for doing work; how’d we end up with four?

First, it should be obvious that managers are on the advantageous side of a principal-agent problem with investors, because they control the books. While investors have the right to interrogate management, being the owners of the enterprise, they rarely know what questions to ask. Second, managers have even more advantages over the workers, being able not only to fire them but also (if daring enough to risk a lawsuit) to inflict long-term damage using work’s feudalist reputation economy. They have a position of power over both sides, and one that can be very profitably (for them) exploited.

I would also argue that workers are investors. The modern concept of the career developed as an antidote to this socially unstable dynamic of owners against workers. For many people, that sharp dichotomy between the two categories is deeply anachronistic, because workers have the option of moving up to higher-skilled labor in the modern, fluid economy. “Worker” is no longer a permanent class, at least in theory. So workers are investors of time. Finally, with public equity ownership and 401(k) plans, they are literally investors.

Yet there’s a lot of opacity in the labor market, especially pertaining to careers. Is the career game a free market, or a feudal reputation economy? When you have what claims to be a market economy, but that uses opacity as a tool of exploitation and clearly copied half of its pages from pernicious feudalism, there’s a lot of fear that can be exploited. High levels of ambient fear will separate people into “protectors”, vassals, and peasants. Out of an anarchic power vacuum that cannot last for long– people’s nerves can only take so much– brutal strongmen rise into power. That’s where executives come from.

Executives are a subset of managers who arrogate to themselves the status of real owners of the company, more important than passive investors and simply more powerful than workers. The owners-versus-workers dichotomy comes back, this time between rent-seeking, non-producing executives above the Effort Thermocline, and a labor sector that includes the terminal middle-management (Clueless) who failed to include themselves in that arrogation, and the risk-averse non-strivers (Losers).

So, executives are the ones who get to such a level of invulnerability that they command large salaries just for “making decisions”. The morally degenerate, high-status ones most willing to abuse their principal-agent advantage create sinecures where they have lots of power, but no responsibility. Now there’s a four-tier enterprise: investors vs. executives vs. middle managers vs. workers. This mirrors the MacLeod hierarchy precisely (with investors, being organizationally passive, not on the chart).

Managers can rob investors financially by misleading them, and they can steal from employees on the credibility market, or by misleading them about the career-building value of the work they are doing, and those who excel at both tend to develop an outsized social status and become the executive Sociopaths. Those who either refuse or are unable to participate in such robbery will linger in the less dignified middle management tier and become the Clueless.

Investors and (non-executive) employees share a lot in common, in truth. Employees are investors of time, and often literal investors as well. So why are executives so easily able to rob the rest of the company blind? Shouldn’t investors and workers (often the same people, since most workers’ retirement assets are invested in corporate bonds and equities) band together and drop a pipe on that shit?

It’s a nice idea, but it turns out not to be so easy. First, let’s take an investor’s standpoint. Corporate governance is– and I mean this literally and non-pejoratively– a plutocracy. It’s voting, proportional to investment. Unfortunately, any aggregative voting system is at risk of corruption, because there are a lot of passive players who don’t really care either way, and who will be inclined to swing their votes for personal favors: cheap votes. That’s why vote-buying must be made illegal in a democracy: there are a lot of people out there who would swing their votes for $100. Advertising, for one example, is all about capturing the advantage that a brand holds over cheap voters who prioritize product image over quality. The civil danger of cheap votes is that a voting bloc’s power grows as the square (in statistical impact, measured by variance) of its size, which means that people who develop the ability to harness cheap votes and tie them together become extremely powerful, and can hijack the system even if they’re only able to buy a small share of votes. Cheap-vote problems are typically solved by electing representatives whose job is not to be cheap: give them disproportionate power, but also empower voters to fire them, on the assumption that they’ll do a better job than the political machines that specialize in cheap-vote trading. That’s why management exists. Permanent managers are held to be more effective in running the company than a plutocracy subjected to cheap-vote abuses.
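
The “square of its size” claim is just variance arithmetic: n independent ±1 votes have total variance n, while a bloc of n votes moving together has variance n². Here’s a small Clojure sketch (my own illustration; the function names are made up) that checks it by simulation:

(defn rand-vote
  "A single +1/-1 vote, chosen uniformly at random."
  []
  (if (< (rand) 0.5) -1 1))

(defn variance [xs]
  (let [n (count xs)
        m (/ (reduce + xs) (double n))]
    (/ (reduce + (map (fn [x] (let [d (- x m)] (* d d))) xs))
       (double n))))

(defn sample-sums
  "Simulates `trials` votes with n-voters voters; if bloc? is true,
  all voters copy a single random vote instead of voting independently."
  [n-voters bloc? trials]
  (repeatedly trials
              (fn []
                (if bloc?
                  (* n-voters (rand-vote))
                  (reduce + (repeatedly n-voters rand-vote))))))

;; (variance (sample-sums 100 false 10000)) ;=> ~100    (independent voters)
;; (variance (sample-sums 100 true  10000)) ;=> ~10000  (one bloc of 100)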

When that runs poorly, management loots and investors lose. In truth, as a cynic, I’d argue that if typical management had its way, there would never be profits. What would be profit would go directly toward the executive payroll.

On the side of labor (true, non-executive labor) there’s a different cheap-votes problem. Employees are the cheap votes. How often does a low-level worker take up cause against her employer, risking termination (likely) and damage to her reputation (a possibility, in the feudal reputation economy of references and resumes) in doing so? It’s so rare that it makes the news, and such people are often blacklisted and ruined for doing it. Whistleblowing is an activity where there’s a fixed amount of punishment to be allocated to a variable number of adversaries, which makes isolated whistleblowing so dangerous it rarely happens, and with no one willing to be the first, one sees a culture of terrified silence. A powerful company can pretty easily ruin a single person’s professional reputation– with frivolous lawsuits against her, negative references, and possibly negative statements to the press about the departure. All of this is illegal, but she’s a single person up against a company with limitless resources. On the other hand, if 20 people blow the whistle, the company can’t discredit all of them. It must go into “damage control mode” to repair its image. It will offer generous settlements. If a thousand people act, and talented people start losing faith in the company and leaving, the firm will actually need to change its behavior. However, conditions are such that unethical companies and managers can, like Gus Fring, hide in plain sight, because people are too scared of whistleblowing’s consequences for the opposition ever to get to 20, much less a thousand.

Opacity is justified by expediency (“we can’t have transparent compensation; that would just be crazy”) but it conceals the fact that a powerful set of people abuse information to rob investors and workers to an extreme degree. Is it a conspiracy? No, I wouldn’t go that far. As I said, executives hide in plain sight. They don’t hide the fact, for just one example, that those who fight opacity by openly disclosing their salaries are socially excluded, isolated, and eventually fired for it. (This, also, is illegal. Disclosure of salary is protected; it’s an anti-union-busting provision.) It’s pretty well known what happens, in the white-collar world, to people who disclose their salaries. The true dishonesty pertains not to the social norms, but to their reasons for existing. Managers claim they fire those who engage in salary discussion because it’s “rude” and “threatening to team cohesion”, when their real motivations are more sinister.

Executive parasitism is a huge problem for most companies– much more of one than operational inefficiency or unfavorable market conditions. Its severity is one thing that the trust-averse “Theory X” got right: given too much power, people will turn to parasitism as they focus on protecting what they have. The problem with Theory X is that the gun points the wrong way. In Theory X corporations, prevailing distrust is used to justify abuse of workers by management, while management must be trusted because both sides (workers and investors) have no other choice– they are just too far out of power. Theory X uses prevailing distrust to shift power to those who are least deserving of any trust.

The old, Marxist, model puts investors and workers on two sides of a chasm, with each side despising the other. Theory-X management steps in and tells investors, “Hire us, and we’ll keep your workers from stealing from you”. That’s their sales pitch. (In the modern economy, workers who favor their career goals over the organization’s are seen as “time thieves”; I disagree, but that’s a side note.) It turns out that professional managers are very good at preventing stealing; they want all the action for themselves!

Workers and investors don’t belong on opposite sides of some hadal chasm anymore. We need to recognize our common enemy: looting management. The “workers vs. investors” concept made sense in 1848 when the vast majority of people were not only desperately poor, but locked into dead-end labor with no chance of improvement. There was no such thing as a career or a 401k. It’s not true in 2013. Workers can be well-compensated and treated well if they develop unique skills that give them leverage. Their main obstacle as “career capitalists” is not knowing what the market will value, nor the true long-term needs of society that they might be able to fulfill (later on) for a profit. Their managers certainly have no interest in showing them the way. That brings us into…

Third Circle: Career incoherency

If workers can be viewed as investors (the careerist perspective) then a question arises: why do so many people end up stuck in poor investment strategies that, quite visibly, pay off poorly? A well-managed asset portfolio can appreciate by 6% per year without any work. People (at least in the top 15% by talent, and maybe more) ought to be able, pretty easily, to garner 8 to 15% annual increases (at least for a good 10 to 20 years, at which point they’re into very-high-skill labor or legitimately wealthy) through their control of one of the most important variables, which is how hard they work. Yet we don’t see that.
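
To spell out the compounding behind those rates (my arithmetic, using the numbers above), a quick Clojure sketch:

(defn compound
  "Growth multiple after `years` of compounding at annual `rate`."
  [rate years]
  (Math/pow (+ 1.0 rate) years))

;; (compound 0.06 20) ;=> ~3.2x  (a well-managed, passive portfolio)
;; (compound 0.12 20) ;=> ~9.6x  (a career compounding in the middle of the 8-to-15% range)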

Becoming great at something– good enough to make a substantial living on the free market at convex work– requires the proverbial “10,000 hours” of deliberate practice. I don’t care to debate the exact number; that’s certainly the right order of magnitude, and it’s probably within ±50% for most fields of endeavor. At 10,000 hours, most people should have independent reputations and credibility, and the right to access the market’s will-to-pay (or, at least, their own company’s) directly rather than through a manager/pimp taking god-knows-what (opacity) percent. That would take 5 years, given a typical 2000-hour work year. Yet most people don’t even get there in 25 years. Why? Most of them are assigned to crappy work which confers no career benefit. They’re putting 2000 hours in at “work”, but they’re not getting deliberate practice. This is the norm in organizational employment, and most people never really get out of the dues-paying period. They’re lucky if they get 200 hours of quality work in a year, which means they never become good at what they do. The lack of developed skill leaves them with no leverage, and they can be exploited in perpetuity.
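
The same arithmetic, as a sketch (the 10,000-hour figure and the 2000-versus-200 hour split are from the paragraph above; the function itself is mine):

(defn years-to-mastery
  "Years to accumulate 10,000 hours at a given rate of deliberate practice."
  [deliberate-hours-per-year]
  (/ 10000.0 deliberate-hours-per-year))

;; (years-to-mastery 2000) ;=> 5.0   (a full working year spent on real practice)
;; (years-to-mastery 200)  ;=> 50.0  (the ~200 quality hours most people actually get)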

In the long term, and for society, this is devastating. One of the first things an economist learns about the Third World is that cheap labor is a curse, because it makes labor-saving technologies (that would make society richer) too expensive by comparison. Why buy a dishwasher when you can pay someone 30 cents an hour to do it? National elites become addicted, unknowingly, to cheap labor and their countries decline. Why invest in people, for the future, when you can exploit them in the present? The long-term result of this is that everyone (even that national elite) loses. This is more true than ever in the corporate world. Corporate managers are averse to training employees out of a fear that more marketable underlings will be likely to leave them. In reality, the reverse seems to be true. Good people leave jobs because they stop learning, not because they learn “too much” and find better employment elsewhere. I’m an unapologetic job hopper and I would easily stay at a job (perhaps for 10 years, perhaps until retirement) if I genuinely believed I was improving my market value by 20% per year. But if I’m not learning anything, then my rate of growth is negative 5 percent per year, which I just won’t tolerate.

Career coherency exists when one’s job requirements also serve one’s career– there is no conflict between the immediate work assignments and the person’s long-term career goals. People do their best work when career coherency is the case, and (except for the MacLeod Clueless) they will do as little as they can get away with amid incoherency.

Of course, career incoherency is depressingly common. Companies tend to load the junior or politically unsuccessful with fourth quadrant work that “just needs to get done”, so people get resentful and leave. The truth is that only the MacLeod Clueless take career-incoherent assigned work very seriously. Losers manage themselves to the Socially Accepted Median Effort (SAME) but slack off beyond that point, while Sociopaths often blow it off entirely. A common Sociopath response to being assigned career-incoherent work is to fail badly at it, but in such a way that the blame can’t be directly assigned to him (to use Venkat Rao’s terminology, a “Hanlon dodge”). This is a hard balance to strike, but extremely powerful when it works. What eventually happens is that management, should it distract him with crappy work, is punished just enough that future crappy work is sent elsewhere, but not so severely that it makes the Sociopath look incompetent or noncompliant. He performs poorly while retaining plausible deniability, and it helps if management is a little bit scared of him (“if I keep giving him low-yield assignments and he fails, maybe he’ll blame me“). The Sociopath figures out just how far he will get the benefit of the doubt, and plays accordingly.

In general, the best and worst employees of a company tend toward a self-executive– meaning that they serve investors’ interests and their own directly, ignoring the interference of parasitic executives– and almost insubordinate pattern of behavior. What about the moral and industrial middle classes? Those are the ones most affected by the prevailing culture. In a rank culture, they’ll be compliant and superficially loyal. In a tough culture, they’ll be viciously competitive. In a self-executive culture, they’ll tend to be technocratic and organizationally altruistic. The very good and very bad are pretty much the same wherever one goes, and both categories will blow off career-incoherent management and work (although for very different reasons) so it tends to be people in the middle who define, and also who are most defined by, the corporate culture.

What happens if a company shuts off self-executivity? Do the very-good and very-bad stop being insubordinate? Absolutely not. The very good are pushed into increasingly desperate public stands for what is right. It should be predictable what happens to them: they get fired, or humiliated so badly that they must resign. The very bad, however, tend to be good enough at playing the people to stick around. When self-executivity is outlawed, only outlaws will be self-executive (i.e. be able to get anything done). The result of this is an environment similar to the violent black markets that emerged in the Soviet Union; the shadowy nature of transactions puts a lot of otherwise unconnected baggage and friction on them. Even staid products like lightbulbs, when there was a public shortage, often had to be obtained from characters operating more like speakeasies than hardware stores. A modern analogue is the market for illegal drugs. Yes, most of the products are poisonous, but the violence surrounding this economic activity has a lot more to do with the illegality of the trade than the toxicity of the product, which most serious players in the business never even use.

When you shut off self-executivity, you shut off internal innovation– to a point. The very-good insubordinates will still tilt at windmills, and be summarily fired for the quixotry. The very-bad will furtively pursue innovations on their own, but with malevolent intent. That’s how you get evil innovations like Google’s “calibration scores”, which were an obvious (and, sadly, successful) move to sabotage the company. Those are the direct product of what happens when self-executivity is pushed to the black market.

In the short term, under desperate circumstances, debate and self-executivity must be curtailed out of the need for focused dedication toward a coherent definition of corporate success. This is one reason why, while I support open allocation, I recognize it as inappropriate (at least at full extent) for the needs of small, bootstrapped companies. They should pursue open allocation in spirit and values, but shipping a coherent product takes first priority. However, large and wealthy technology companies have a moral responsibility to implement open allocation; if they shirk it, they not only fail morally but, because they cease to have a culture worth caring about, they also demolish themselves through brain drain.

Why do companies shut down self-executive behavior? Why aren’t employees allowed to manage their careers and contribute directly to the company’s internal market? Self-executive work is, empirically, typically worth at least three times as much as traditionally managed work. (In software, it’s more like 10, hence the phrase “10x programmer”.) So shouldn’t there be infinite demand for something that makes everyone 3+ times more productive? Well, to answer that one, we must progress further into corporate hell…

Fourth Circle: False scarcity

One prevailing trait of the Clueless is that they never question arguments from scarcity or desperation. “Deadlines” must be met, and “there’s no money in the budget” is taken at face value. This enables the Sociopaths at the top to create a false scarcity that empowers the Clueless to do reprehensible things that they otherwise wouldn’t. It’s the opposite of 1984. In Orwell’s depiction of totalitarian socialism, people were misled with false claims of prosperity. Corporations go the other way, by creating a phony scarcity while million-dollar bonuses are funneled to the executive thugs who enforce it.

That’s where career incoherency often comes from. The company “just can’t afford” to do things properly, to compensate fairly, or to invest in employees. Things just need to be done, this way, and now. Debate and a progressive outlook can’t be afforded. The “emergency mode” in which there might be justifiable cause for curtailing employee autonomy and long-term concerns becomes permanent.

Why does this actually work? It seems counterintuitive. Soviet governments lied about being rich and exaggerated growth figures to improve morale; American corporations claim to be poorer than they are. Why? Wouldn’t that damage morale?

I lack a more global perspective on this, but I think that false scarcity is most effective in an American setting, where individual prospects trump corporate morale. People aren’t going to be especially bothered by the idea that their company is poor or unsuccessful, as long as they have a good place in it. They’d rather the firm be rich, obviously, but they care more about their individual station and long-term career prospects than the organization itself. Caring about the macroscopic reputation of one’s firm is a luxury for real members of it (executives, who have something to lose if it goes down). What is the firm’s prestige, they ask, going to do for me? Americans (more so than other nationalities) are happy to work for unsuccessful, uninspiring, and even desperate institutions if there’s personal gain (compensation, promotions, career opportunities) to be made in doing so. One major exception is academia, where social status is somewhat divorced from compensation. In the rest of the economy, people don’t mind working for macroscopically mediocre institutions if they have an inside track to a legitimate role.

In this way, companies don’t lie to their own people about how successful or strong they are. Instead, they present themselves as somewhat weak and hampered so that whatever bone is thrown to a worker seems like a genuine favor. “We have no raise pool this year, but we really want to keep you so I fought for days and got you a 2-percent cost-of-living increase, which no one else is getting.” Excluding academia, Americans would rather have excellent positions in mediocre companies than crappy, subordinate positions in excellent companies. In an opaque regime where workers rarely know what others are getting in terms of compensation, career development, and project allocation, the self-effacing company can mislead a large number of workers into believing, each, that they are favorites: what you’re getting is meager, but everyone else is getting less.

The false scarcity has one toxic side effect. Sociopaths recognize it as a negotiation tactic and leave it at that, but the Clueless actually buy into it and “volunteer” to enforce its directives. So they end up shutting down self-executivity, as the desperation mentality evolves into a cargo cult of “urgency” enforced by idiotic middle managers. For an analogue: in the 1990s, there was a fad where unskilled programmers would direct compilers to inline aggressively because “it makes the code go faster”. That’s not always true, and it’s not the whole story. Some code runs faster when the compiler is told to inline heavily but, in general, people are not smarter about such details than the compiler writers, and abuse of inlining makes the generated code substantially worse. The word urgent is the manager’s variety of “inline-all”. “It makes people work faster.”

False scarcity is a present-term negotiation tactic with deleterious long-term effects. Let’s take the Socratic method to an employee asking his manager for a raise. Here’s a conversation from Anywhere, U.S.A. between manager (Kim) and employee (Larry):

Larry: …so based on the market for Widgeteers in this region, I believe I should be making $85,000.

Kim: I can’t give you a raise, Larry. You’re already at the maximum salary for Widgeteer III’s.

Larry: I know that, but I’ve been doing the work of a Widgeteer IV for almost two years. Even you’d agree that I do more work than John did, and he was a Senior Staff Widgeteer when he retired.

Kim: You’re welcome to apply for promotion in February, but I won’t be able to support you. You need three consecutive 3’s on your performance reviews and I can only give you a 2 this year. Meets Expectations.

Larry: Explain to me why I’m a Two. I was a Three last year and I’ve only improved.

Kim: Last year, I had 24 review points to give out for the Proton team, but this year Jake got three more review points for his team because he’s fucking Janice, so I get three points less. I only have 21. I have to give Alex a 3 or he’ll mope and get nothing done for a year– you know how he is– but with only 21 points, if I give more than two 3’s, I have to give someone a 1. The damn paperwork that comes with a 1, well… you just wouldn’t believe it.

Larry: Would you be able to move me to the Neutron team? I’m sure you have plenty of points for that project, given the launch.

Kim: The Neutron team does not have enough headcount to accept Twos. We don’t want people who just Meet Expectations.

Larry: But you just admitted that I deserve to be a Three!

Kim: Go take your Meets-Expectations ass somewhere else, Twoser. I knew you were a Two the day I met you.

Larry: Well, wait. What is it that the Neutron team seeks? Maybe I could learn the skills now, and when a slot opens up, I could make the transition smoother.

Kim: We have deadlines. There’s no way that I can allow you to learn on company time. We just don’t have the slack. You need to put your head down and keep working on Proton.

Larry: So I can’t move to Neutron because I don’t have the skills, and you won’t let me take the time to learn them, even though it’s a project of much higher business value?

Kim: That’s right.

Larry: What if I spend a day per week learning Neutron, and come in on Saturdays?

Kim: I wouldn’t normally ask you to work on Saturdays, but if you’re going to come in on weekends, I ask that you work only on your assigned project. I can’t have you getting distracted. If I suspect that you are using Saturdays to pursue side projects and you are doing it on company resources, I will have to write you up for insubordination.

Larry: What if I work on my assigned project on Saturdays, and spend Fridays with the Neutron team?

Kim: Larry, I am not in on Saturdays and I am not going to come in on the weekend just to make sure you get your work done.

Larry: But you know that I get my work done!

Kim: The performance review I am writing says ‘Meets Expectations’. One point lower and you’d be a ‘1’ and I’d have to write a Performance Improvement Plan. This would require me to write a negative summary of your performance, with dates and events to create the perception of legalistic precision when really it’d be all bullshit and we’d both know it. You are not a One, but you are clearly a Two, an expectations-meeter, because I have no more review points to give you. That means that I am disallowed from knowing that you get your work done.

Larry: What if I apply to the work-from-home program, but still come into the office five days a week, so that I can work Saturdays remotely?

Kim: That is an option, but your file says you live in zip code Q6502. You would be in a Category IV location for cost-of-living, and I would have to dock your pay by $6,500. So that is not going to get you your raise.

Larry: So what happens if I apply for transfer to a team that has more room for growth?

Kim: I will send you links to appropriate resources and make introductions to other teams’ managers for you, but then I will put negative commentary about your performance on your personnel file that you won’t be able to see. No one will want you, for the simple reason that I have credibility and you don’t. You won’t know what I’m saying and will have no way of appealing it. In this way, I am like a feeder who makes his captive unhealthy, sexually repulsive to other men, and preferably immobile so as to have complete dominance over her because no one will want her once she is morbidly obese. The difference between you and a bedridden feedee force-fed 18,000 calories per day is that you won’t know that it’s happening, and it will be entirely without your consent. Then, I will allow your position to be cut from my team in a trade that gets me a $5,000 personal bonus and 2 more review points so I can get Bob promoted and not have to deal with his damn high-pitched voice anymore. You will have three weeks to endure transfer interviews in which you will have no chance, because I’ve already smeared your performance review history, at the end of which you will be fired not by me, but by a person you’ve never met.

Larry: Well, that might not be so bad. Is there a severance package?

Kim: We don’t like severance packages because it means we are rewarding failure. Besides, there’s no money in the budget… [pauses, lowers voice] However, my promotion packet is coming down to decimal points, so those “360-degree” reviews of bosses that usually don’t matter? This might be the one time in a hundred where director-level people give a shit what people like you have to say. If you let me write your review, we can work out a story that gets you $20,000. How’s that sound?

Larry: Make it forty thousand.

Kim: 27-five. A pleasure!

This might seem like an attempt at anti-corporate humor. It’s not. Conversations like this actually go down all over the place in Corporate America. I’ve probably seen every perversion in this (except for the word “Twoser”) at least once.

Corporations have a problem with abuse of process, but there’s something else that pervasive scarcity allows. Abusive process. It comes down to the snake and the grass. The snake is seen as vicious and malignant. The grass is viewed as being compliant and beneficent. But what covers the snake, so it can strike? The grass. The corporate grass has a good-cop, bad-cop flavor. There are abusive policies (justified by false scarcity constraints and an overzealous need for bureaucratic consistency) so severe that nothing can get done unless exceptions are made, but exceptions are made so often that people view them as harmless, like a too-low speed limit in a place where no one is ticketed unless actually at an unsafe speed. The rules on paper are the ugly, barren ground that would be exposed without plant cover. Then, there are the “nice guy” makers of exceptions who enable people to actually get stuff done. They’re the grass. The snakes are the ones who have the power to turn off the making of exceptions in order to bring down a rival.

The worst scarcity companies generate is the scarcity of work. The “problem” with self-executive employees is that they tend to generate projects that management never imagined. They create work for themselves. Good work. Work of a much higher quality than is typically seen in something assigned by management, because they’re self-motivated. This is good for them and their employers, but their managers view it as a negative: such employees might outshine the master.

This brings us to the so-called “lump of labor” fallacy. How fallacious is it? Is the demand for labor fixed and limited, or can it grow as people and society progress? On one hand, there will always be limitless demand for making peoples’ lives better. On the other, structured work environments generate a pernicious and visible work scarcity.

If demand for work is truly finite, you get a competitive society where the fight to “get” work is more defining than the actual doing of work. If it’s limitless, you get structural cooperation as people work to make each other (and themselves) more productive. Most economists consider the “lump of labor”– that there’s a finite amount of work to go around, leaving us in zero-sum competition for it– to be an erroneous and regressive mentality. In the abstract, they’re right. Adding value, improving processes, and making peoples’ lives better should always have limitless demand.

However, within the typical corporation, the lump-of-labor mentality is pervasive and almost a permanent fixture. You need the attention of a manager (a professional “no man”) to get a project sanctioned. “Plum projects”– the rare case of desirable work that has high-level sanction– are handed out as political favors. High-impact work is directed only to the managerially blessed; most people don’t get any of that and are loaded up with fourth-quadrant evaluative nonsense with no purpose other than the living out of a painful dues-paying period. Sure, in the real world that exists outside of this corporate bullshit, there’s limitless need for people to make life better, often by implementing ideas that no executive would ever think of. However, under the corporate regime, there is a fixed (and small) amount of sanctioned work that it will accept as sufficient justification for retaining an employee. This means that the lump-of-labor slugfest– a race to the bottom among MacLeod Losers and Clueless as even the fucking process of getting to do real work becomes competitive– is very much in force over corporate denizens.

As corporations begin to believe their own false-scarcity myths (perpetrated by Sociopathic robbers at the top, and implemented by Clueless useful idiots in middle management) they start to fall under the delusion that they’ll fail outright unless all work is directed toward “sanctioned projects” as defined by a small, powerful set of people. Executives are often too far out of touch to have any clue what projects deserve sanction, and middle managers are both distracted by their own career needs and hobbled by their own tendency toward Cluelessness. Thus, they tend to generate a “sanctioned project” pool that is not only small but also increasingly divorced, as time goes on, from the company’s actual business needs.

It’s the increasingly myopic scope of “sanctioned projects”, and the morally degenerate competitive infrastructure (closed allocation) that builds up around them, that makes most corporate workplaces so horrible. But what’s the alternative? Can workers really just be allowed to define their own work? Well, it works for Valve and GitHub, two of the most successful companies out there. With self-executive work being worth 3 to 10+ times as much as traditionally managed work, it’s an unambiguous win for a company that can afford the risk and, with computational machinery now extremely cheap, that essentially only requires trusting people with their own time.

So why is this so rare? We have to go into a deeper Circle of Hell for that…

Fifth Circle: Trust sparsity

I focused heavily on trust in parts 17 (financial trust and transparency), 18 (industrial trust and time management), 19 (living in truth vs. convex dishonesty), and 20 (simple trust vs. Bozo Bit) because it’s increasingly clear how much damage is done to organizations by the lack of it. When you see large companies “acq-hiring” mediocre engineers at $10 million per head, it becomes clear that firms are desperate.

What are they desperate for? These “acq-hiring” firms have plenty of engineering talent in-house, but they get to a point where the prevailing assumption is that all of their own people are incompetent, lazy, and ineffectual, so they staff important projects with external jackasses bought in at a panic price. This behavior is a lot like impulsive hoarding, where a person’s living quarters become so messy that he has to buy a new winter coat every November not because he wears out the old one, but because his house is too much of a mess for him to find it again. The one difference is that, for the metaphor to apply fully to acq-hires, he’d have to be spending $15,000 on a $200 coat.

Why do companies hire so many people but trust so few of them? I examined this in Part 20, but the gist of it is that, while the larger concept of trust has degrees and variations, simple trust (whether a person is treated as competent and worthy of respect) tends to be binary (“bozo bit”) and it is usually a global systemic property of a group of people. It is either trust-dense, meaning people are generally held to be independently credible, or trust-sparse, which tends to generate a dysfunctional array of warring cliques. In a trust-sparse environment, being without a clique leads to isolation, exclusion, and failure, so the pressure to become part of one generates a feudalistic pattern of behavior.

Because simple trust is a binary and systemic property, one person “flipping the switch” can turn off the lights for good. It just doesn’t take much. Trust density, although the core of any healthy business, seems to be fragile. What makes it this way? I’ve concluded that there’s a cardinal rule that organizations, unless they want hell to break loose, must follow: don’t hire before you trust.

Most rules in business have exceptions. Not this one. Only hire people with the intention of investing simple trust– trust to do the right thing with their own time– in them. If you hire someone who proves unworthy of simple trust– it’s uncommon, but it happens– then fire him. Don’t be a dick about it– write a generous severance package– but get him out as quickly as you can. Also, don’t keep an unethical high performer around just because he’s “hard to replace”. You need a company where everyone can be afforded simple trust and, if you lose that, it’s almost impossible to get it back.

Plenty of companies hire people with full intention never to make them real members of the company. They’re brought in for grunt work, because it seems less risky to hire a schmuck off the street (the hiring can be undone) than to automate the undesirable work (a project that might fail). Bad move. This addiction to cheap labor accelerates itself because a trust-sparse company can never find and trust capable people who’ll automate the crappy work, which is what should be done. Soon enough, the company is one where new hires spawn with the Bozo Bit in the “on” position, which creates resentment between the old and new hires. No one likes being seen as a “bozo” and it turns out that the most reliable ways to turn off one’s bozo bit are generally considered unethical (convex dishonesty). The corrosion is pretty much immediate.

Why in the hell would a company sell off its culture, and hire people it distrusts? I’m actually going to sample from evolutionary biology and invoke r- and K-selection. An r-strategic species aims for rampant proliferation but low quality of individual offspring. A hundred may be born, but only a few will survive. On the other hand, a K-strategist aims to have few, highly successful, offspring. In humans, women tend toward K-selection (because of the natural reproductive bottleneck) while men can be r- or K-strategists. However, r-strategic behavior in men leads to positional violence, maltreatment of women, and population catastrophes. Civilization began when humans discovered monogamy: instead of successful men having tens of “wives” (sexual slaves in a harem) and hundreds of children with no paternal investment, men were encouraged to have few wives (often, only one) and a smaller number of children, in whom they invested highly. In other words, civilization began when men were forced to be mostly monogamous K-strategists, ending the extreme frequency (death rates of 0.5 to 1 percent per year) of male positional violence and enabling stability.

If we view business as reproduction– of work processes and values, knowledge and relationships– then we find that there are also r- and K-selective business strategies. K-strategist “parents” (bosses) want to have few “children” and invest in them highly, treating them as proteges or advisees. More common are the r-strategic corporations that hire a bunch of people, invest nothing in their careers, and expect only a few to thrive. For concave work, the r-strategic approach was probably the most profitable one, since adding more heads meant pulling in more dollars. But the 21st century is showing us that, for convex work, a K-strategic approach to business expansion is the only way to go.

It is bad that these r-selective companies hire before they trust, and it is also dishonest. When recruiting, companies engage people by telling them a story of what kind of work they’d be doing as trusted real members of the team while failing to state that most new hires will end up in the untrusted, bozo category for arbitrary reasons. It’s when that happens that companies start to evolve credibility black markets, and the panicked trading that transfers power to ethical degenerates sets in. To understand the process behind this, we have to descend yet again, into…

Sixth Circle: Passive aggression 

Tough cultures believe that within-company competition is beneficial and makes people work harder and produce more. They’re wrong. Rank cultures tend toward “harem queen” dynamics as people jockey for managerial favor. That’s bad as well. Dysfunctional companies, and that’s most of them, tend to be marked by passive-aggressive behavior and social competition. The contest for the artificially scarce resources (good projects, managerial attention) that success depends upon, and the fear of ending up in the bozo category, generate patterns of behavior so negative that they ruin a company outright.

This style of degeneracy is hard for managers to detect because, when workers are engaged in it, they appear affable and dedicated. They race against each other to work the longest hours, and take on responsibility to gain power over critical nodes of the company. This is also why it’s critically important to fire unethical high performers, no matter how “indispensable” they seem. Unethical people, unless they are completely devoid (like, bottom 1%) of social skills, will always seem like high performers, and they will always appear indispensable. They develop the skills of shifting blame and taking credit so rapidly that by their early 20s they have more experience in manipulating people (yes, that includes you) than most people have in a lifetime. They always seem too important to get rid of. Don’t fall for that. You can afford to fire an unethical high performer, especially because he’s probably not a legitimate high-achiever; you can’t afford not to.

Social competition is what the truly toxic use in order to get their way. They isolate targets and rivals, and they often take advantage of the false scarcity in work allocation to make sure that the best people get the worst work, driving them out. Clueless middle managers, who take complaints from ladyboy favorites at face value, are typically oblivious to the demoralizing backstabbing that goes on in front of them. They’re just bad judges of character. It always gets me when managers say they infallibly shut down anyone who tries to “play politics” under them; if anyone is visibly playing politics, he is clearly unsuccessful at it, and perhaps he took the blame for someone else’s political plays. Sociopaths, on the other hand, don’t care too much about the character of people working for them either way, so long as those aren’t a personal threat to their goals. Clueless don’t know about all the nasty politics that exists below them; Sociopaths can see it but don’t care.

On the whole, however, people tend to agree that ethical character is important; even Sociopaths don’t want to deal with those who will rob them. Character is far more important than talent. The problem is that it’s very hard to judge a person’s ethical mettle. How does one know what a person would do in extreme circumstances, when such conditions are so rare? That’s where human social dynamics come into play. People assume, often wrongly, that the little betrays the big. In a heterogeneous world, this fails in a major way. If someone pronounces words in a slightly different and characteristic way, that’s called an accent and it’s not a sign of stupidity. If someone can’t work 80 hours per week, that’s not a sign of poor ethical character but an artifact of typical human limits and of health problems that are irrelevant at the 40-hour level. Yet, human social organisms tend to believe, in spite of the ridiculousness of it, that social reliability (the little) betrays true character (the big). Thus, companies tend to attempt to measure ethical character through superficial reliability contests, and the amusing thing there is that, even though they consistently backfire by promoting the bad people who are most used to such contests, corporations (especially tough cultures, where reliability tests are the point) continue to use them.

Winners of reliability contests tend to be the worst people, because the artificial “crunch times”, deadlines, and scarcity push people to their limits and strain their social resources. Psychopaths are naturally adept at manipulating this in order to make sure others faceplant first, leaving them standing. Psychopaths don’t tire in social competition because it’s fun for them to watch everyone else burn out. Management, in general, is not capable of figuring out when this is happening. The psychopath presents himself as a high performer, and colleagues are too scared to tell the truth.

Most of these reliability contests, not by design but through ignorance, are built in such a way that the psychopaths are most adaptive to them. The social competition dynamics of a reality show (e.g. Survivor) and office politics are at least a million years old. Since psychopathy is most likely an individually fit (but socially harmful) r-strategy, it co-evolved with that nonsense. It turns out that “the bad guys” have been hacking our social reliability competitions for a thousand times longer than we’ve had language to describe any of these ideas. 

I’ll take a concrete example, which is the stigma associated with “job hopping”. Why is it there? Employers understand that the most dangerous people are the high-talent unethical ones. They’re right. And job hoppers tend to be high-talent “disloyal” people; at the least, they don’t give loyalty away for free. Unfortunately, since there’s no way to measure ethical character, the rage is taken out on people who have “too many jobs”. Well, through various consulting projects I’ve had access to more data on this matter than most of these HR idiots could see in twelve lifetimes, and I have the answer on that: unethical people don’t hop from job to job, continually subjecting themselves to social change and potential disadvantage. Instead, they most commonly ingratiate themselves with upper management early on, build deep trust over time (since that’s the only way to do it) and, when the opportunity emerges, betray everyone in one fell swoop. Knowing the power that comes with seniority, they’re more prone than the general population to long job tenures. Unethical people tend to have a Doppler Effect in which there’s one perception of them from ahead and above them (that they are affable, subordinate, dedicated) and there is another, much more accurate, view from those below and behind them whom they consider unworthy of impressing.

The best way to avoid taking on a large number of unethical people is not to attract them. It is, in general, impossible to detect them until they’ve done their damage, so the only strategy is passive defense: build a (K-strategist) company that won’t attract them. This ties into trust sparsity. Unethical people love trust-sparse environments, because those mean there is a Bozo-Bit switchboard to be found, played with, and used to gain power. That brings us to the chief accelerant, as well as byproduct, of trust sparsity. We come to the Seventh Circle. Headlong into the flames we go…

Seventh Circle: Extortion

I’ve said my piece about closed allocation being a form of extortion, because the conflict of interest between people and project management forces the employee to serve the manager’s career goals or face isolation, firing, and possible humiliation. “Work for me or you don’t work here.” It’s far from true that managers are the only people guilty of this behavior. Companies have been targeted by morally bankrupt programmers who built defects (“logic bombs” and back doors) into their systems. It’s expensive and humiliating to be extorted, and the emotional scarring can last for a long time.

That extortion leads to distrust should be so obvious that it doesn’t need explanation, so we’ll treat it as self-evident. Some of the “scar tissue” that companies develop is a direct result of previous extortions by employees, management, counterparties, and investors. Some more of it is cargo-cult transplant scar tissue that executives transcribe, without knowing why it exists, from one company to another as they move about. The tendency for companies to evolve toward a “Theory X” distrust of employees– sometimes without knowing why– comes from this replication of other companies’ post-traumatic policies.

Most of the other Circles of Hell tie into this Seventh. What is the benefit that it confers to a psychopath to have access to the “Bozo Bit” switchboard of a trust-sparse company? What, precisely, is most social competition? It is extortion. When a venture capitalist threatens to “pick up a phone” and turn off interest among nominal competitors if you don’t accept an abusive term sheet, what is that? What is the purpose of feudal reputation economies? Oh, right.

What is extortion? When does negotiation, which is unambiguously acceptable, go into black hat territory? I would say that extortion has a few defining characteristics:

  • There is an asymmetric power relationship, usually conferred by social access to people capable of extreme physical or social violence (esp. harm to reputation).
  • The extorter is attracted to the extortee by the latter’s participation in productive activity, in an attempt to draw a share of the profits through coercion. For this reason, the extortee’s success will only draw more extortion.
  • The harm is sometimes presented as a punishment, but is an extreme one and usually in retaliation toward something the extortee has the right to do. In other cases, it’s presented as an offer of “protection” (from oneself, or one’s hired thugs).

Extortion is the epitome of parasitism. It adds nothing to the ecosystem. Rather, it feeds off the profits of the most productive players. Extortion is not the same as blackmail. They’re similar crimes, but there’s a fundamental difference in why each is illegal. Blackmail is illegal because it’s selective, corrupt law enforcement: even if the blackmailer has the right not to report the crime (this differs by jurisdiction) he does not have the right to selectively do so based on a personal payment. Extortion is illegal because it’s a drain against productive activities, as extortionists ratchet up their demands, often putting producers out of business. It is also exceedingly common if not made illegal, and hard to drive out even then.

Is management (in the classic corporate sense) extortion? I’ve made this claim before; can I defend it? Well, let me explain it in detail. Managers ought to have the right to terminate a relationship, just as employees do. In a small company, this would mean the end of employment. But in a large company, should managers have unilateral termination authority? Absolutely not. Do they? From a Clueless perspective, no. From a cynical (and accurate) perspective, they do. Companies rarely afford managers unilateral termination because it’s too much of a lawsuit risk; but they give the manager so much credibility (especially if performance reviews are Enron-style, meaning that they’re part of an employee’s transfer packet in internal mobility) that they can engage in “passive firing” (damage to an employee’s reputation, often deliberate and invisible to the target, that makes him ineligible for internal mobility). Why do they allow this? For the sake of “project expediency”. Companies grant this power to managers out of the misguided belief that the trains simply won’t run on time unless bosses have that power. They can’t grant unilateral termination explicitly (lawsuit risk) so they create mean-spirited performance review systems and passive-firing infrastructures toward the same goal.

How is managerial authority most often used, both in rank and tough cultures? The subordinate employee is coerced into throwing all her weight into the manager’s career goals, with the scraps given to her own. What exactly is that? Again, it’s pure extortion. Companies permit it, because the manager has credibility.

The whole point of a credibility market is to allow extortion in the name of “project expediency”. Does it actually serve that purpose, and improve project success? No. The extreme success of open allocation proves that companies don’t need extortionist managers. While there probably is a need for some of what is called “management” in most companies– for training, direction and guidance only– there is no evidence, anywhere, that a healthy company benefits (except in short-term, existential emergencies that require “my way or the highway” leadership) from this sort of behavior.

One might notice that all of the six Circles above tie into managerial extortion.

  • Opacity gives power to management over employees’ long-term careers, since they have no clue what the actual economic landscape or market climate is, especially if they face an adverse manager (reference problem).
  • Parasitism is the obvious goal of the extortionist, but extortionists find the organization’s fear of parasitism (“low performer” witch hunts) to be an effective tool of aggression.
  • Career incoherency is a result of widespread extortionist managerial culture. It’s the manager’s right to say, “You work for me or for no one here” that forces people to do work of limited or no career value.
  • False scarcity is what encourages people who might object to the extortion, instead, to passively tolerate it. This is the “project expediency” argument; many companies believe (falsely) that nothing will be achieved, and the company will die, without extortionist thugs (using the threat of harsh credibility reductions) to police the bottom.
  • Trust sparsity is the philosophical underpinning of the extortion market. It creates the “Bozo Bit switchboard” which is the holy grail of a psychopath.
  • Passive aggression is enabled by a perverse and intentionally dysfunctional bureaucracy where people can cause harm through inaction. I know someone who was fired because his boss forgot to write a performance review, and the default rating assigned in the no-review case happened, that year, to fall under the 5% mandatory-fire cutoff. Whether this was a case of forgetfulness or passive aggression is beyond my knowledge; I honestly have no idea. But many companies operate on an original sin principle where employees are ruined (no credibility) unless protected by a manager. And what is mandatory “protection”?

It’s extortion that is at our enemy’s heart. That’s the core of corporate evil, at least on the internal front. It must seem that we’re at the bottom, but we don’t yet have full explanatory power over what motivates so goddamn much extortion. What makes people extortive? Is that truly “human nature”, or is it merely human behavior when people are subjected to humiliating false scarcity and nonsensical, dehumanizing processes? Let’s go into that, right now. It’s an ugly place, but we’ve been through plenty of those…

Eighth Circle: Powerlessness

Evil exists, and lawful evil is a defining force of intra-corporate social competition. However, maybe there’s room for compassion: sympathy for the damned. Why, we might permit ourselves to ask, is there so much bad behavior in the corporate context? Is it the stakes? That’s the Theory-X explanation. There’s a lot of money in it; ergo, people steal. I don’t buy that, because work isn’t the highest-stakes thing people do, and there are processes with more importance and less moral corruption. Theory Y’s explanation of bad behavior at work is that it tends to be self-accelerating; people are naturally inclined toward good action, but one bad behavior leads to several more, with the victims often unable to retaliate directly and, therefore, propagating the misery through the company. That’s quite true, but it doesn’t explain the first injection of evil. Where does that come from? Theory Z, being agnostic on the broader moral questions, takes the teamist approach of planting “fire brakes” between teams so that any submodule (person, team, department) that turns toxic can be sloughed off in isolation. (This is why internal transfer is so hard– a fact that extortionist managers love, obviously– in a Theory-Z organization.) Who’s right?

Theory Y is mostly right, but probably only 90% correct. There are first-strike extortionists and thugs out there. They exist. Bad people are real. An organization that can’t defend itself against them will perish. While passive defense (not attracting them) is best, companies do need to keep a watchful eye on this behavior. The problem with this is that extortionists get their first practice on the people the organization cares the least about, and are already well on a roll by the time the negative behavior becomes a visible problem.

Companies tend to get their moral policy utterly backward. They take a Theory-X attitude toward their people in general, imposing restrictions designed to guard the firm against extortive behavior and theft. However, it’s impossible to get anything done in such an environment. That’s why trust-sparsity generates convex dishonesty, even in heroes (“stone soup”) who are forgiven after the fact. The result of this is that trust-sparse companies must make exceptions, and they do this based on million-year-old social protocols originally designed to encourage K-strategists to deny sexual favor to unworthy and unsavory r-strategists. There’s nothing wrong with that. As a product of human evolution, I’m glad that the K-selective machinery exists. But the psychopaths have a million-year track record of hacking it and getting in. They do the exact same thing to trust-sparse companies. Firms need to defend themselves with something else.

Total denial doesn’t work, because companies can’t operate if they never trust anyone; and conditional denial leads to the victory of psychopaths who’ve spent a million years making themselves exceptions to other peoples’ (well-advised) rules.

What we need is to go the other way, to full-on trust density, and release all non-extortive power to employees. The company must grant autonomy and freedom to serve the business goals to everyone, except to those who attempt to steal or extort, whom it must terminate immediately. It turns out that this is a stronger way of policing ethical behavior, because the people on the ground (a) actually care about organizational health, and (b) aren’t afraid to speak up when they see problems.

What’s the alternative? What dominates corporate life for most people? Powerlessness. If trust-sparsity is allowed, then anyone can become an untrusted member of the group, and almost everyone is exposed and afraid. If that’s the case, then all but a few people are in a disempowered and humiliating position. Lord Acton told us that power (of the extortive kind) tends to corrupt, and he’s right. Powerlessness, however, also corrupts.

Power makes bad people evil, and it makes weak people (and that’s a large pool, sadly) bad. Powerlessness, on the other hand, makes good people ineffectual, bad people similarly evil, and weak people both bad and strong. How do I mean that? What could possibly occur through powerlessness that makes the weak strong? Individually, they’re defeated, but they become cheap votes (see above) for the true bad guys to corral and deploy. Since the statistical voting power of a bloc is proportional to the square of its size, and blocs of the weak become substantially more cohesive amid powerlessness and fear, they become strong when under direction from evil. This is why Clueless middle-managers, although few are innately unethical, can easily be misled toward criminal activity.

What is the end game of the corporation when people are powerless? Well, it generates the MacLeod pyramid, and also accelerates its degeneracy. Make people more powerless, and:

  • Losers will become increasingly apathetic, content to draw a salary for no contribution, which is (despite some economists’ claims to the contrary) not a comfortable arrangement for most. The Socially Accepted Median Effort (SAME) will drift toward zero. Even managers will tolerate non-contribution as it becomes clear that no one is able to get real work done, anyway.
  • Clueless develop an awareness of low performance (theirs and others’) and their overactive superegos drive them toward overcompensation. This adds back-and-forth “Brownian management” to the mix. It only adds noise because, while these Clueless are eager, they lack coherent strategic direction.
  • Sociopaths rebel and sabotage operations if they are personally rendered powerless, but they’re also (unlike most) prone toward assessing power in relative terms, so a Sociopath who is macroscopically powerless might still be happy to improve his personal power base by trading on the credibility market. Sociopaths can tolerate macroscopic powerlessness if they can still double their micro-scale power at a sufficient rate. 

This seems to be the end state of dysfunction: powerlessness. Companies fear extortion by employees– as they, perhaps, should, because it happens– but they go so far as to disempower those inclined toward good-faith experimentation and creativity. Then, people lose all reason to care about the health of the organization. Why would anyone care about a company that views her as a bozo? She shouldn’t. She should take what she can and get out once something better is available.

We come to the bottom of the Eighth Circle and find a black hole in the center of the floor. We look into it, and see no bottom. It’s just a chasm. To most people, it would be terrifying. Few of the denizens of the other circles, as miserable as they may be where they are, would dare to enter it, but we’ve come this far. We need to complete our journey, so let’s get on with it. Let’s jump into the hole and find the bottom…

Ninth Circle: ??????????

We start to fall, and in the thin air down here, we continue to accelerate for a long time. It’s not painful, although the air is somewhat hot and it is very loud to be falling this fast. As we get to about a thousand miles an hour, we realize we have time for somewhat of a side conversation. While we drop, it looks like we have time for an aside about religion, and then about economics.

Yes, I said religion. Don’t worry. It will make sense when we get to the bottom, but while we’re careening toward the center of the earth, let’s talk about it.

Westerners (especially detractors of religion) tend to believe that religion exists to allay fears about death, even though religious belief predates faith in any desirable afterlife. (Babylonians, for example, believed in a repulsive afterlife state.) Actually, religion exists to guide people through their fears about life. Religion has had enough on its hands in helping people understand this world. At any rate, I’m most simpatico with Buddhism, so I’m going to discuss the first of the Four Noble Truths. Commonly interpreted as “all life is suffering” (dukkha) it is better interpreted as meaning, “there is suffering in all (samsaric) life”. Rich and poor, animals and humans, and even the traditional gods and demigods (if they exist) of other religions are in a world where dukkha is possible. Shakespeare’s Hamlet referred to “the thousand natural shocks that flesh is heir to”, and he seems to have been discussing the same thing. There’s an overarching theme: life is difficult and there is pain in it. The ancient Greeks blamed a woman named Pandora; Jewish and Christian mythologies involved a snake in the Garden of Eden. Both myths implicated curiosity (philosophically, this could be related to the idea, though rarely formalized this way, that evil exists because we want to know what it is) while the Eastern approaches focus on attachment.

The Western Christian arc evolved a doctrine called original sin, even though it’s not strictly Biblical. I don’t find it to be a useful concept at all. It has been used to justify horrible actions in the past such as the cultural (and sometimes physical) genocide of non-Christian people and, in my mind, it serves no value. It takes the view that we deserve to be punished and are (without supernatural intervention) of negative worth. It’s anti-humanist. I take a more Eastern approach to human nature: there is no such (inflexible) thing. Therefore, there can be no “original sin”. There are ignorance, karma, suffering, and a myriad of biological impulses we have to sort out, but total depravity is nonsensical. It just might be, in fact, one of the worst concepts ever.

Original sin found its way into the “Calvinist work ethic” that outlived actual Calvinism, and has since become a fully secular doctrine. According to original-sin economics, people are devoid of human value unless made productive (“saved”) by a large, successful, and rich institution. Individual people have no credibility, and being on the job market is taken to be so humiliating that a person should fear being there (even though, since companies can fire at-will, everyone is always on the job market). Companies, however, do have credibility. Prestige. Reputation. It’s all the same thing. They can save. Independent individuals, however, are seen as depraved and damnable, especially if they’ve been unemployed for “too long”.

It’s bad enough that we have to deal with this original-sin nonsense in the greater society, which is one reason why the United States will always be thirty years behind Europe when it comes to healthcare and a modern insurance state (they are not as ashamed to use the word “welfare” there). That’s bad enough; it really sucks, actually. However, we even have it within companies. You are not credible unless a member of the (mostly corrupt) priesthood called “management” speaks for you. If you seek another job within the company, your performance reviews are part of the transfer packet, even though that’s almost illegal. (It is not, technically speaking, illegal for companies to include HR history in internal mobility decisions, but it is illegal for managers to interfere malevolently with work performance, which includes internal mobility if such options exist. Therefore, a negative review that is visible in the transfer process is, in fact, already in violation of the law.)

What does this original-sin mentality buy us? It creates an economy where almost everyone is in danger of falling to the bottom, or being excommunicated, and this fear creates an economy of extortion so vast and toxic as to be self-perpetuating. That’s what we’ve got, thanks to our original sin mentality.

The evil isn’t capitalism or communism. It’s much older than that. It’s the belief that most humans are devoid of any value, and require salvation through some process that is actually the whim of a corrupt clerical hierarchy.

I haven’t solved the Organization Problem, and at 12 kilowords already I’ll have to put that in Part 23, but I’ve found it…

We land on the bottom. This is it. The Ninth Circle. Here we are. I light a torch and there’s… nothing here. It’s just a cave. Have we passed through the center of the earth and into the antipodes? It is a comfortable temperature here. It does not look like Hell.

Is this a ruse? No, it doesn’t seem to be. There seems to be nothing Hellish about this spot. Maybe that’s the point.

In fact, I might remind you that you were never in a cave (much less Hell) at all. You’re reading text on a computer screen! If you pictured a descent into Hell, that was your doing (but I hope it didn’t disturb you too much) and no one else’s, because that hell didn’t exist.

So let’s look at this Ninth Circle of hell: there is no there there. So it is, also, with the Corporate Hell. Yes, there are extortions and people are powerless, and there is a common fear of loss of income. This is all nasty stuff. There’s plenty of ugly behavior. The beast is cruel and yet… where is it? Who is it? It’s vapor! It exists because it lives in minds, and because we care– too much, perhaps– about it. Quite possibly, it lives in your mind. It has certainly raged on in my mind. That’s how I know what it is. Now I wish to kill it, on a global scale. I want these extortionists driven back into the shadows from which they came. I have no idea how long or how much work it will take to succeed, but that’s no excuse not to try.

Given the non-substance of our enemy, and the fantastical nature of the Hell that it has created for us, maybe we can. Maybe we can end the cycle of extortions and powerlessness for good. We know that the Emperor has no clothes. We’ve known it for decades. Maybe we, as a generation, can summon the courage to laugh at his tiny balls.

It might not be so clear, and it might take another essay (Part 23, coming up) to show this, but each of these levels taught us something about the organization, and all of them contain subproblems that must be solved. The structure of the solution will mirror, somewhat, the “Inferno” shape of the problem, which has been given here. So that’s what comes next, in part 23. Stay tuned. It won’t be long.

Gervais / MacLeod 21: Why Does Work Suck?

This is a penultimate “breather” post, insofar as it doesn’t present much new material, but summarizes much of what’s in the previous 20 essays. It’s now time to tie everything together and Solve It. This series has reached enough bulk that such an endeavor requires two posts: one to tie it all together (Part 21) and one to discuss solutions (Part 22). Let me try to put the highlights from everything I’ve covered into a coherent whole. That may prove hard to do; I might not succeed. But I will try.

This will be long and repeat a lot of previous material. There are two reasons for that. First, I intend this essay to be a summarization of some highlights from where we’ve been. Second, I want it to stand alone as a “survey course” of the previous 20 essays, so that people can understand the highlights (and, thus, understand what I propose in the conclusion) even if they haven’t read all the prior material.

If I were to restart this series of posts (which I did not originally intend to reach 22 essays and 92+ kilowords) I would rename it Why Does Work Suck? In fact, if I turn this stuff into a book, that’s probably what I’ll name it. I never allowed myself to answer, “because it’s work, duh.” We’re biologically programmed to enjoy working. In fact, most of the things people do in their free time (growing produce, unpaid writing, open-source programming) involve more actual work than their paid jobs. Work is a human need.

How Does Work Suck?

There are a few problems with Work that make it almost unbearable, driving it into such a negative state that people only do it for the lack of other options.

  • Work Sucks because it is inefficient. This is what makes investors and bosses angry. Getting returns on capital either requires managing it, which is time-consuming, or hiring a manager, which means one has to put a lot of trust in this person. Work is also inefficient for average employees (MacLeod Losers) which is why wages are so low.
  • Work Sucks because bad people end up in charge. Whether most of them are legitimately morally bad is open to debate, but they’re certainly a ruthless and improperly balanced set of people (MacLeod Sociopath) who can be trusted to enforce corporate statism. Over time, this produces a leadership caste that is great at maintaining power internally but incapable of driving the company to external success.
  • Work Sucks because of a lack of trust. That’s true on all sides. People are spending 8+ hours per day on high-stakes social gambling while surrounded by people they distrust, and who distrust them back.
  • Work Sucks because so much of what’s to be done is unrewarding and pointless. People are glad to do work that’s interesting to them or advances their knowledge, or work that’s essential to the business because of career benefits, but there’s a lot of Fourth Quadrant work for which neither applies. This nonsensical junk work is generated by strategically blind (MacLeod Clueless) middle managers and executed by rationally disengaged peons (MacLeod Losers) who find it easier to subordinate than to question the obviously bad planning and direction.

All of these, in truth, are the same problem. The lack of trust creates the inefficiencies that require moral flexibility (convex deception) for a person to overcome. In a trust-sparse environment, the people who gain trust are the least deserving of it: the most successful liars. It’s also the lack of trust that generates the unrewarding work. Employees are subjected, in most companies, to a years-long dues-paying period which is mostly evaluative– to see how each handles unpleasant make-work and pick out the “team players”. The “job” exists to give the employer an out-of-the-money call option on legitimately important work, should it need some done. It’s a devastatingly bad system, so why does it hold up? Because, for two hundred years, it actually worked quite well. Explaining that requires delving into mathematics, so here we go.

Love the Logistic

The most important concept here is the S-shaped logistic function, which looks like this (courtesy of Wolfram Alpha):

The general form of such a function L(x; A, B, C) is:

$$ L(x; A, B, C) = \frac{A}{1 + e^{-B(x - C)}} $$

where A represents the upper asymptote (“maximum potential”), B represents the rapidity of the change, and C is a horizontal offset (“difficulty”) representing the x-coordinate of the inflection point. The graph above is for L(x; 1, 1, 0).
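
For readers who want to experiment with these curves, here is a minimal Python sketch of the logistic form just given; the function name and the sample points are my own choices, not anything from the original figures.

```python
import math

def logistic(x, A=1.0, B=1.0, C=0.0):
    """S-shaped curve: tends toward 0 far to the left of C and toward A far
    to the right of C. B sets how sharp the transition is; the inflection
    point sits at x = C, where the value is exactly A / 2."""
    return A / (1.0 + math.exp(-B * (x - C)))

# The curve described above, L(x; 1, 1, 0):
print(logistic(0.0))    # 0.5    -- the inflection point (C, A/2)
print(logistic(-6.0))   # ~0.0025 -- left tail, near 0
print(logistic(6.0))    # ~0.9975 -- right tail, near the asymptote A = 1
```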

Logistic functions are how economists generally model input-output relationships, such as the relationship between wages and productivity. They’re surprisingly useful because they can capture a wide variety of mathematical phenomena, such as:

  • Linear relationships: as B -> 0, the relationship becomes locally linear around the inflection point, (C, A/2).
  • Discrete 0/1 relationships: as B -> infinity, the function approaches a “step function” whose value is A for x > C and 0 for x < C.
  • Exponential (accelerating) growth: If B > 0, L(x; A, B, C) is very close to being exponential at the far left (x << C). (Convexity.)
  • Saturation: If B > 0, L(x; A, B, C) is approaching A with exponential decay at the far right (x >> C). (Concavity.)

Let’s keep inputs abstract but assume that we’re interested in some combination of skill, talent, effort, morale and knowledge called x with mean 0 and “typical values” between -1.0 and 1.0, meaning that we’re not especially interested in x = 10 because we don’t know how to get there. If C is large (e.g. C = 6) then we have an exponential function for all the values we care about: convexity over the entire window. Likewise, leftward C values (e.g. C = -6) give us concavity over the whole window.
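
As a quick numeric illustration (my own check, not anything from the original post): the sign of the second differences of the sampled curve over the window [-1.0, 1.0] confirms which regime a given C puts us in, and a large B shows the step-function limit from the list above.

```python
import math

def logistic(x, A, B, C):
    return A / (1.0 + math.exp(-B * (x - C)))

def curvature_over_window(C, lo=-1.0, hi=1.0, n=201):
    # Sample L(x; 1, 1, C) over the window of "typical values" and inspect the
    # second differences: all positive means convex, all negative means concave.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [logistic(x, 1.0, 1.0, C) for x in xs]
    d2 = [ys[i - 1] - 2 * ys[i] + ys[i + 1] for i in range(1, n - 1)]
    if all(d > 0 for d in d2):
        return "convex"
    if all(d < 0 for d in d2):
        return "concave"
    return "mixed"

print(curvature_over_window(6.0))    # convex: the window sits on the far-left tail
print(curvature_over_window(-6.0))   # concave: the window sits on the far-right tail
print(curvature_over_window(0.0))    # mixed: the inflection point is inside the window

# The large-B limit behaves like a step function around x = C:
print(logistic(0.9, 1.0, 50.0, 0.0))   # ~1.0 for x > C
print(logistic(-0.9, 1.0, 50.0, 0.0))  # ~0.0 for x < C
```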

Industrial work, over the past 200 years, has tended toward commoditization, meaning that (a) a yes/no quality standard exists, increasing B, and (b) it’s relatively easy for most properly set-up producers to meet it most of the time (with occasional error). The result is a curve that looks like this one, L(x; 10, 4.5, -0.7), which I’ll call a(x):

Variation, here, is mainly in incompetence. Another way to look at it is in terms of error rate. The excellent workers make almost no errors, the average ones achieve 95.8% of what is possible (or a 4.2% error rate) with the mediocre (x = -0.5) making roughly 7 times as many mistakes (28.9% error rate), and the abysmal unemployable with an error rate well over 50%. This is what employment has looked like for the past two hundred years. Why? Because an industrial process is better modeled as a complex network of these functions, with outputs from one being inputs to another. The relationships of individual wage to morale, morale to performance, performance to individual productivity, individual productivity to firm productivity, and firm productivity to profitability can all be modeled as S-shaped curves. With this convoluted network of “hidden nodes” that exists in a context of a sophisticated industrial operation, it’s generally held to be better to have a consistently high-performing (B high, C negative) node than a higher-performing but variable node.
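
As a sanity check on those percentages, here’s a short computation (mine, using the logistic form given earlier) that reproduces the quoted error rates to within rounding:

```python
import math

def logistic(x, A, B, C):
    return A / (1.0 + math.exp(-B * (x - C)))

A = 10.0  # the error-free yield for a(x) = L(x; 10, 4.5, -0.7)

def error_rate(x):
    # Fraction of the possible output that is lost at skill level x.
    return 1.0 - logistic(x, A, 4.5, -0.7) / A

print(round(100 * error_rate(0.5), 1))   # ~0.4%: excellent worker, almost no errors
print(round(100 * error_rate(0.0), 1))   # ~4.1%: average worker (the text quotes ~4.2%)
print(round(100 * error_rate(-0.5), 1))  # ~28.9%: mediocre worker
print(round(100 * error_rate(-1.0), 1))  # ~79.4%: abysmal, well over 50%
```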

One way to understand the B in the above equation is that it represents how reliably the same result is achieved, noting the convergence to a step function as B goes to infinity. In this light, we can understand mechanization. Middle grades of work rarely exist with machines. In the ideal, they either execute perfectly, or fail perfectly (and visibly, so one can repair them). Further refinements to this process are seen in the changeover from purely mechanical systems to electronic ones. It’s not always this way, even with software. There are nondeterministic computer behaviors that can produce intermittent bugs, but they’re rare and far from the ideal.

As I’ve discussed, if we can define perfect performance (i.e. we know what A, the error-free yield, looks like) then we can program a machine to achieve it. Concave work is being handed over to machines, with the convex tasks remaining available. With convexity, it’s rare that one knows what A and B are. On explored values, the graph just looks like this one, for L(x; 200, 2.0, 1.5), which I’ll call b(x):

It shows no signs of leveling off and, for all intents and purposes, it’s exponential. This is usually observed for creative work where a few major players (the “stars”) get outsized rewards in comparison to the average people.

Convexity Isn’t Fair

Let’s say that you have two employees, one of whom (Alice) is slightly above average (x = 0.1) and the other of whom (Bob) is just average (x = 0.0). You have the resources to provide 1.0 full point of training, and you can split it any way you choose (e.g. 0.35 points for Alice, and 0.65 points for Bob). Now, let’s say that you’re managing concave work, modeled by the function L(x; 100, 2.0, -0.3).

Let the x-axis represent the amount of training (0.0 to 1.0) given to Alice, with the remainder given to Bob. Here’s a graph of their individual productivity levels, with Alice in blue, Bob in purple, and their sum productivity in the green curve.

If we zoom in to look at the sum curve, we see a maximum at x = 0.45, an interior solution where both get some training.

At x = 0.0 (full investment in Bob) Alice is producing 69.0 points and Bob’s producing 93.1, for a total of 162.1.

At x = 0.5 (even split of training) Alice is producing 85.8 points and Bob’s producing 83.2, for a total of 169.0.

At x = 1.0 (full investment in Alice) Alice is producing 94.3 points and Bob’s producing 64.6, for a total of 158.9.

The maximal point is x = 0.45, which means that Alice gets slightly less training because Bob is further behind and needs it more. Both end up producing 84.55 points, for a total of 169.1. After the training is disbursed, they’re at the same level of competence (0.55). This is a “share the wealth” interior optimum that justifies sharing the training.

Let’s change to a convex world, with the function L(x; 320, 2.0, 1.1). Then, for the same problem, we get this graph (blue representing Alice’s productivity, purple representing Bob’s, and the green curve representing the sum):

Zooming in on the graph of sum productivity, we find that the “fair” solution (x = 0.45) is the worst!

At x = 0.0 (full investment in Bob) Alice is producing 38.1 points and Bob’s producing 144.1, for a total of 182.2.

At x = 0.5 (even split of training) Alice is producing 86.1 points and Bob’s producing 74.1, for a total of 160.2.

At x = 1.0 (full investment in Alice) Alice is producing 160.0 points and Bob’s producing 31.9, for a total of 191.9.

The maxima are at the edges. The best strategy is to give Alice all of the training, but giving all to Bob is better than splitting it evenly, which is about the worst of the options. This is a “starve the poor” optimum. It favors picking a winner and putting all the investment into one party. This is how celebrity economies work. Slight differences in ability lead to massive differences in investment and, ultimately, create a permanent class of winners. Here, choosing a winner is often more important than getting “the right one” with the most potential.
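
Both cases are easy to check directly. This sketch sweeps the training split for the concave curve L(x; 100, 2.0, -0.3) and the convex curve L(x; 320, 2.0, 1.1), with Alice starting at 0.1 and Bob at 0.0:

    import math

    def L(x, A, B, C):
        return A / (1 + math.exp(-B * (x - C)))

    def sweep(A, B, C):
        # x points of the 1.0 training budget go to Alice, the rest to Bob.
        return [(round(i * 0.05, 2), L(0.1 + i * 0.05, A, B, C) + L(1.0 - i * 0.05, A, B, C))
                for i in range(21)]

    concave = sweep(100, 2.0, -0.3)
    convex = sweep(320, 2.0, 1.1)
    print(max(concave, key=lambda p: p[1]))  # (0.45, ~169.1): the interior optimum
    print(max(convex, key=lambda p: p[1]))   # (1.0, ~191.9): all training to Alice
    print(min(convex, key=lambda p: p[1]))   # (0.45, ~159.8): the even-ish split is worst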

Convexity pertains to decisions that don’t admit interior maxima, or for which interior solutions don’t make sense. For example, choosing a business model for a new company is convex, because putting resources into multiple models would result in mediocre performance in all of them, thus failure. The rarity of “co-CEOs” seems to indicate that choosing a leader is also a convex matter.

Convexity is hard to manage

In optimization, convex problems tend to be the easier ones, so the nomenclature here might be strange. In fact, this variety of convexity is the exact opposite of convexity in labor. Optimization problems are usually framed in terms of minimizing some undesirable quantity like cost, financial risk, statistical error, or defect rate. Zero is the (usually unattainable) perfect state. In business, that corresponds to the assumption that an industrial apparatus has an idealized business model and process, with management’s goal being to drive execution error to zero.

What makes convex minimization methods easier is that, even in a high-dimensional landscape, one can converge to the optimal point (global minimum) by starting from anywhere and iteratively stepping in the direction recommended by local features (usually, first and second derivative). It’s like finding the bottom point in a bowl. Non-convex optimizations are a lot harder because (a) there can be multiple local optima, which means that starting points matter, and (b) the local optima might be at the edges, which has its own undesirable properties (including, with people, unfairness). The amount of work required to find the best solutions is exponential in the number of dimensions. That’s why, for example, computers can’t algorithmically find the best business model for a “startup generator”. Even if it were a well-formed problem, the dimensionality would be high and the search problem intractable (probably).
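
As a toy illustration of that difference (a generic optimization sketch, not tied to the labor model above): gradient descent on a convex curve reaches the same minimum from any starting point, while a non-convex curve traps it in whichever local minimum is nearest.

    def descend(grad, x, lr=0.01, steps=2000):
        # Naive gradient descent: repeatedly step against the local slope.
        for _ in range(steps):
            x -= lr * grad(x)
        return round(x, 3)

    convex_grad = lambda x: 2 * x              # f(x) = x^2: a single minimum at 0
    bumpy_grad = lambda x: 4 * x**3 - 8 * x    # f(x) = x^4 - 4x^2: minima at +/- sqrt(2)

    print([descend(convex_grad, x0) for x0 in (-3.0, 0.5, 3.0)])  # all converge to ~0
    print([descend(bumpy_grad, x0) for x0 in (-3.0, 0.5, 3.0)])   # -1.414, 1.414, 1.414: the start decides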

Convex labor is analogous to non-convex optimization problems while management of concave labor is analogous to convex optimization. Sorry if this is confusing. There’s an important semantic difference to highlight here, though. With concave labor, there is some definition of perfect completion so that error (departure from that) can be defined and minimized with a known lower bound: 0. With convex labor, no one knows what the maximum value is, because the territory is unexplored and the “leveling off” of the logistic curve hasn’t been found yet. It’s natural, then, to frame that as a maximization problem without a known bound. With convex labor, you don’t know what the “zero-or-max” point is because no one knows how well one can perform.

Concave labor is the easy, nice case from a managerial perspective. While management doesn’t literally implement gradient descent, it tends to be able to self-correct when individual labor is concave (i.e. the optimization problem is convex). If Alice starts to pull ahead while Bob struggles, management will offer more training to Bob.

However, in the convex world, initial conditions matter. Consider the Alice-Bob problem above with the convex productivity curve, and the fact that splitting the training equitably is the worst possible solution. Management would ideally recognize Alice’s slight superiority and give her all the training, thus finding the optimal “edge case”. But what if Bob managed (convex dishonesty) to convince management that he was slightly superior to Alice, at, say, x = 0.2? Then Bob would get all the training, Alice would get none, and management would converge on a sub-optimal local maximum. That is the essence of corporate backstabbing, is it not? Management’s increasing awareness of convexity in intellectual work means that it will tend to double down on its investment in winners and toss away (fire) the losers. Thus, subordinates put considerable effort into creating the appearance of high potential, for the sake of driving management to a local maximum that, if not necessarily ideal for the company, benefits them. That’s what “multiple local optima” means, in practical terms.

The traditional three-tiered corporation has a firm distinction between executives and managers (the third tier being “workers”, who are treated as a landscape feature), and that distinction pertains to this. Because business problems are never entirely concave and orderly, the local “hill climbing” is left to managers, while the convex problems (which, like choosing initial conditions, require non-local insight) such as selecting leaders and business models are left to executives.

Yet with everything concave being performed, or soon to be performed, by machines, we’re seeing convexity pop up everywhere. The question of which programming languages to learn is a convex decision that non-managerial software engineers have to make in their careers. Picking a specialty is likewise; convexity is why it’s of value to specialize. The most talented people today are becoming self-executive, which means that they take responsibility for non-local matters that would otherwise be left to executives, including the direction of their own career. This, however, leads to conflicts with authority.

Older managers often complain about Millennial self-executivity and call it an attitude of entitlement. Actually, it’s the opposite. It’s disentitlement. When you’re entitled, you assume social contracts with other people and become angry when (from your perception) they don’t hold up their end. Millennials leave jobs, and furtively use slow periods to invest in their careers (e.g. in MOOCs) rather than asking for more work. That’s not an act of aggression or disillusion; it’s because they don’t believe the social contract ever existed. It’s not that they’re going to whine about a boss who doesn’t invest in their career– that would be entitlement– because that would do no good. They just leave. They weren’t owed anything, and they don’t owe anything. That’s disentitlement.

Convexity is bad for your job security

Here’s some scary news. When it comes to convex labor, most people shouldn’t be employed. First, let me show a concave input-output graph for worker productivity, assuming worker ability is uniformly distributed from -1.0 to 1.0. Our model also assumes this ability statistic to be inflexible; there’s no training effect.

The blue line, at 82.44, represents the mean worker in the population. Why’s this important? It represents the expected productivity of a new hire off the street. If you’re at the median (x = 0.0), or even a bit below it, you are “above average”: it’s better to retain you than to bring someone in off the street. Let’s say that John is a 40th-percentile (x = -0.2) hire, which means that his productivity is about 90. A random person hired off the street will be better than John 60% of the time. However, the upside is limited (10 points at most) and the downside (possibly 70 points) is immense, so, on average, it’s a terrible trade. It’s better to keep John (a known mediocre worker) on board than to replace him.
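
The post doesn’t state this curve’s parameters, but the figures quoted above (a mean of 82.44, John at about 90, a roughly 10-point upside and a roughly 70-point downside) are consistent with L(x; 100, 4.5, -0.7), so here’s a sketch under that assumption:

    import math

    def L(x, A, B, C):
        return A / (1 + math.exp(-B * (x - C)))

    # Assumed curve (inferred, not stated in the text): L(x; 100, 4.5, -0.7),
    # with worker ability x uniform on [-1, 1].
    N = 100_000
    mean = sum(L(-1 + 2 * i / (N - 1), 100, 4.5, -0.7) for i in range(N)) / N
    print(round(mean, 2))                      # ~82.44: the expected hire off the street
    print(round(L(-0.2, 100, 4.5, -0.7), 1))   # ~90.5: John, the 40th-percentile worker
    print(round(L(1.0, 100, 4.5, -0.7), 1))    # ~99.9: best case, ~10 points above John
    print(round(L(-1.0, 100, 4.5, -0.7), 1))   # ~20.6: worst case, ~70 points below John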

With a convex example, we find the opposite to be true:

Here, we have an arrangement in which most people are below the mean, so we’d expect high turnover. Management, one expects, would be inclined to hire people on a “try out” basis with the intention of throwing most of them back on the street. An average or even good (x = 0.5) hire should be thrown out in order to “roll the dice” with a new hire who might be the next star. Is that how managers actually behave? No, because there are frictional and morale reasons not to fire 80% of your people, and because this model’s assumption that people are inflexibly set at a competence level is not entirely true for most jobs, and those where it is true (e.g. fashion modeling) make it easy for management to evaluate someone before a hire is made. In-house experience matters. That is, however, how venture capital, publishing and record labels work. Once you turn out a couple failures, with those being the norm, it might still be that you’re a high performer who’s been unlucky, but you’re judged inferior to a random new entrant (with more upside potential) and flushed out of the system.

In the real world, it’s not so severe. We don’t see 80% of people being fired, and the reason is that, for most jobs, learning matters. The above applies to work in which there’s no learning process and each worker sits inflexibly at a certain, perfectly measurable productivity level. That’s not how the world really works. In-born talent is one relevant input, but there are others, like skill, in-house experience, and education, that have defensive properties and protect a person’s job security. People can often get themselves above the mean with hard work.

Secondly, the model above assumes workers are paid equally, which is not the case for most convex work. In the convex model above, the star (x = 1.0) might command several times the salary of the average performer (x = 0.0), and he should. That compensation inequality actually creates job security for the rest of them. If the best people didn’t charge more for their work, then employers would be inclined to fire middling performers in search of a bargain.

This may be one of the reasons why there is such high turnover in the software industry. You can’t get a seasoned options trader for under $250,000 per year, but you can get excellent programmers (who are worth 5-10 times that amount, if given the right kind of work) for less than half of that. This is often individually justified (by the engineer) with an attitude of, “well, I don’t need to be paid millions; I care more about interesting work”. As an individual behavior, that’s fine, but it might be why so many software employers are so quick to toss engineers aside for dubious reasons. Once the manager concludes that the individual doesn’t have “star” potential, it’s worth it to throw out even a good engineer and try again for a shot at a bargain, considering the number of great engineers at mediocre salary levels.

One thing I’ve noticed in software (which is highly convex) is that there’s a cavalier attitude toward firing, and it’s almost certainly related to that “star economy” effect. What’s different is that software convexity has a lot of inputs other than personal ability– project/person fit, tool familiarity, team cohesion, and a lot of factors so hard to detect that they feel like pure luck– in the mix, so the “toss aside all but the best” strategy is severely defective, at least for a larger organization, which should instead be enabling people to find better-fitting projects; that makes a lot of sense amid convexity. That’s one of the reasons why I am so dogmatic about open allocation, at least in big companies.

Convexity is risky

Job insecurity amid convexity is an obvious problem, but not a damning one. If there’s a fixed demand for widgets, a competitor who can produce 10 times more of them is terrifying, because it will crash prices and put everyone else out of business (and, then, become a monopolist and raise them). Call that “red ocean convexity”, where the winners put the losers out of business because a “10X” performer takes 9X from someone else. However, if demand is limitless, then the presence of superior players isn’t always a bad thing. A movie star making $3 million isn’t ruined by one making $40 million. The arts are an example of “blue ocean convexity”, insofar as successful artists don’t make the others poorer, but increase the aggregate demand for art. It’s not “winner-take-all”, insofar as one doesn’t have to be the top player to add something people value.

Computational problem solving (not “programming”) is a field where there’s very high demand, so the fact that top performers will produce an order of magnitude more value (the “10X effect”) doesn’t put the rest out of business. That’s a very good thing, because most of those top performers were among “the rest” when they started their career. Not only is there little direct competition, but as software engineers, we tend to admire those “10X” people and take every opportunity we can get to learn from them. If there were more of them, it wouldn’t make us poorer. It would make the world richer.

Is demand for anything limitless, though? For industrial products, no. Demand for televisions, for example, is limited by people’s need for them and space to put them. For making people’s lives better, yes. For improving processes, sure. Generation of true wealth (as Paul Graham defines it: “stuff people want”) is something for which there’s infinite demand, at least as far as we can see. So what’s the limiting factor? Why can’t everyone work on blue-ocean convex work that makes people’s lives better? It comes down to risk. So, let’s look at that. The model I’m going to use is as follows:

  • We only care about the immediate neighborhood of a specific (“typical”) competence level. We’ll call it x = 0.
  • Tasks have a difficulty t between -1.0 and 2.0, which represents the C in the logistic form. B is going to be a constant 4.5; just ignore that. 
  • The harder a task is, the higher the potential payoff. Thus, I’ll set A = 100 * (1 + e^(5*t)). This means that work gets more valuable slightly faster (11% faster) than it gets harder (“risk premium”). The constant term in A is based on the understanding that even very easy (difficulty of -1.0) work has value insofar as it’s time-consuming and therefore people must be paid to do it.
  • We measure risk for a given difficulty t by taking the first derivative of L(x; …), with respect to x, at x = 0. Why? L’(x; …) tells us how sensitive the output (payoff) is to marginal changes in input. We’re modeling unknown input variables and plain luck factors as a random, zero-mean “noise” variable d and assuming that for known competence x the true performance will be L(x + d; …). So this first derivative tells us, at x = 0, how sensitive we are to that unknown noise factor.
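
Here’s a minimal Python sketch of this model: the yield L(0; A, 4.5, t), the risk dL/dx at x = 0, and their (un-normalized) ratio for a few difficulty levels:

    import math

    def yield_and_risk(t, B=4.5):
        A = 100 * (1 + math.exp(5 * t))                            # payoff ceiling for difficulty t
        y = A / (1 + math.exp(B * t))                              # yield: L(0; A, B, t)
        r = A * B * math.exp(B * t) / (1 + math.exp(B * t)) ** 2   # risk: dL/dx at x = 0
        return y, r

    for t in (-1.0, -0.5, 0.0, 1.0, 2.0):
        y, r = yield_and_risk(t)
        print(t, round(y, 1), round(r, 1), round(y / r, 2))
    # Yield is roughly flat (with a slight dip) for t in [-1, 0], then grows rapidly;
    # risk rises monotonically with t; the yield/risk ratio decays toward 1/B.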

What we want to do is assess the yield (expected value) and risk (that first derivative with respect to x) for difficulty levels from -1 to 2, holding known competence at x = 0. Here’s a graph of expected yield:

It’s hard to notice on that graph, but there’s actually a slight “dip” or “uncanny valley” as one goes from the extreme of easiness (t = -1.0) to slightly harder (-1.0 < t < 0.0) work:

Does it actually work that way in the real world? I have no idea. What causes this in the model is that, as we go from the ridiculously easy (t = -1.0) to the merely moderately easy (t = -0.5), the rate of potential failure grows faster than the maximum potential A does, as a function of t. That’s an artifact of how I modeled this and I don’t know for sure that a real-world market would have this trait. Actually, I doubt it would. It’s a small dip so I’m not going to worry about it. What we do see is that our yield is approximately constant as a function of difficulty for t from -1.0 to 0.0, where the work is concave for that level of skill; and then it grows exponentially as a function of t from 0.0 to 2.0, where the work is convex. That is what we tend to see on markets. The maximal market value of work (1 + e^(5*t), in this model) grows slightly faster than difficulty in completing it (1 + e^(4.5*t), here).

However, what we’re interested in is risk, so let me show that as well by graphing the first derivative of L with respect to x (not t!) for each t.

What this shows us, pretty clearly, is monotonic risk increase as the tasks become more difficult. That’s probably not too surprising, but it’s nice to see what it looks like on paper. Notice that the easy work has almost no risk involved. Let’s plot these together. I’ve taken the liberty of normalizing the risk formula (in purple) to plot them together, which is reasonable because our units are abstract:

Let’s look at one other statistic, which will be the ratio between yield and risk; in finance, the analogous quantity is called the Sharpe ratio. Because the units are abstract (i.e. there’s no real meaning to “1 unit” of competence or difficulty), there is no intrinsic meaning to its scale, and therefore I’ve again taken the liberty of normalizing it as well. That ratio, as a function of task difficulty, looks like this…

…which looks exactly like affine exponential decay. In fact, that’s what it is. The Sharpe Ratio is exponentially favorable for easy work (t < 0.0) and approaches a constant value (1.0 here, because of the normalization) for large t.

What’s the meaning of all this? Well, traditionally, the industrial problem was to maximize yield on capital within a finite “risk budget”. If that’s the case– you’re constrained by some finite amount of risk– then you want to select work according to the Sharpe ratio. Concave tasks might have less yield, but they’re so low in risk that you can do more of them. For each quantum of risk in your budget, you want to get the most yield (expected value) out of it that you can. This favors extremely concave labor. This is why industrial labor, for the past 200 years, has been almost all concave. Boring. Reliable. In many ways, the world still is concave, and that’s a desirable thing. Good enough is good enough. However, it just so happens that when we, as humans, master a concave task, we tend to look for the convex challenge of making it run itself. In pre-technological times, this was done by giving instructions to other people, and by making machines as easy as possible for humans to use. In the technological era, it’s done with computers and code. Even the grunt work of coding is given to programs (we call them compilers) so we can focus on the interesting stuff. We’re programming all of that concave work out of human hands. Yes, concave work is still the backbone of the industrial world and always will be. It’s just not going to require humans doing it.

What if, instead, the risk budget weren’t an issue? Let’s say that we have a team of 5 programmers given a year to do whatever they want, and the worst they can do is waste their time, and you’re okay with that maximal-risk outcome (5 annual salaries for a learning experience). They might build something amazing that sells for $100 million, or they might work for a year and have the project still fail on the market. Maybe they do great work, but no one wants it; that’s a risk of creation. In this case, we’re not constrained by risk allocation but by talent. We’ve already accepted the worst possible outcome as acceptable. We want them to be doing convex work, which has the highest yield. Those top-notch people are the limiting resource, not risk allocation.

Convexity requires teamwork

Above, I established that if individual productivity is a convex function of investment in that person, and group performance is a sum of individual productivity, then the optimal solution is to ply one person with resources and starve (and likely fire) the rest. Is that how things actually work? No, not usually. There’s a glaring false assumption, which is the additive model where group performance is a simple sum of individual performances. Real team efforts shouldn’t work that way.

When a team is properly configured, most of their efforts don’t merely add to some pile of assets; they multiply each other’s productivity. Each works to make the others more successful. I wrote about this advancement of technical maturity (from adder to multiplier) as it pertains to software, but I think it’s more general. Warning: incompetent attempts at multiplier efforts are every bit as toxic as incompetent management and will have a divider effect.

Team convexity is unusual in that both sides of the logistic “S-curve” are observed. You have synergy (convexity) as the team scales up to a certain size, but congestion (concavity) beyond a certain point. It’s very hard to get team size and configuration right, and typical “Theory Z” management (which attempts to coerce a heterogeneous set of people who didn’t choose each other, and probably didn’t choose the project, into being a team) generally fails at this. It can’t be managed competently from a top-down perspective, despite what many executives say (they are wrong). It has to be grass-roots self-organization. Top-down, closed-allocation management can work well in the Alice/Bob models above, where productivity is the sum of individual performances (i.e. team synergies aren’t important), but it fails catastrophically on projects that require interactive, multiplicative effects in order to be successful.

Convexity has different rules

The technological economy is going to be very different, because of the way business problems are formulated. In the industrial economy, capital was held in some fixed amount by a business, whose goal was to gain as much yield (profit or interest) from it while keeping risk within certain bounds deemed acceptable. That made concavity desirable. It still is; stable income with low variation is always a good thing. It’s just that such work no longer requires humans. Concave work has been so commoditized that it’s hard to get a passive profit from it.

Ultimately, I think a basic income is the only way society will be able to handle widespread convexity of individual labor. What does it say about the future? People will either be very highly compensated, or effectively unemployed. There will be an increasing need for unpaid learning while people push themselves from the low, flat region of a convex curve to the high, steep part. Right now, we have a society where people with the means to indulge in that can put themselves on a strong career track, but the majority who have a lifelong need for monthly income end up getting shafted: they become a permanent class of unskilled labor and, by keeping wages low, they actually hold back technological advancement.

Industrial management was risk-reductive. A manager took ownership of some process and his job was to look for ways it could fail, then tried to reduce the sources of error in that process. The rare convex task (choosing a business strategy) was for a higher order of being, an executive. Technological management has to embrace risk, because all the concave work’s being taken by machines. In the future, it will only be economical for a human to do something when perfect completion is unknown or undefinable, and that’s the convex work.

A couple more graphs deserve attention, because both pertain to managerial goals. There are two ways that a manager can create a profit. One is to improve output. The other is to reduce costs. Which is favorable? It depends. Below is a graph that shows productivity ($/hour) as a function of wages for some task where performance is assumed to be convex in wages. The relationship is assumed here to be inflexible and to go both ways: better people will expect higher wages, and low wages will cause people’s out-of-work distractions to degrade their performance. Plotted in purple is the y = x or “break-even” line.

As one can see, it doesn’t even make sense to hire people for this kind of work at less than $68/hour: they’ll produce less than they cost. That “dip” is an inherent problem for convex work. Who’s going to pay people in the $50/hour range so they can become good and eventually move to the $100/hour range (where they’re producing $200/hour work)? This naturally tends toward a “winners and losers” scenario. The people who can quickly get themselves to the $70/hour productivity level (through the unpaid acquisition of skill) are employable, and will continue to grow; the rest will not be able to justify wages that sustain them. The short version: it’s hard to get into convex work.

Here’s a similar graph for concave work:

… and here’s a graph of the difference between productivity and wage, or per-hour profit, on each worker:

So the optimal profit is achieved at $24.45 per hour, where the worker provides $56.33 worth of work in that time. It doesn’t seem fair, but improvements to wages beyond that, while they improve productivity, do not improve it by enough to justify the additional cost. That’s not to say that companies will necessarily set wages to that level. (They might raise them higher to attract more workers, increasing total profit.) Also, here is a case where labor unions can be powerful (they aren’t especially helpful with convex work): in the above, the company would still earn a respectable profit on each worker with wages as high as $55 per hour, and wouldn’t be put out of business (despite management’s claim that “you’ll break us” at, say, $40) until almost $80.
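
The concave curve’s parameters aren’t given either, but the figures in this paragraph (a profit-maximizing wage of $24.45 yielding $56.33 of output, a healthy margin at $55, and break-even just under $80) are consistent with hourly productivity P(w) = L(w; 80, 0.06, 10), so here’s a sketch under that assumption:

    import math

    # Assumed curve (inferred, not stated in the text): hourly productivity as a
    # function of the hourly wage w.
    def P(w):
        return 80 / (1 + math.exp(-0.06 * (w - 10)))

    wages = [w / 100 for w in range(0, 8001)]        # $0.00 to $80.00, in one-cent steps
    best = max(wages, key=lambda w: P(w) - w)        # wage with the largest per-hour profit
    breakeven = min(w for w in wages if P(w) <= w)   # wage beyond which the worker costs more than he produces
    print(round(best, 2), round(P(best), 2))         # ~24.45, ~56.33
    print(round(P(55) - 55, 2))                      # ~19.96: still a respectable profit at $55
    print(round(breakeven, 2))                       # ~78.7: losses only begin near $80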

The tendency of corporate management toward cost-cutting, “always say no”, and Theory-X practices is an artifact of that result of concavity. So while I can argue that “convexity is unfair”, insofar as it encourages inequality of investment and resources and lets small differences in initial conditions produce a winner-take-all outcome, concavity produces its own variety of unfairness: it often pushes wages down to a very low level, where employers take a massive surplus.

The most important problem…?

That was a lot about convexity, but I believe the changeover to convexity in individual labor is the most important economic issue of the 21st century. So if we want to understand why the contemporary, MacLeod-hierarchical organization won’t survive it, we need a deep understanding of what convexity is and how it works. I think we have that, now.

What does this have to do with Work Sucking? Well, there are a few things we get out of it. First, for the concave work that most of the labor force is still doing…

  • Concave (“commodity”) labor leads to grossly unfair wages. This creates a natural adversity between workers and management on the issue of wage levels. 
  • Management has a natural desire to reduce risk and cut costs, on an assumption of concavity. It’s what they’ve been doing for over 200 years. When you manage concave work, that’s the most profitable thing to do.
  • Management will often take a convex endeavor (e.g. computer programming) and try to treat it as concave. That’s what we, in software, call the “commodity developer” culture that clueless software managers try to shove down hapless engineers’ throats.
  • Stable, concave work is disappearing. Machines are taking it over. This isn’t a bad thing (on the contrary, it’s quite good) but it is eroding the semi-skilled labor base that gave the developed world a large middle class.

Now, for the convex:

  • Convex work favors low employment and volatile compensation. It’s not true that there “isn’t a lot of convex work” to go around. In fact, there’s a limitless amount of demand for it. However, one has to be unusually good for a company to justify paying for it at a level one could live on, because of the risk. Without a basic income in place, convexity will generate an economy where income volatility is at a level beyond what people are able to accept. As a firm believer in the need for market economies, I think this must be addressed.
  • Convex payoffs produce multiple optima on personnel matters (e.g. training, leadership). This sounds harmless until one realizes that “multiple optima” is a euphemism for “office politics”. It means there isn’t a clear meritocracy, as performance is highly context-sensitive.
  • Convex work often creates a tension between individual competition and teamwork. Managers attempting to grade individuals in isolation will create a competitive focus on individual productivity, because convexity rewards acceleration of small individual differences. This managerial style works for simple additive convexity, but fails in an organization that needs people to have multiplicative or synergistic effects (team convexity) and that’s most of them.

Red and blue ocean convexity

One of the surprising traits of convexity, tied-in with the matter of teamwork, is that it’s hard to predict whether it will be structurally cooperative or competitive. This leads me to believe that there are fundamental differences between “red ocean” and “blue ocean” varieties of convexity. For those unfamiliar with the terms, red ocean refers to well-established territory in which competition is fierce. There’s a known high quantity of resources (“blood in the water”) available but there’s a frenzy of people (some with considerable competitive advantages) working to get at it. It’s fierce and if you aren’t strong, the better predators will crowd you out. Blue ocean refers to unexplored territory where the yields are unknown but the competition’s less fierce (for now).

I don’t know this industry well, but I would think that modeling is an example of red-ocean convexity. Small differences in input (physical attractiveness, and skill at self-marketing) result in massive discrepancies of output, but there’s a small and limited amount of demand for the work. If there’s a new “10X model” on the scene, all the other models are worse off, because the supermodel takes up all of the work. For example, I know that some ridiculous percentage of the world’s hand-modeling is performed by one woman (who cannot live a normal life, due to her need to protect her hands).

What about professional sports, the distilled essence of competition? Blue ocean. Yep. That might seem surprising, given that these people often seem to want to kill each other, but the economic goal of a sports team is not to win games, but to play great games that people will pay money to watch. A “10X” player might revitalize the reputation of the sport, as Tiger Woods did for golf, and expand the audience. Top players actually make a lot of money for the opponents they defeat; the stars get a larger share of the pool, meaning their opponents get a smaller percentage, but they also expand that pool so much that everyone gets richer.

How about the VC-funded startup ecosystem? That’s less clear. Business formation is blue ocean convexity, insofar as there are plenty of untapped opportunities to add immense value, and they exist all over the world. However, fund-raising (at least, in the current investor climate) and press-whoring are red ocean convexity: a few already-established (and complacent) players get the lion’s share of the attention and resources, giving them an enormous head start. Indeed, this is the point of venture capital in the consumer-web space: use the “rocket fuel” (capital infusion) to take a first-entrant advantage before anyone else has a shot.

Red and blue ocean convexity are dramatically different in how they encourage people to think. With red-ocean convexity, it’s truly a ruthless, winner-take-all, space because the superior, 10X, player will force the others out of business. You must either beat him or join him. I recommend “join”. With blue-ocean convexity (which is the force that drives economic growth) outsized success doesn’t come at the expense of other people. In fact, the relationship may be symbiotic and cooperative. For example, great programmers build tools that are used all over the world and make everyone better at their jobs. So while there is a lot of inequality in payoffs– Linus Torvalds makes millions per year, I use his tools– because that’s how convexity works, it’s not necessarily a bad thing because everyone can win.

Convexity and progress

Convexity’s most important property is the role it gives to progress over time. Real-world convexity curves are often steeper than the ones graphed above and, if there isn’t a role for learning, then the vast majority of people will be unable to achieve at a level that supports an income, and will thus be unemployed. For example, while practice is key in (highly convex) professional sports, there aren’t many people who have the natural talent to earn a living at it. Convexity shuts out those without natural talent. Luckily for us and the world, most convex work isn’t so heavily influenced by natural limitations, but by skills, specialization and education. There’s still an elite at the rightward side of the payoff distribution curve that takes the bulk of the reward, but it’s possible for a diligent and motivated person to enter that elite by gaining the requisite skills. In other words, most of the inputs into that convex payoff function are within the individual actor’s control. This is another case of “good inequality”. In blue-ocean convexity, we want the top players to reap very large rewards, because it motivates more people to do the work that gets them there.

Consider software engineering, which is perhaps the platonic ideal of blue-ocean convexity. What retards us the most as an industry is the lack of highly-skilled people. As an industry, we contend with managerial environments tailored to mediocrity, and suffer from code-quality problems that can reduce a technical asset’s real value to 80, 20, or even minus-300 cents on the dollar compared to its book value. Good software engineers are rare, and that hurts everyone. In fact, perhaps the easiest way to add $1 trillion in value to the economy would be to increase software engineer autonomy. Because most software engineers never get the environment of autonomy that would enable them to get any good, the whole economy suffers. What’s the antidote? A lot of training and effort– the so-called “10000 hours” of deliberate practice– that’s generally unpaid in this era of short-term, disposable jobs.

Convexity’s fundamental problem is that it requires highly-skilled labor, but no employer is willing to pay for people to develop the relevant skills, out of a fear that employees who drive up their market value will leave. In the short term, it’s an effective business strategy to hire mediocre “commodity developers” and staff them on gigantic teams for uninspiring projects, and give them work that requires minimal intellectual ability aside from following orders. In the long term, those developers never improve and produce garbage software that no one knows how to maintain, producing creeping morale decay and, sometimes, “time bombs” that cause huge business losses at unknown times in the future.

That’s why convexity is such a major threat to the full-employment society to which even liberal Americans still cling. Firms almost never invest in their people– empirically, we see that– favoring instead the short-term “solution” of ignoring convexity and trying to beat the labor context into concavity, which is terrible in the long term. Thus, even in convex work, the bulk of people linger at the low-yield, leftward end of the curve. Their employers don’t invest in them, and often they lack the time and resources to invest in themselves. What we have, instead of blue-ocean convexity, is an economy where the privileged (who can afford unpaid time for learning) become superior because they have the capital to invest in themselves, and the rest are ignored and fall into low-yield commodity work. This was socially stable when there was a lot of concave, commodity work for humans to do, but that’s increasingly not the case.

Someone is going to have to invest in the long term, and to pay for progress and training. Right now, privileged individuals do it for themselves and their progeny, but that’s not scalable and will not avert the social instability threatened by systemic, long-term unemployment.

Trust and convexity

As I’ve said, convexity isn’t only a property of the relationship between individual inputs (talent, motivation, effort, skill) and productivity, but also occurs in team endeavors. Teams can be synergistic, with peoples’ efforts interacting multiplicatively instead of additively. That’s a very good thing, when it happens.

So it’s no surprise that large accomplishments often require multiple people. We already knew that! That is less true in 2013 than it was in 1985– now, a single person can build a website serving millions– but it’s still the case. Arguably, it’s more the case now; it’s only that many markets have become so efficient that interpersonal dependencies “just work” and give more leverage to single actors. (The web entrepreneur is using technologies and infrastructure built by millions of other people.) At any rate, it’s only a small space of important projects that will be accomplished well by a single party, acting alone. For most, there’s a need to bring multiple people together while retaining focus, and that requires interior political inequalities (leadership) within the group.

We’re hard-wired to understand this. As humans, we fundamentally get the need for team endeavors with strong leadership. That’s why we enjoy team sports so much.

Historically, there have been three “sources of power” that have enabled people to undertake and lead large projects (team convexity):

  • coercion, which exists when negative consequences are used to motivate someone to do work that she wouldn’t otherwise do. This was the cornerstone of pre-industrial economies (slavery) but is also used, in a softer form, by ineffective managers: do this or lose your income/reputation. Anyway, coercion is how the Egyptian pyramids were built: coercive slave labor.
  • divination, in which leaders are selected based on an abstract principle, which may be the whim of a god, legal precedent, or pure random luck. For example, it has been argued that gambling (a case of “pure random luck”) served a socially positive purpose on the American frontier. Although it moved funds “randomly”, it allowed pools of capital to form, financing infrastructural ventures. Something like divination is how the cathedrals were built: voluntary labor, motivated by religious belief, directed by architects who often were connected with the Church. Self-divination, which tends to occur in a pure power vacuum, is called arrogation.
  • aggregation, where an attempt to compute, fairly, the group preference or the true market value of an asset is made. Political elections and financial markets are aggregations. Aggregation is how the Internet was built: self-directed labor driven by market forces.

When possible, fair aggregations are the most desirable, but it’s non-trivial to define what fair is. Should corporate management be driven by the one-dollar, one-vote system that exists today? Personally, I don’t think so. I think it sucks. I think employees deserve a vote simply because they have an obvious stake in the company. As much as the current, right-wing, state of the American electorate infuriates me, I really like the fact that citizens have the power to fire bad politicians. (They don’t use it enough; incumbent victory rates are so high that a bad politician has more job security than a good programmer.) Working people should have the same power over their management. By accepting a wage that is lower than the value of what they produce, they are paying their bosses. They have a right to dictate how they are managed, and to insist on the mentorship and training that convexity is making essential.

Because it’s so hard to determine a fair aggregation in the general case, there’s always some room for divination and arrogation, or even coercion in extreme cases. For example, our Constitution is a case of (secular, well-informed) divination on the matter of how to build a principled, stable and rational government, but it sets up an aggregation that we use to elect political leaders. Additionally, if a political leader were voted out of office but did not hand over power, he’d be pushed out of it by force (coercion). Trust is what enables self-organizing (or, at least, stable) divination. People will grant power to leaders based on abstract principles if they trust those ideas, and they’ll allow representatives to act on their behalf if they trust those people.

Needless to say, convex payoffs to group efforts generate an important role for trust. That’s what the “stone soup” parable is about; because there’s no trust in the community, people hoard their own produce instead of sharing, and no one has had a decent meal for months. When outside travelers offer a nonexistent delicacy– the stone is a social catalyst with no nutritional value– and convince the other villagers to donate their spare produce, they enable them all to work together. So they get a nutritious bowl of soup and, one hopes, they can start to trust each other and build at least a barter or gift economy. They all benefit from the “stone soup”, but they were deceived.

Convex dishonesty isn’t always bad. It is the act of “borrowing” trust by lying to people, with the intent to pay them back out of the synergistic profits. Sometimes convex dishonesty is exactly what a person needs to do in order to get something accomplished. Nor is it always good. Failed convex frauds are damaging to morale, and therefore they often exacerbate the lack-of-trust problem. Moreover, there are many endeavors (e.g. pyramid schemes) that have the flavor of convex fraud but are, in reality, just fraud.

This, in fact, is why modern finance exists. It’s to replace the self-divinations that pre-financial societies required to get convex projects done with a fairer aggregation system that properly measures, and allows the transfer of, risks.

Credibility

For macroscopic considerations like the fair prices of oil or business equity, financial aggregations seem to work. What about the micro-level concern of what each worker should do on a daily basis? That usually exists in the context of a corporation (closed system) with specific authority structures and needs. Companies often attempt to create internal markets (tough culture) for resources and support, with each team’s footprint measured in internal “funny money” given the name of dollars. I’ve seen how those work, and they often become corrupt. The matter of how people direct the use of their time is based on an internal social currency (including job titles, visibility, etc.) that I’ve taken to calling credibility. It’s supposed to create a meritocracy, insofar as the only way one is supposed to be able to get credibility is through hard work and genuine achievement, but it often has some severely anti-meritocratic effects. 

So why does your job (probably) Suck? Your job will generally suck if you lack credibility, because it means that you don’t control your own time, have little choice over what you do and how you do it, and that your job security is poor. Your efforts will be allocated, controlled, and evaluated by an external party (a manager) whose superiority in credibility grants him the right of self-divination. He gets to throw your time into his convex project, but not vice versa. You don’t have a say in it. Remember: he’s got credibility, and you lack it. 

Credibility always generates a black market. There is no failing in this principle. Performance reviews are gamed, with various trades being made wherein managers offer review points in exchange for non-performance-related favors (such as vocal support for an unrelated project, positive “360-degree reviews”, and various considerations that are just inappropriate and won’t be discussed here) and loyalty. Temporary strongmen/thugs use transient credibility (usually, from managerial favoritism) to intimidate and extort other people into sharing credit for work accomplished, thus enabling the thug to appear like a high performer and get promoted to a real managerial role (permanent credibility). You win on a credibility market by buying and selling it for a profit, creating various perverted social arbitrages. No organization that has allowed credibility to become a major force has avoided this.

Now I can discuss the hierarchy as immortalized by this cartoon from Hugh MacLeod:

[MacLeod hierarchy cartoon: Sociopaths, Clueless, Losers]

Losers are not undesirable, unpopular, or useless people. In fact, they’re often the opposite. What makes them “Losers” is that, in an economic sense, they’re losing insofar as they contribute more to the organization than they get out of it. Why do they do this? They like the monthly income and social stability. Sociopaths (who are not bad people; they’re just gamblers) take the other side of that risk trade. They bear a disproportionate share of the organization’s risk and work the hardest, but they get the most reward. They have the most to lose. A Loser who gets fired will get another job at the same wage; a Sociopath CEO will have to apply for subordinate positions if the company fails. The Clueless are a layer that forms later on, when this risk transfer becomes degenerate– the Sociopaths are no longer putting in more effort or taking more risk than anyone else, but have become an entitled, complacent, rent-seeking class– and they need a middle-management layer of over-eager “useful idiots” to create the image (the Effort Thermocline) that the top jobs are still demanding.

What’s missing in this analysis? Well, there’s nothing morally wrong, at all, with a financial risk transfer. If I had a resource that had a 50% chance of yielding $10 million and a 50% chance of being worthless, I’d probably sell it to a rich person (whose tolerance of risk is much greater) for $4.9 million to “lock in” that amount. A ±5-million-dollar swing in personal wealth is huge to me and minuscule to him. It’d be a good trade for both of us. I’d be paying a (comparatively small) $100,000 risk premium to have that volatility out of my financial life. I’m not a Loser in this deal, and he’s not a Sociopath. It’s by-the-book finance, how it’s supposed to work.

What generates the evil, then? Well, it’s the credibility market. I don’t hold the individual firm responsible for prevailing financial scarcity and, thus, the overwhelmingly large number of people willing to make low-expectancy plays. As long as the firm pays its people reasonably, it has clean hands. So the financial Loser trade is not a sign of malfeasance. The credibility market’s different, because the organization has control over it. It creates the damn thing. Thus, I think the character of the risk transfer has several phases, each deserving its own moral stance:

  1. Financial risk transfer. Entrepreneurs put capital and their reputations at risk to amass the resources necessary to start a project whose returns are (macroscopically, at least) convex. This pool of resources is used to pay bills and wages, therefore allowing workers to get a reliable, recurring monthly wage that is somewhat less than the expected value of their contribution. Again, there’s nothing morally wrong here. Workers are getting a risk-free income (so long as the business continues to exist) while participating in the profits of industrial macro-convexity. 
  2. De-risking, entrenchment, and convex fraud. As the business becomes more established, its people stop viewing it as a risk transfer between entrepreneurs and workers, and start seeing it (after the company’s success is obvious) as a pool of “free” resources to gain control over. Such resources are often economic (“this place has millions of dollars to fund my ideas”) but reputation (“imagine what I could do as a representative of X”) is also a factor. People begin making self-divination (convex fraud) gambits to establish themselves as top performers and vault into the increasingly complacent, rent-seeking, executive tier. This is a red-ocean feeding frenzy for the pile of surplus value that the organization’s success has created.
  3. Credibility emerges, and becomes the internal currency. Successful convex fraudsters are almost always people who weren’t part of the original founding team. They didn’t get their equity when it was cheap, so now they’re in an unstable position. They’re high-ranking managers, but haven’t yet entwined themselves with the business or won a significant share of the rewards/equity. Knowing that their success is a direct output of self-divination (that is, arrogation), they use their purloined social standing to create official credibility in the forms of titles (public statements of credibility), closed allocation (credibility as a project-maker and priority-setter), and performance reviews (periodic credibility recalibrations). This turns the unofficial credibility they’ve stolen into an official, secure kind.
  4. Panic trading, and credibility risk transfer. Newly formed businesses, given their recent memory of existential risk, generally have a cavalier attitude toward firing and a tough culture, which I’ll explain below. This means that a person can be terminated not because of doing anything wrong or being incompetent, but just because of an unlucky break in credibility fluctuations (e.g. a sponsor who changes jobs, a performance-review “vitality curve”). In role-playing games, this is the “killed by the dice” question: should the GM (game coordinator who functions as a neutral party, creating and directing the game world) allow characters, played well, to die– really die, in the “create a new character” sense, not in the “miraculously resurrected by a level-18 healer” sense– because of bad rolls of the dice? In role-playing games, it’s a matter of taste. Some people hate games where they can lose a character by random chance; others like the tension that it creates. At work, though, “killed by the dice” is always bad. Tough-culture credibility markets allow good employees to be killed by the dice. In fact, when stack-ranking and “low performer” witch hunts set in, they encourage it. This creates a lot of panic trading and there’s a new risk transfer in town. It’s not the morally acceptable and socially-positive transfer of financial risk we saw in Stage 1. Rather, it’s the degenerate black-market credibility trading that enables the worst sorts of people (true psychopaths) to rise.
  5. Collapse into feudalistic rank culture. No one wants a job where she can be fired “for performance” because of bad luck, so tough cultures don’t last very long; they turn into rank cultures. People (Losers) panic-trade their credibility, and would rather subordinate to get some credibility (“protection”) from a feudal lord (Sociopath) than risk having none and being flushed out. The people who control the review process become very powerful and, eventually, can manufacture enough of an image of high performance to become official managers. You’re no longer going to be killed by the dice in a rank culture, but you can be killed by a manager, because he can unilaterally reduce your credibility to zero.
  6. Macroscopic underperformance and decline. Full-on rank culture is terribly inefficient, because it generates so much fourth-quadrant work that serves the need of local extortionists (usually, middle managers and their favorites) but does not help the business. Eventually, this leads to underperformance of the business as a whole. Rank culture fosters so much incompetence that trust breaks down within the organization, and it’s often permanent. Firing bad apples is no longer possible, because the process of flushing them away would require firing a substantial fraction of the organization, and that would become so politicized and disruptive as to break the company outright. Such companies regularly lapse into brief episodes of “tough culture”, when new executives (usually, people who buy it as its market value tanks) decide that it’s time to flush out the low performers, but they usually do it in a heavy-handed, McKinsey-esque way that creates a new and equally toxic credibility market. But… like clockwork, those who control said black markets become the new holders of rank and, soon enough, the official bosses. These mid-level rank-holders start out as the mean-spirited witch-hunters (proto-Sociopaths) who implement the “low performer initiative” but they eventually rise and leave a residue of strategically-unaware, soft, complacent and generally harmless mid-ranking “useful idiots” (new Clueless). Clueless are the middle managers who get some power when the company lurches into a new rank culture, but don’t know how to use it and don’t know the main rule of the game of thrones: you win or you die.
  7. Obsolescence and death. Self-explanatory. Some combination of rank-culture complacency and tough-culture moral decay turn the company into a shell of what it once was. The bad guys have taken out their millions and are driving up house prices in the area and their wives with too much plastic surgery are on zoning committees keeping those prices high; everyone else who worked at the firm is properly fucked. Sell off the pieces that still have value, close the shop.

That cycle, in the industrial era, used to play out over decades. If you joined a company in Stage 1 in 1945, you might start to see the Stage 4 midlife when you retired in 1975. Now, it happens much more quickly: it goes down over years, and sometimes months for fast-changing startups. It’s much more of an immediate threat to personal job security than it has ever been before. Cultural decay used to be a long-term existential risk to companies that wasn’t taken seriously, because calamity was decades away; now, it’s often ongoing and rapid, thanks to the “build to flip” mentality.

To tell the truth about it, the MacLeod rank culture wasn’t such a bad fit for the industrial era. Industrial enterprises had a minimal amount of convex work (choosing the business model, setting strategies) that could be delegated to a small, elite, executive nerve-center. Clueless middle managers and rationally-disengaged (Loser) wage earners could implement ideas delivered from the top without too much introspection or insight, and that was fine because individual work was concave. Additionally, that small set of executives could be kept close to the owners of the company (if they weren’t the same set of people).

In the technological era, individual labor is convex and we can no longer afford Cluelessness, or Loserism. The most important work– and within a century or so, all work where there’s demand for humans to do it– requires self-executivity. The hierarchical corporation is a brachiosaur sunning itself on the Yucatan, but that bright point of light isn’t the sun.

Your job is a call option

If companies seem to tolerate, at least passively, the inefficiency of full-blown rank culture, doesn’t that mean that there isn’t a lot of real work for them to do? Well, yes, that’s true. I’ve already discussed the existence of low-yield, boring, Fourth Quadrant busywork that serves little purpose to the business. It’s not without any value, but it doesn’t do much for a person’s career. Why does it exist? First, let’s answer this: where does it come from?

Companies have a jealously-guarded core of real work: essential to the business, great for the careers of those who do it. The winners of the credibility market get the First Quadrant (1Q) of interesting and essential work. They put themselves on the “fun stuff” that is also the core of the business– it’s enjoyable, and it makes a lot of money for the firm and therefore leads to high bonuses. There isn’t a lot of work like this, and it’s coveted, so few people can be in this set. Those are akin to feudal lords, and correspond with MacLeod Sociopaths. Those who wish to join their set, but haven’t amassed enough credibility yet, take on the less enjoyable, but still important Second Quadrant (2Q) of work: unpleasant but essential. Those are the vassals attempting to become lords in the future. That’s often a Clueless strategy because it rarely works, but sometimes it does. Then there is a third monastic category of people who have enough credibility (got into the business early, usually) to sustain themselves but have no wish to rise in the organizational hierarchy. They work on fun, R&D projects that aren’t in the direct line of business (but might be, in the future). They do what’s interesting to them, because they have enough credibility to get away with that and not be fired. They work on the Third Quadrant (3Q): interesting but discretionary. How they fit into the MacLeod pyramid is unclear. I’d say they’re a fortunate sub-caste of Losers in the sense that they rationally disengage from the power politics of the essential work; but they’re Clueless if they’re wrong about their job security and get fired. Finally, who gets the Fourth Quadrant (4Q) of unpleasant and discretionary work? The peasants. The Losers without the job security of permanent credibility are the ones who do that stuff, because they have no other choice.

Where does the Fourth Quadrant work come from? Clueless middle-managers who take undesirable (2Q) or unimportant (3Q) projects, but manage to take all the career upside (turning 2Q into 4Q for their reports) and fun work (turning 3Q into 4Q) for themselves, leaving their reports utterly hosed. This might seem to violate their Cluelessness; it’s more Sociopathic, right? Well, MacLeod “Clueless” doesn’t mean that they don’t know how to fend for themselves. It means they’re non-strategic, or that they rarely know what’s good for the business or what will succeed in the long-term. They suck at “the big picture” but they’re perfectly capable of local operations. Additionally, some Clueless are decent people; others are very clearly not. It is perfectly possible to be MacLeod Clueless and also a sociopath.

Why do the Sociopaths in charge allow the blind Clueless to generate so much garbage make-work? The answer is that such work is evaluative. The point of the years-long “dues paying” period is to figure out who the “team players” are so that, when leadership opportunities or chances for legitimate, important work open up, the Sociopaths know which of the Clueless and Losers to pick. In other words, hiring a Loser subordinate and putting him on unimportant work is a call option on a key hire, later.

Workplace cultures

I mentioned rank and tough cultures above, so let me get into more detail of what those are. In general, an organization is going to evaluate its individuals based on three core traits:

  • subordinacy: does this person put the goals of the organization (or, at least, her immediate team and supervisor) above her own?
  • dedication: will she do unpleasant work, or large amounts of work, in order to succeed?
  • strategy: does she know what is worth working on, and direct her efforts toward important things?

People who lack two or all three of these core traits are generally so dysfunctional that all but the most nonselective employers just flush them out. Those types– such as the strategic, not-dedicated, and insubordinate Passive-Aggressive and the dedicated, insubordinate, and not-strategic Loose Cannon– occasionally pop up for comic relief, but they’re so incompetent that they don’t last long in a company and are never in contention for important roles. I call them, as a group, the Lumpenlosers.

MacLeod Losers tend to be strategic and subordinate, but not dedicated. They know what’s worth working on, but they tend to follow orders because they’re optimizing for comfort, social approval, and job security. They don’t see any value in 90-hour weeks (which would compromise their social polish) or radical pursuit of improvement (which would upset authority). They just want to be liked and adjust well to the cozy, boring, middle-bottom. If you make a MacLeod Loser work Saturdays, though, she’ll quit. She knows that she can get a similar or better job elsewhere.

MacLeod Clueless are subordinate and dedicated but not strategic. They have no clue what’s worth working on. They blindly follow orders, but will also put in above-board effort because of an unconditional work ethic. They frequently end up cleaning up messes made by Sociopaths above and Losers below them. They tend to be where the corporate buck actually stops, because Sociopaths can count on them to be loyal fall guys.

MacLeod Sociopaths are dedicated and strategic but insubordinate. They figure out how the system works and what is worth putting effort into, and they optimize for personal yield. They’re risk-takers who don’t mind taking the chance of getting fired if there’s also a decent likelihood of a promotion. They tend to have “up-or-out” career trajectories, and job hopping isn’t uncommon.

Since there are good Sociopaths out there, I’ve taken to calling the socially positive ones the Technocrats, who tend to be insubordinate with respect to immediate organizational authority, but have higher moral principles rooted in convexity: process improvements, teamwork and cooperation, technical and infrastructural excellence. They’re the “positive-sum” radicals.  I’ll get back to them.

Is there a “unicorn” employee who combines all three desired traits– subordinacy, dedication, and strategy? Yes, but it’s strictly conditional on a particular set of circumstances. In general, it’s not strategic to be subordinate and dedicated. If you’re strategic, you’ll usually optimize for comfort and be subordinate but not dedicated, because dedication is uncomfortable. If you follow orders, it’s pretty easy to coast in most companies. That’s the Loser strategy. Or you might optimize for personal yield and work a bit harder, becoming dedicated, but you won’t do it for a manager’s benefit: it’s either for your own, or for some kind of higher purpose. That’s the Sociopath strategy. The exception is a mentor/protege relationship. Strategic and dedicated people will subordinate if they think that the person in authority knows more than they do, and is looking out for their career interests. They’re subordinating to a mentor conditionally, based on the understanding that they will be in authority, or at least able to do more interesting and important work, in the future.

From this understanding, we can derive four common workplace cultures:

  • rank cultures value subordinacy above all. You can coast if you’re in good graces with your manager, and the company ultimately becomes lazy. Rank cultures have the most pronounced MacLeod pyramid: lazy but affable Losers, blind but eager Clueless, and Sociopaths at the top looking for ways to gain from the whole mess. 
  • tough cultures value dedication, and flush out the less dedicated using informal social pressure and formal performance reviews. It’s no longer acceptable to work a standard workweek; 60 hours is the new 40. Tough culture exists to purge the Loser tier, splitting it between the neo-Clueless sector and the still-Loser rejects, whom it will fire if they don’t quit first. So the MacLeod pyramid of a tough culture is more fluid, but every bit as pathological.
  • self-executive cultures value strategy. Employees are individually responsible for directing their own efforts into pursuits that are of the most value. This is the open allocation for which Valve and Github are known. Instead of employees having to compete for projects (tough culture) or managerial support (rank culture) it is the opposite. Projects compete for talent on an open market, and managers (if they exist) must operate in the interests of those being managed. There is no MacLeod hierarchy in a self-executive culture.
  • guild culture values a balance of the three. Junior employees aren’t treated as terminal subordinates but as proteges who will eventually rise into leadership/mentoring positions. There isn’t a MacLeod pyramid here; to the extent that there may be undesirable structure, it has more to do with inaccurate seniority metrics (e.g. years of experience) than with bad-faith credibility trading. 

Rank and guild cultures are both command cultures, insofar as they rely on central planning and global (within the institution) rule-setting. Top management must keep continual awareness of how many people are at each level, and plan out the future accordingly. Tough and self-executive cultures are market cultures, because they require direct engagement with an organic, internal market.

The healthy, “Theory Y” cultures are the guild and self-executive cultures. These confer a basic credibility on all employees, which shuts off the panic trading that generates the MacLeod process. In a guild culture, each employee has credibility for being a student who will grow in the future. In self-executive culture, each employee has power inherent in the right to direct her efforts to the project she considers most worthy. Bosses and projects competing for workers is a Good Thing. 

The pathological, “Theory X” cultures are the rank and tough cultures. It goes without saying that most rank cultures try to present themselves as guild cultures– but management has so much power that it need not take any mentorship commitments seriously. Likewise, most tough cultures present themselves as self-executive ones. How do you tell if your company has a genuinely healthy (Theory Y) culture? Basic credibility. If it’s there, it’s the good kind. If it’s not, it’s the bad kind of culture.

Basic credibility

In a healthy company, employees won’t be “killed by the dice”. Sure, random fluctuations in credibility and performance might delay a promotion for a year or two, but the panicked credibility trading of the Theory-X culture isn’t there. People don’t fear their bosses in a Theory-Y culture; they’re self-motivated and fear not doing enough by their own standards– because they actually care. Basic credibility means that every employee is extended enough credibility to direct his own work and career.

That does not mean people are never fired. If someone punches a colleague in the face or steals from the company, you fire him, but it has nothing to do with credibility. You get rid of him because, well, he did something illegal and harmful. What it does mean is that people aren’t terminated for “performance reasons” that really mean either (a) they were just unlucky and couldn’t get enough support to save them in tough-culture “stack ranking”, or (b) their manager disliked them for some reason (no-fault lack-of-fit, or manager-fault lack-of-fit). It does mean that people are permitted to move around in the company, and that the firm might tolerate a real underperformer for a couple of years. Guess what? In a convex world, underperformance almost doesn’t matter.

With convexity, the difference between excellence and mediocrity matters much more than the difference between mediocrity and underperformance. In a concave world, yes, you must fire underperformers, because the margin you get on good employees is so low that one slacker can cancel out 4 or 5 good people. In a convex world, the danger isn’t that you have a few underperformers. You will have, at the least, good-faith low performers, just because the nature of convexity is to create risk and inequality of return, and some people’s projects won’t pan out. That’s fine. Instead, the danger is that you don’t have any excellent (“10x”) employees.

There’s a managerial myth that cracking down on “low performers” is useful because they demotivate the “10x-ers”. Yes and no. Incompetent management and having to work around bad code are devastating, and will chase out your top performers. If 10x-ers have to work with incompetents and have no opportunity to improve them, they get frustrated and quit. There are toxic incompetents (dividers) who make others unproductive and damage morale, and then there are low-impact employees who just need more time (subtracters). Subtracters cost more in salary than they deliver, but they aren’t hurting anyone and they will usually improve. Fire dividers immediately. Give subtracters a few years (yes, I said years) to find a fit. Sometimes, you’ll hire someone good and still have that person end up as a subtracter at first. That’s common in the face of convexity– and remember that convexity is the defining problem of the 21st-century business world. The right thing to do is to let her keep looking for a fit until she finds one. It will almost never take years if your company runs properly.

“Low performer initiatives” rarely smoke out the truly toxic dividers, as it turns out. Why? Because people who have defective personalities and hurt other peoples’ morale and productivity are used to having their jobs in jeopardy, and have learned to play politics. They will usually survive. It’ll be unlucky subtracters you end up firing. You might save chump change on the balance sheet, but you’re not going to fix the real organizational problems.

Theories X, Y, and Z

I grouped the negative workplace cultures (rank and tough) together and called them Theory X; the positive ones (self-executive and guild) I called Theory Y. This isn’t my terminology; it’s about 50 years old, coming from Douglas McGregor. The 1960s were the height of Theory Y management, so that was the “good” managerial style. Let’s compare them and see what they say.

Recall what I said about the “sources of power”: coercion, divination, and aggregation. Coercion was, by far, the predominant force in aggregate labor before 1800. Slavery, prisons, and militaries (with, in that time, lots of conscription) were the inspirations for the original corporations, and the new class of industrialists was very cruel: criminal by modern standards. Theory X was the norm. Under Theory X, workers are just resources. They have no rights, no important desires, and should be well-treated only if there’s an immediate performance benefit. Today, we recognize that as brutal and psychotic, but for a humanity coming off over 100,000 years of male positional violence and coerced labor, the original-sin model of work shouldn’t seem far off. Theory X held that employees are intrinsically lazy and selfish and will only work hard if threatened.

Around 1920, industrialists began to realize that, even though labor in that time mostly was concave, it was good business to be decent to one’s workers. Henry Ford, a rabid anti-Semite, was hardly a decent human being, much less “a nice guy”, but even he was able to see this. He raised wages, creating a healthy consumer base for his products. He reduced the workday to ten hours, then eight; the long days just weren’t productive. Over the next forty years, employers learned that if workers were treated well, they’d repay the favor by behaving better and working harder. This led to the Theory Y school of management, which held that people were intrinsically altruistic and earnest, and that management’s role was to nurture them. This gave birth to the paternalistic corporation and the bilateral social contracts that created the American middle class.

Theory Y failed. Why? It grew up in the 1940s to ’60s, when there was a prosperous middle class and very low economic inequality. One thing that would amaze most Millennials is that, when our parents grew up, admitting that you worked mainly for money was socially unacceptable. You just couldn’t say that you wanted to get rich, in 1970, and not be despised for it. And it was very rare for a person to make 10 times more than the average citizen! However, the growth of economic inequality that began in the 1970s, and has accelerated since then, raised the stakes. Then the Reagan Era hit.

Most of the buyout/private equity activity that happened in the 1980s had a source immortalized by the movie Wall Street: industrial espionage, mostly driven by younger people eager to sell out their employers’ secrets to get jobs from private equity firms. There was a decade of betrayal that brutalized the older, paternalistic corporations. Given, by a private equity tempter, the option of becoming CEO immediately through chicanery, instead of working toward it for 20 years, many took the former. Knives came out, backs were stabbed, and the most trusting corporations got screwed.

Since the dust settled, around 1995, the predominant managerial attitude has been Theory Z. Theory X isn’t socially acceptable, and Theory Y’s failure is still too recently remembered. What’s Theory Z? Theory X takes a pessimistic view of workers and distrusts everyone. Theory Y takes an optimistic view of human nature and becomes too trusting. Theory Z is the most realistic of the three: it assumes that people are indifferent to large organizations (even their employers) but loyal to those close to them (family, friends, immediate colleagues, distant co-workers; probably in that order). Human nature is neither egoistic nor altruistic, but localistic. This was an improvement insofar as it holds a more realistic view of how people are. It’s still wrong, though.

What’s wrong with Theory Z? It’s teamist. Now, when you have genuine teamwork, that’s a great thing. You get synergy, multiplier effects, team convexity– whatever you want to call it, I think we all agree that it’s powerful. The problem with the Theory-Z company is that it tries to enforce team cohesion. Don’t hire older people; they might like different music! Buy a foosball table, because 9:30pm diversions are how creativity happens! This is more of a cargo cult than anything founded in reasonable business principles, and it’s generally ineffective. Teamism reduces diversity and makes it harder to bring in talent (which is critical, in a convex world). It also tends toward general mediocrity.

Each Theory had a root delusion in it. Theory X’s delusion was that morale didn’t matter; workers were just machines. Theory Y’s delusion was rooted in the tendency of “too good” people to think everyone else is as decent as they are; it fell when the 1980s made vapid elitism “sexy” again and opportunities emerged to make obscene wealth by betraying one’s employer. Theory Z’s delusion is that a set of people who share nothing other than a common manager constitute a genuine (synergistic) team. See, in an open-allocation world, you’re likely to get team synergies because of the self-organization. People naturally tend to form teams where they make each other more productive (multiplier effects). It happens at the grass-roots level, but it can’t be forced on people who are deprived of autonomy. With closed allocation, you don’t get that. People (with diverging interests) are brought together by forces outside of their control and told to be a team. Closed-allocation Theory Z lives in denial of how rare those synergistic effects actually are.

I mentioned, previously, an alternative to these three theories that I’ve called Theory A. It’s a more sober and realistic slant on Theory Y: trust employees with their own time and energy; distrust those who want to control others. I’ll return to that in Part 22, the conclusion.

Morality, civility, and social acceptability

The MacLeod Sociopaths that run large organizations are a corrosive force, but what defines them isn’t true psychopathy, although some of them are that. There are also plenty of genuinely good people who fit the MacLeod Sociopath archetype. I am among them. What makes them dangerous is that the organization has no means to audit them. If it’s run by “good Sociopaths” (whom I’ve taken to calling Technocrats) then it will be a good organization. However, if it’s run by the bad kind, it will degenerate. So, with the so-called Sociopaths (while it is less necessary for the Losers and Clueless) it is important to understand the moral composition of that set.

I’ve put a lot of effort into defining good and evil, and that’s a big topic I don’t have much room for, so let me be brief on them. Good is motivated by concerns like compassion, social justice, honesty, and virtue. Evil is militant localism or selfishness. In an organizational context, or from a perspective of individual fitness, both are maladaptive when taken to the extreme. Extreme good is self-sacrifice and martyrdom that tends to take a person out of the gene pool, and certainly isn’t good for the bottom line; extreme evil is perverse sadism that actually gets in a person’s way, as opposed to the moderate psychopathy of corporate criminals.

Law and chaos are the extremes of a civil spectrum, which I cribbed from AD&D. Lawful people have faith in institutions and chaotic people tend to distrust them. Lawful good sees institutions as tending to be more just and fair than individual people; chaotic good finds them to be corrupt. Lawful neutrality sees institutions as being efficient and respectable; chaotic neutrality finds them inefficient and deserving of destruction. Lawful evil sees institutions as a magnifier of strength and admires their power; chaotic evil sees them as obstructions that get in the way of raw, human dominance. 

Morality and civil bias, in people, seem to be orthogonal. In the AD&D system, each spectrum has three levels, producing 9 alignments. I focused on the careers of each here. In reality, though, there’s a continuous spectrum. For now, I’m just going to assume a Gaussian distribution, mean 0 and standard deviation 1, with the two dimensions being uncorrelated.

MacLeod Losers tend to be civilly neutral, and Clueless tend to be lawful; but MacLeod Sociopaths come from all over the map. Why? To understand that, we need to focus on a concept that I call well-adjustment. To start, humans don’t actually value extremes in goodness or in law. Extreme good leads to martyrdom, and most people who are more than 3 standard deviations of good are taken to be neurotic narcissists, rather than being admired. Extremely lawful people tend to be rigid, conformist, and are therefore not much liked either. I contend that there’s a point of maximum well-adjustment that represents what our society says people are supposed to be. I’d put it somewhere in the ballpark of 1 standard deviation of good, and 1 of law, or the point (1, 1). If we use +x to represent law, –x to represent chaos, +y to represent good, and –y to represent evil, we get the well-adjustment formula:
f(x, y) = (x – 1)^2 + (y – 1)^2
Here, low f means that one is more well-adjusted. It’s better to be good than evil, and to be lawful than chaotic, but it’s best to be at (1, 1) exactly. But wait! Is there really a difference between (1, 1) and (0, 0)? Or between (5, 5) and (5, 6)? Not really, I don’t think. Well-adjustment tends to be a binary relationship, so I’m going to put f through a logistic transform where 0.0 means total ill-adjustment and 1.0 means well-adjustment. Middling values represent a “fringe” of people who will be well-adjusted in some circumstances but fail, socially speaking, in others. Based on my experience, I’d guess that this:

is a good estimate. If your squared distance from the point of maximal well-adjustment is less than 4, you’re good. If it’s more than 8, you’re probably ill-adjusted– too good, too evil, too lawful, or too chaotic. That gives us, in the 2-D moral/civil space, a well-adjustment function looking exactly like this:

whose contours look like this:

Now, I don’t know whether the actual well-adjustment function that drives human social behavior has such a perfect circular shape. I doubt it does. It’s probably some kind of contiguous oval, though. The white part is a plateau of high (near 1.0) social adjustment. People in this space tend to get along with everyone. Or, if they have social problems, it has little to do with their moral or civil alignments, which are socially acceptable. The red outside is a deep sea (near 0.0) of social maladjustment. It turns out that if you’re 2 standard deviations of evil and of chaos, you have a hard time making friends.
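
To make that concrete, here is a minimal numerical sketch of the model as I’ve described it. The squared-distance form of f comes straight from the definition above; the logistic midpoint of 6 and scale of 1 are my own guesses, chosen to match the rule of thumb above (squared distance under 4: fine; over 8: ill-adjusted) rather than taken from the actual plots.

    # A minimal sketch of the well-adjustment model. The squared distance f is as
    # defined above; the logistic midpoint (6) and scale (1) are assumed values,
    # chosen to match the "less than 4: fine, more than 8: ill-adjusted" estimate.
    import math

    def f(x, y):
        """Squared distance from the point of maximal well-adjustment, (1, 1)."""
        return (x - 1.0) ** 2 + (y - 1.0) ** 2

    def well_adjustment(x, y, midpoint=6.0, scale=1.0):
        """Logistic transform of f: near 1.0 on the plateau, near 0.0 in the deep sea."""
        return 1.0 / (1.0 + math.exp((f(x, y) - midpoint) / scale))

    for point in [(1, 1), (0, 0), (2, -1), (3, 3), (-2, -2)]:
        print(point, round(well_adjustment(*point), 3))
    # (1, 1)   0.998  -- maximal well-adjustment
    # (0, 0)   0.982  -- true neutral: still on the plateau
    # (2, -1)  0.731  -- lawful and a bit evil: on the fringe
    # (3, 3)   0.119  -- extremely lawful good: just inside the fringe
    # (-2, -2) 0.0    -- strongly chaotic evil: deep sea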

In other words, we have a social adjustment function that’s almost binary, but there’s a really interesting circular fringe that produces well-adjustment values between 0.1 and 0.9. Why would that be important? Because that’s where the MacLeod Sociopaths come from.

Well-adjusted people don’t rise in organizations. Why? Because organizations know exactly how to make it so that well-adjusted, normal people don’t mind being at the bottom, and will slightly prefer it if that’s where the organization thinks they belong. It’s like Brave New World, where the lower castes (e.g. Gammas) are convinced that they are happiest where they are. If you’re on that white plateau of well-adjustment, you’ll probably never be fired. You’ll always have friends wherever you go. You can get comfortable as a MacLeod Loser, or maybe Clueless. You don’t worry. You don’t feel a strong need to rise quickly in an organization.

Of course, the extremely ill-adjusted people in the red don’t rise either. That should not surprise anyone. Unless they become very good at hiding their alignments, they are too dysfunctional to have a shot in social organizations like a modern corporation. To put it bluntly, no one likes them.

However, let’s say that a Technocrat has 1.25 standard deviations of chaos and of good each, making her well-adjustment level 0.65. She’s clearly in that fringe category. What does this mean? It means that she’ll be socially acceptable in about 65% of all contexts. The MacLeod Loser career isn’t an option for her. She might get along with one set of managers and co-workers, but as they change, things may turn against her. Over time, something will break. This gives her a natural up-or-out impetus. If she doesn’t keep learning new things and advancing her career, she could be hosed. She’s liked by more people than dislike her, but she can’t rely on being well-liked as if it were a given.

It’s people on the fringe who tend to rise to the top of, and run, organizations, because they can never get cozy on the bottom. We can graph “fringeness”, measured as the magnitude of the slope (the gradient) of the well-adjustment function, and get contours like this:

It’s a ring-shaped fringe. Nothing too surprising. The perfection of the circular ring is, of course, an artifact of the model. I don’t know if it’s this neat in the real world, but the idea is correct. Now, here’s where things get interesting. What does that picture tell us? Not that much, aside from what we already know: the most ambitious (and, eventually, most successful) people in an organization will be those who are not so close to the “point of maximal well-adjustment” as to get along in any context, but not so far from it as to be rejected out of hand.

But how does this give us the observed battle royale between chaotic good and lawful evil? Up there, it just looks like a circle. 

Okay, so we see the point (3, 3) in that circular band. How common is it for someone to be 3 standard deviations of lawful and 3 standard deviations of good? Not common at all. 3-sigma events are rare (about 1 in 740) so a person who was 3 deviations from the norm in both would be 1-in-548,000– a true rarity. Let’s multiply this “fringeness” function we’ve graphed by the (Gaussian) population density at each point.

That’s what the fringe, weighted by population density, looks like. Positions like (3, 3) barely register, because there’s almost no one there. There’s a clear crescent “C” shape, and it contains a disproportionate share of two kinds of people: a lot of lawful evil in the bottom right, and a lot of chaotic good in the top left, in addition to some neutral “swing players” who will tend to side (with unity in their group) with one or the other. How they swing tends to determine the moral character of an organization. If they side with the chaotic good, they’ll create a company like Valve. If they side with lawful evil, you get the typical MacLeod process.
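
For the curious, here’s a rough sketch of that weighting step, reusing the assumed logistic parameters from the earlier snippet. The specific outputs depend on those guessed parameters, but the qualitative picture holds: the 3-sigma corner is vanishingly rare, while moderately lawful-evil and chaotic-good positions keep real weight.

    # A sketch of the weighting step, reusing the assumed logistic parameters from
    # the earlier snippet. "Fringeness" is the magnitude of the gradient of the
    # well-adjustment function; multiplying by the standard bivariate normal
    # density wipes out remote corners like (3, 3).
    import math

    def well_adjustment(x, y, midpoint=6.0, scale=1.0):
        f = (x - 1.0) ** 2 + (y - 1.0) ** 2
        return 1.0 / (1.0 + math.exp((f - midpoint) / scale))

    def fringeness(x, y, eps=1e-5):
        """Magnitude of the numerical gradient of well_adjustment at (x, y)."""
        dx = (well_adjustment(x + eps, y) - well_adjustment(x - eps, y)) / (2 * eps)
        dy = (well_adjustment(x, y + eps) - well_adjustment(x, y - eps)) / (2 * eps)
        return math.hypot(dx, dy)

    def density(x, y):
        """Standard bivariate normal density: mean 0, unit variance, uncorrelated."""
        return math.exp(-(x * x + y * y) / 2.0) / (2.0 * math.pi)

    # Rarity of a (3, 3) alignment: one tail of the normal at 3 sigma, squared.
    p_tail = 0.5 * math.erfc(3.0 / math.sqrt(2.0))
    print(round(1.0 / p_tail), round(1.0 / p_tail ** 2))
    # prints roughly 741 and 549,000 -- in line with the 1-in-740 and
    # 1-in-548,000 figures above

    for point in [(3.0, 3.0), (2.5, -1.5), (-1.5, 2.5)]:
        print(point, fringeness(*point) * density(*point))
    # (3, 3) is on the fringe but almost nobody lives there, so it barely
    # registers; the lawful-evil point (2.5, -1.5) and the chaotic-good point
    # (-1.5, 2.5) each carry roughly eighty times its weight.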

That’s the theoretical reason why organizations come down to an apocalyptic battle between chaotic good (Technocrats) and lawful evil (corrosive Sociopaths, in the MacLeod process). How does this usually play out? Well, we know what lawful evil does. It uses the credibility black market to gain power in the organization. How should chaotic good fight against this? It seems that convexity plays to our advantage, insofar as the MacLeod process can no longer be afforded. In the long term, the firm can only survive if people like us (chaotic good) win. How do we turn that into victory in the short term?

So what’s a Technocrat to do? And how can a company be built to prevent it from undergoing MacLeod corrosion? What’s missing in the self-executive and guild cultures that a 5th “new” type of culture might be able to fix? That’s where I intend to go next.

Take a break, breathe a little. I’ll be back in about a week to Solve It.

Gervais / MacLeod 20: Bozo bit vs. simple trust

We’re almost finished. There’s a lot that I might add to this series in the future, and I’m seriously considering the book idea, but this series has taken a lot of time and work, and there are some technical projects I want to start in the spring. Here’s the roadmap I see for the next three posts:

  • Part 20: This one. Flesh out what simple trust is, why it isn’t so simple, and how to defend it. 
  • Part 21: Review the economics of the problem (convexity, technological-era needs) so we can define what “Solving It” looks like.
  • Part 22: The final Solve It post, where I discuss what I think is the missing piece in all of this.

To recap where we’ve been: I’ve spent a lot of time on trust. Here’s what I’ve covered on the topic thus far:

  • Part 17: Financial trust. Do investors and employees feel that the compensation structure is fair to everyone? I advocated a structure that encourages transparency, that does not disallow but limits “HR expedient” overcompensation, and that avoids a lot of the problems with traditional equity compensation (e.g. liquidation preferences, misalignment of incentives).
  • Part 18: Industrial trust. Are people trusted in their use of time? Since the productive value of time can scale either progressively or haphazardly with planning, the idea is to create a culture of intelligent (self-executive) time management and self-improvement. That delivers more yield (and is more scalable) than traditional micromanagement.
  • Part 19: Why dishonesty is so common in organizations. As trust sparsity sets in, people need to make unrealistic or highly contingent promises in order to get anything accomplished, but this “convex” deception is simply taken to prove the need for further distrust of employees. Also, it creeps; what starts as “stone soup” (mutually beneficial convex deception) evolves into fraud.

These issues are critically important. The MacLeod degeneracy is an artifact of trust sparsity and dishonest risk transfers. If we want to avoid it, we have to learn how to build trust-dense organizations. That’s an incredibly hard thing to do.

There seem to be three ways humans have been able to get large projects done; call them “sources of power”. The first is coercion: someone with a weapon or military backing decides that work will be done and forces people to do it. That’s mostly illegal now, and good riddance to it. The second is divination. You don’t have to believe in supernatural beings for it to work; random chance is just fine. Aaron Brown, in The Poker Face of Wall Street, makes a good case for gambling as having had a necessary function on the American frontier: it allowed poor but ambitious people to form pools of capital that could be used to finance large projects. It didn’t matter who the leader was, for most of these, but it needed to be someone. Capital inequality, for all its obvious flaws and injustices, fulfills that role. Self-favoring divination is a subcase called arrogation. The third is aggregation, which is how we choose political leaders (voting) and value companies (markets), and it is also supposed to determine how companies run themselves, at least at the upper levels. Aggregation involves voluntary action (unlike coercion) and decisions that must be made with defensible reasons (unlike divination), and that requires trust, both in the processes and in the people.

When there isn’t trust and aggregation fails, people tend to default to one of the other two sources of power. Ineffective managers fall back on coercion, while convex fraud can be viewed as an attempt to create a divination process that favors oneself (arrogation). No one would disagree that this stuff is socially and emotionally toxic. Is it, however, unprofitable? In the technological economy, wherein the relationship between input (morale, effort, skill, and talent) and output (economic value rendered) is convex, the answer’s a resounding “yes”. One of the traits of convexity is that small differences in conditions produce large variations in output, and distrusting employees doesn’t just shave a little off the top, as it would in a concave world. It hamstrings them, and everyone loses.

What is simple trust?

Trust is a multifaceted and complicated trait of a relationship. There are degrees of it. There’s a lower bar to trust someone with $10 than with $10,000,000. Likewise, there are people whom I’d trust to do the right thing on big issues, but wouldn’t leave small stuff (that they might ignore) to them. I’d rather do it myself. Additionally, there’s the matter of domain-specific competence. I’m not a doctor, so if you trust me to perform plastic surgery on you, then you’re making a mistake. This is pretty complex and specialized. I have good news: we don’t have to peer into that stuff. Simple trust is, well, much simpler. Is this person trusted to be decent and intelligent or, to use the business lingo, “credible”? Simple trust doesn’t require a foolhardy faith that someone will get everything right, but it assumes that he will try to do his best, communicate shortfalls, and accept feedback.

Simple trust is binary

What’s important about simple trust is that it’s binary. For the larger topic of trust, there are degrees and nuances. Simple trust either exists or it doesn’t. Here’s a barometer for simple trust: would you give this person an introduction? Would you trust him to give you a fair recommendation? There’s a quick, emotional “yes” or “no” that comes forth in, at most, a couple hundred milliseconds. “No, not that idiot!” “Yes, of course! Great guy!” That’s where simple trust lives. More complex nuances of audit structure, project/person fit, and contractual provisions– all of those being more ratiocinative and deliberate– come later.

In software, we refer to simple trust as the “bozo bit”. If the bozo bit is off, then a person is going to be held to have valuable input. That doesn’t mean that his ideas will never be rejected, but they will be heard and considered. He’ll have influence, if not power. When the bozo bit’s on, that person’s input is ignored and he’ll be viewed as a source of trouble and potential failure. In management culture, the exact same dynamic has been given the name “flipping the switch”. That’s when a report is prematurely judged to be incompetent and therefore ignored wholesale. When a manager flips the switch, the employee becomes A Problem, even if there’s no objective reason to view him that way.

Simple trust is symmetric

In general, simple trust is symmetric. People size each other up and generally figure out, on a subconscious level, where they stand on this matter. There are some cases of cluelessness that enable asymmetry, but they’re rare. That’s why a bad reference is so damaging to a person’s job search. It means that a candidate put simple trust in someone who thought she was an idiot. Either no one likes her and she’s backed into a corner, or she’s a terrible judge of character. This is different from other emotional affinities or revulsions. For example, there are probably people I like who don’t like me, and vice versa. Admiration driven by social status is asymmetric, almost by definition. More nuanced varieties of trust (driven by information specific to the parties) can be asymmetric. Simple trust rarely is. It’s reciprocal.

When I was involved in recruiting, I noticed that even though most candidates had nothing to hide, people hated giving direct managers as references. Actually, I think this fear is often misplaced. Managers are trained in not-getting-sued, so bad references from them are rare– they tend to fade to neutrality from both sides– and variance reduction in the reference-checking process is generally desirable, since it only occurs when the decision to hire is nearly made (and will only be unmade by a surprisingly bad finding). Still, it’s understandable that people wouldn’t want ex-bosses involved in their continuing careers. Managerial authority tends to make simple trust impossible. That’s why the relationship is so awkward. The manager’s job is to look for causes of potential failure, especially in people. Fault-finding is her responsibility, as most companies define it. That, I would say, is the antithesis of simple trust. When there’s simple trust, you size people up for upside capability, because you assume they’re not going to fail you in bad faith. When there isn’t, you start looking for breaking points: how far would this person need to be pushed before he became useless?

Why is simple trust symmetric? Because the lack of it is a negation of the person. Most people understand and respect that no one is going to enter a $20-million deal with them on their word alone, but when there’s a lack of simple trust, the relationship becomes adversarial. The absence of simple trust becomes fight or flight. Most of the corporate pyrotechnics we know and love come from the cases where “flight” is rendered impossible.

Simple trust is systemic

In the general and more complex sense, trust is not (nor should it be) transitive. Trusting someone does not mean trusting everyone she trusts. An ungodly number of parties have been ruined by invited people inviting people… who invite other people. Simple trust generally tries to be transitive. If Alice trusts Bob, and Bob trusts Eve, it’s unlikely that Alice will view Eve as a complete waste of her time. Alice’s bozo bit will probably be 90% of the way to “off” based on Bob’s recommendation alone. That’s why introductions are so important as a social currency; they are the process of simple trust transitivity.

If a relationship is (mostly) symmetric and transitive, then it will tend to coalesce into graph-theoretic cliques (clusters of nodes where each pair is directly connected) and those are often realized in human social cliques. Within the clique, there’s simple trust between all the members. They may not care for each other personally, nor agree very often, but they respect each other enough to work together. Outside of such cliques, simple trust is uncommon. This encourages the formation of an “us” and a “them”. How big can these trust-dense cliques get? No one knows, for sure. They can scale to hundreds of people, but that’s rare. Typically, simple trust among an unstructured set of people will congeal into a state where there are many small, trust-dense cliques amid an overwhelmingly sparse general graph. That’s what we see in high schools and prisons, and it seems to be a fairly natural human state in the absence of effective leadership.
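
As a toy illustration (the names and edges are invented), a symmetric trust relation can be stored as an undirected graph, and the trust-dense pockets fall out as its maximal cliques:

    # A toy sketch of the point above: store simple trust as an undirected graph
    # (symmetry comes for free) and read the trust-dense pockets off as maximal
    # cliques. The names and edges here are invented for illustration.
    import networkx as nx

    trust = nx.Graph()
    trust.add_edges_from([
        ("Alice", "Bob"), ("Bob", "Eve"), ("Alice", "Eve"),  # a three-person clique
        ("Carol", "Dan"),                                    # a two-person clique
        ("Eve", "Carol"),                                    # a lone cross-link
    ])

    # Maximal cliques are the "us" groups; everything across them is "them".
    print(sorted(sorted(c) for c in nx.find_cliques(trust)))
    # [['Alice', 'Bob', 'Eve'], ['Carol', 'Dan'], ['Carol', 'Eve']]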

Within a group, there will be a general character of trust sparsity or density, and it’s a binary systemic property. That is, a trust-sparse company will be sparse almost everywhere, while a trust-dense company will be uniformly dense, which is experienced in a sense that “everything just works”. Trust-dense environments may have an occasional “broken link”, but they’re often self-repairing and can adapt: those two people just won’t be made to work together if it can be avoided. Trust-sparse environments tend to generate small cliques that view each other with suspicion and, sometimes, outright adversity. Managers in trust-sparse companies tend to prize the “unicorn” who can become trusted in multiple cliques (because that’s so rare) who might be able to merge them into a larger coalition, but usually such a person is thrown out by one of the two.

This can often be understood as a sort of organizational self-efficacy. When a company is in a trust-dense mode, there’s an understanding that work done within those walls will, most often, be done by capable people who know what they’re doing. People, then, can specialize in what they do best and hand over work that is better done somewhere else. In trust sparsity, companies get a “none of our shit is any good” attitude that leads to duplicated effort, communication failures, and waste. That’s the point at which companies find themselves buying modestly talented software engineers at $5-million-per-head panic-priced “acqui-hires”; they have much cheaper talent in-house, but can’t find it because of the sense that the company is full of bozos and good people are so rare that it isn’t worth the effort to discover them.

Trust, teamwork, and teamism

Theories X and Y represent the extremes of the trust graphs. Theory X has everyone as an isolated point in a Hobbesian wilderness. Theory Y attempts to realize the complete graph: that is, the organization is a single, trust-dense, clique with everyone in it. That works extremely well, until a few “bad apples” take too much advantage. Theory X was the old, original-sin management style inspired by millennia of labor reliant on coercion rather than voluntary aggregation. Theory Y emerged around 1920 with Henry Ford’s model of paternalistic capitalism, and was destroyed in the “greed is good” 1980s when cocaine-fueled yuppies sold their employers’ secrets to get private equity jobs. Theory Y trusted employees too far, because it didn’t account for the return of pre-1945 economic inequality and the higher stakes of the post-Reagan world. What we have now, for the most part, is Theory Z teamist management that acknowledges, but attempts to control, the natural tendency of humans to form trust-dense cliques amid prevailing trust sparsity. It’s closer to Theory X than Y, but it acknowledges the usefulness of team cohesion.

In the Theory Z world, everything is a team. The rent-seekers call themselves “the management team” or (get your vomit bag out) “the leadership team” in official documentation. There’s even a “Termination Approval Committee” in many companies– a firing team. There are project teams and inter-team “special teams” and within-team subteams. Finally, if you paid attention in your HR-mandated management classes and know you can’t call someone a “retard” or “fag” in his or her performance review, you say “not a team player” instead. They both mean the exact same thing: I don’t like that person and will say the nastiest thing I can get away with. It’s just that “not a team player” has a more conservative and HR-appropriate sense of what one can get away with.

There are two problems with Theory Z. One is that teamism often tends toward mediocrity. I might seem “pro-convexity” with my technological-era cheerleading, but concavity has a major virtue, which is that it favors equality. Let’s say that you have six “points” of resources to give three employees and their productivity will be as follows:

| Input | A Payoff | B Payoff |
+-------+----------+----------+
|     4 |      125 |      600 |
|     3 |      120 |      250 |
|     2 |      100 |      100 |
|     1 |       60 |       25 |
|     0 |        0 |        0 |
+-------+----------+----------+

For A, the concave task, the optimal allocation of the 6 points is equality: give 2 points to each, producing 300 points. That’s the “team player” world, in which it’s better to get mediocre exertion from all team members. However, for B, the optimal arrangement is to give one employee (the “favorite”) 4 points, with the second (the “backup”) getting 2 points, and a third (the “loser”) getting none, which produces the maximum payoff of 700. It doesn’t even matter, in this example, who’s more talented. Convexity just naturally favors inequality in investment. That’s why the second and often dishonest source of power (divination) worked for the convex process of business formation before there was modern finance (aggregation): it just needed to pick a winner.
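
If you want to check the arithmetic, a quick brute-force sketch over the possible allocations, using nothing but the payoff table above, confirms both results:

    # A quick sketch that brute-forces the allocation of 6 points among three
    # employees, using the payoff table above, to confirm that the concave task
    # favors spreading effort evenly and the convex task favors concentration.
    from itertools import product

    A = {0: 0, 1: 60, 2: 100, 3: 120, 4: 125}   # concave payoff
    B = {0: 0, 1: 25, 2: 100, 3: 250, 4: 600}   # convex payoff

    def best_allocation(payoff, points=6, workers=3):
        candidates = (a for a in product(range(5), repeat=workers) if sum(a) == points)
        return max(candidates, key=lambda a: sum(payoff[i] for i in a))

    best_a = best_allocation(A)
    best_b = best_allocation(B)
    print(best_a, sum(A[i] for i in best_a))   # (2, 2, 2) 300 -- spread effort evenly
    print(best_b, sum(B[i] for i in best_b))   # (0, 2, 4) 700 -- back one favorite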

Does all of this mean that teamwork is completely antiquated in the convex world, and that we should just find ways to delegate convex work to highly promising individuals and starve (i.e., fire) the rest? Absolutely not. The assumption in the above model is of an additive relationship, where the value produced by the business is the sum of individual productivities, and that’s not a good assumption. Work is only of value if it provides benefit to other people, and that generally means that employees who only value their individual productivity become detrimental. The best programmers aren’t the ones who commit the most lines of code, as if code were some commodity product, but the multipliers who make the whole team better by writing good code, teaching people how to use the assets they’ve created, and generally removing obstacles. Thus, teamwork is not antiquated. Far from it.

The input/output relationship for well-structured team endeavors is also convex, so let’s return to the “B Payoff” function for convenience, and assume we have 4 players who can each contribute 1 point of effort to a project. If they each build their own thing in isolation, each puts forth a yield of 25 points, for a total of 100. None of them accomplished much, working alone. On the other hand, if they work together and put in a solid, focused 4-point effort, the yield is 600. This sort of team synergy is real and, when it happens, it’s powerful.

What’s wrong with Theory-Z, closed-allocation, teamism? It’s not well-structured. It’s not self-organizing. It almost never produces team convexity (synergy). Why? The fundamental flaw of closed allocation is the conflict of interest between project leadership and “people management”. The closed-allocation middle manager is held responsible for the success of a project, while also auditing the work and career progress of a heterogeneous set of people, whose best move might be to another project. When there are conflicts, which side wins? The project has buy-in from upper management, or there wouldn’t be someone assigned to run it. The people have no vote, given the manager’s ability to unilaterally zero their credibility.

In a closed-allocation environment, the only thing that people on a “team” share is that they report to the same manager. They’ve been dropped by fate in the same place. Nothing more. They didn’t choose each other, they probably didn’t choose the work, and they have very different career goals. Part of the corporate Lie is that they’ll subordinate their agendas to the organizational mission, with the defectors either being vacuumed up into upper management (MacLeod Sociopath) or flushed out of the company, but only a few (MacLeod Clueless) really buy into that. Most just fake it to get by. What does all of this mean? It means that they’re not really a team.

Genuine team synergies can usually only be discovered at the grass-roots level. Corralling a set of people together and saying “Be a team, now!” doesn’t often work. Moreover, what is the only thing this “team” has in common? Their manager, who is trying to build a team while ensuring that he remains its leader. That’s a real problem. The most valuable group members are people who will lead if needed, but don’t mandate that they fill the top position– people who care more about executing the right ideas than executing their ideas. When someone tries to form a social group but keep its existence contingent on his inflexible superiority, the others don’t like it. The only thing that can become common between them, in this case, is their dislike for him. 

One trait of Theory-Z teamism is that managers don’t have perfect unilateral credibility. Under Theory X, labor is material that can be flushed out if judged defective for any reason, so managers do have total credibility. Theory Y softens managerial power considerably by granting implicit credibility to all employees. Theory Z is closer to X than Y, but it has introduced “360-degree reviews”, meaning that while a single individual cannot overthrow a manager– HR will side with the boss, and she’ll get fired for even trying– a whole team, if they work together and tell the exact same story, can. Knowing this, the last thing a typical corporate manager wants is for his reports to form a genuine team. Isolation and division are necessary to keep the manager’s job secure.

The need for trust density

Convexity requires attention to variables such as project/person fit and team synergy that simply did not matter in the industrial era of individually concave labor. Morale, team synergy, and motivation used to be “nice to haves” that often weren’t especially necessary. They were rarely judged to be worth their cost. With concave labor, productivity can’t be improved very much over the typically achieved state, so management’s goal becomes to get things done cheaper, not better.

Concave human labor is going extinct. Model a task’s payoff as M * p(q), where M represents the maximum potential yield, p is a logistic “performance function” between 0 and 1, and q is a measure of inputs (skill, effort, talent, motivation) that we’ll leave abstract but assume to be measurable. With concave work, p(q) is greater than 0.5 for typical values of q. This means that we know what M is and, for predictable economic reasons, it’s usually low. We can define “perfect work”, and management’s job is to track and reduce error. If the average error rate is 5% (that is, p(q) = 0.95) then management might warn the people above 10% and fire the ones above 15%. Concavity usually means that we can define perfect work and specify it, and thus we can usually have machines do it, with p(q) very close to 1 and costs extremely low. If the technology to do so doesn’t exist now, some machine learning researcher is out there working on it. Driving is just one example of concave labor that will almost certainly be automated in the next couple of decades. Much of medicine will be automated in the same way. There will, probably as long as there are humans, be people called “doctors” whose job it is to understand, monitor, and communicate the purposes of the machines– that work will always be convex– but I doubt that we’ll have human surgeons in 2250.

There is, however, a different class of labor where M is unknown, because almost no one has achieved a level near it, and “low” values of p (below 0.5, and possibly well below it) are acceptable. For example, getting a 1% share of a $10 billion market is no small achievement. Generally, for hard jobs where p(q) is low for observed values of q, M is going to be unknown, but very high. Since the logistic function p is approximately exponential where p(q) << 0.5, the result is an input/output curve that is, over all relevant values of q, an exponential growth function: highly convex. There is a “levelling off” point somewhere, where the exponential behavior stops and saturation sets in, but that’s tomorrow’s problem if p(q) << 0.5. The essence of convex labor is that no one can specify what “perfect completion” is, because the maximum potential hasn’t been found yet.
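
Here’s a tiny sketch of that payoff model with made-up parameters (M, the logistic midpoint, and the scale are arbitrary), just to show the two regimes side by side:

    # A tiny sketch of the M * p(q) payoff model with made-up parameters, just to
    # show the two regimes: roughly exponential growth while p(q) << 0.5 (convex
    # labor), saturation once p(q) is well above 0.5 (concave labor).
    import math

    def payoff(q, M=1_000_000.0, midpoint=10.0, scale=1.0):
        """M * p(q), where p is a logistic performance function centered at midpoint."""
        p = 1.0 / (1.0 + math.exp(-(q - midpoint) / scale))
        return M * p

    for q in (2, 3, 4, 12, 13, 14):
        print(q, round(payoff(q)))
    # Convex regime (q = 2, 3, 4): each extra unit of input multiplies the yield
    # by about e (335 -> 911 -> 2473). Concave regime (q = 12, 13, 14): the same
    # increment adds only a few percent (880797 -> 952574 -> 982014).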

Cost- and risk-reducing management strategies generally fail over convex labor, because of their effects on morale and communication, to say nothing of the attrition of talent as the most capable leave for other jobs. In the individually concave, industrial-era, world those aspects of work were treated as “marginal” factors that have small effects on q. For concave work, these “soft factors” were subordinate to the hard concerns of cost minimization. However, for convex labor, small margins in q matter. That is the nature of exponential functions: small margins in input produce large variances in output.

Trust sparsity is devastating when an organization is engaged in convex work, because people who do not feel trusted by management, and who do not believe their bosses will be fair to them, are never going to perform at the highest levels. That’s why trust sparsity fails amid convexity in individual work. You don’t lose 5% (as you would, for concave work) when you treat employees like pure subordinates; you’re lucky if you get 5%. It also falls down when it comes to synergistic team-building. With trust sparsity, communication problems are commonplace, and forming appropriate teams to solve complex projects is not possible. That’s the moral of the “stone soup” parable discussed in Part 19: the trust-sparse and famished village is saved by outsiders who use a nonexistent delicacy to create the trust density that enables them to make a genuine meal. In a trust-sparse environment, the only way to get team convexity is to use deception.

The original formulation of the MacLeod process is that it’s a financial risk transfer between entrepreneurial, hard-working Sociopaths and risk-averse Losers, with the Clueless brought in to intercede and hold up a veil (Effort Thermocline) once the Sociopaths become complacent. I’m not sure that that’s quite right. If nothing else, an honest risk transfer (in which the party exposed to less risk enjoys less reward) is certainly not sociopathic. It’s how finance works. Rather, I think the infusion of bad Sociopaths (true psychopaths, as a psychologist would define it) comes later. Trust sparsity and clique formation mean that people need “protection” because isolated points (single-node cliques) are going to be flushed away. Personal trust is hard for a dysfunctional corporation to encourage, so they replace it with an impersonal, official organizational trust– credibility– in the form of job titles, project allocations, and special awards. Even though credibility’s supposed to be strictly a reward for performance, an internal market for that forms. The vulnerable start panic trading and become MacLeod Losers, and the ones who put themselves on the other side of those trades become an increasingly powerful (and credible) Sociopath tier. Useful idiots who still believe in credibility and Santa Claus become the Clueless. That is exactly how an organization is handed over from “good Sociopaths” (Technocrats) to the bad ones. It all starts when simple trust disappears and, as I said, it’s a systemic and binary property.

Your organization is probably failing

Simple trust does not mean “trust everyone, all the time, on everything”. Database servers should be backed up. It also doesn’t mean one should put oneself in a situation where a single “bad apple” can ruin the company. That’s where Theory Y went wrong. It was too liberal with trust to be practical, and the uptick in economic inequality that began in the 1980s exposed organizations to an onslaught of bad-faith actors. Trust density is extremely important, but it needs to be defended with enough sobriety that it still makes sense.

Simple trust also doesn’t mean that you trust people stupidly. People who prove themselves undeserving of trust must be fired, lest they threaten the prevailing trust density. This is why so-called Performance Improvement Plans are such a terrible idea. Once an employee has been deemed a bozo, separate. Carrying out a kangaroo court for two months while a “walking dead” employee poisons morale, just to save chump change on severance payments, is idiotic. You’re not rewarding failure when you write a severance check; you’re reaching an agreement that is mutually risk-reductive.

Rather, what simple trust means is that people who are hired are held to be competent and capable of doing good work, and are generally permitted to work for the company directly. The issue with management in a trust-sparse environment is that it quickly realizes it has the power to reduce a person’s credibility to zero. Given the inherent conflict between project and people management, this typically results in middle-management extortions that leave the company unable to evaluate projects properly; without a strategic rudder, it generates lots of fourth-quadrant work. Technically speaking, it’s rare for a manager to be able to explicitly fire someone (too much legal risk), but a mean-spirited performance review system and a competitive internal market for transfers have, in essence, the same effect.

Until about 20 years ago, it was very rare for performance reviews to be part of an employee’s transfer packet. If they were, managers would give a glowing review on paper and a realistic review verbally. Taking care of the employee’s long-term career interests showed a commitment to mutual loyalty. What changed? Enron. No, I’m not talking about the 2001 revelation of accounting fraud that made it one of the most spectacular corporate failures in history. Rather, I mean to focus on its sterling reputation in the 1990s as one of the most innovative companies in existence. Enron made tough culture an art form, and was lauded for doing so, because this was a time when even many liberal Americans thought that “lazy people” were getting too many breaks. Some of the innovations that came out of the Enron-style performance review systems were:

  • Performance reviews are numerical and forced to comply with a pre-defined distribution. It’s a zero-sum game: for one person to get a good review, another must get a bad one. Performance reviews also contain written feedback and direction, but the only thing that actually counts is “the number”. It determines compensation, promotion opportunities, and internal mobility. Because review points are a scarce resource, in-fighting and horse-trading among managers become an invisible but potent force directing one’s career.
  • A full review history is part of the employee’s transfer packet. This is the distilled essence of toxic trust sparsity. It means that employees are no longer trusted to represent their contribution to the company when they seek internal mobility. Their personal credibility is reduced to zero; all that matters is the “objective permanent record” they carry with them.
  • “Passive firing”. Instead of an employee being fired by a direct manager (legal risk) his reputation is demolished in the review system, and he may not even be aware of it. This is how passive firing works: when the position on his project is cut, he’s technically permitted to interview for transfer, but immobile because of the smear. I don’t think that passive-firing systems reduce true legal risk, because damaging an employee’s reputation over months is much more damning than simply terminating him. I think the purpose is for the series of rejections (which are never explicitly connected to the bad reviews) to break the employee’s will and discourage the lawsuit from happening in the first place.
  • Invisible career tracking. In tough culture, it’s not enough to do well by one’s boss. One also has to have a powerful manager. What that means is that there are well-known “good” and “bad” teams to be on, and everyone important knows which are which, but an outsider considering a job can’t tell based on the work alone. No matter how hard an employee works on one of the “bad” teams, it won’t matter because he’ll never be able to move to a “good” one. He’ll be competing against people who are moving from one good team to another and whose managers had more power.
  • Welch Effect. The Welch Effect refers to the idea that the people most likely to lose their jobs are junior members of macroscopically underperforming teams (who had the least to do with said underperformance). In tough culture, that’s true, although I’d replace “underperforming teams” with “teams with politically unsuccessful managers”. Often, managers who have fewer review points to give out will sacrifice a team member or two in order to make decent scores available to the more senior members, who would otherwise grow frustrated with their lack of advancement. (This loyalist effect is a major part of what turns tough cultures back into rank cultures.) The high turnover on the “bad” team, however, exacerbates its negative reputation and performance problems.

We know what happened to Enron, and there’s little doubt in my mind that the internal horse-trading surrounding credibility and performance review scores is intimately connected with its externally visible ethical problems. Tough-culture review systems generate a “truth” about personnel that is, in fact, subject to so much power-playing and dishonesty as to make a mockery of the concept. This kills off any respect for truth as an idea, and leads to a corrosion of ethics. It doesn’t always result in defrauded investors, but it’s always corrosive. Microsoft’s similar system is credited for more than a decade of mediocrity. Google has the same system, but with a brilliant twist: the “calibration scores” are secret! Google is the only company that I’ve encountered where a manager can spontaneously give positive verbal feedback and secretly black-list a report. Google has a silent problem of managers doing exactly that to keep people captive on undesirable projects.

So, look around your company. Are performance reviews part of the transfer packet? If so, you live amid trust sparsity. The company has decided that most of its employees are bozos, and it has decided to label them as such, as a favor to management. It doesn’t trust you to represent your contribution and abilities.

Solving It

Sparsity of simple trust is devastating. Because the “bozo bit” becomes a global artifact, companies can be observed to “flip the switch” as a collective. Lose trust density, and you’ll almost certainly lose your corporate culture. At that point, it doesn’t matter whether you’re a bank or a tech company or an advertising firm; you no longer have a good place to work. Innovation and genuine teamwork will end, except in hidden corners of the company, and all you have to look forward to is a twilight of zero-sum squabbling over credibility, a behavior that will linger on until the damn thing finally dies. So trust density needs to be a pillar of the organization if you want it, ten years later, to be something worth caring about.

What usually goes wrong? In startups, it seems to be rapid growth, but I don’t think there’s an intrinsic growth limit– some magic number like “25% per year”. I think what kills the bulk of these companies is that they hire before they trust, failing to comprehend the permanent loss that this inflicts upon them. That is true sociopathy, because it means that people are brought into the venture with the express purpose of putting them in a miserable, subordinate position. It’s also bad for almost everyone, because it creates insecurity in the rest of the labor force: being there no longer makes a person credible, so there’s a race for elevation as the company prepares for its split into two classes: the real members, and the bozos.

For a company to hire before it trusts is not a short-term expediency. It’s the one thing that a company really can’t afford.

Gervais / MacLeod 19: Living in Truth, fighting The Lie

Yahoo recently bought Summly, a startup run by a 17-year-old, for $30 million. Since the product was shut down, it was a “talent acquisition” (or, “acq-hire”) intended to hire the team, making the list price a pure hiring bonus. This move has, predictably, generated a lot of buzz.

Let’s look at the economics of the damn thing. The information that’s coming out seems to indicate that three engineers will join Yahoo, with an 18-month commitment. Prorated over that time, that’s $6.7 million per engineer per year. Of course, Yahoo hopes that these engineers will stick around for longer than that– perhaps five years, making that price only $2 million per engineer-year. Such numbers are not atypical in the world of acq-hires. Companies routinely pay $5 million per head (40 years of salary, at market rates) just to get a team of validated talent. There are two ways to look at this. The first is to conclude that in-house engineers are getting screwed: if a relationship with an engineer (expected duration: about four years) is worth $5-10 million, doesn’t that mean they’re severely underpaying in-house talent? I think software engineers are underpaid, but on average, we’re not worth $2+ million. Some of us are, most aren’t.  The second possibility is that the engineers being hired are just far superior to Yahoo’s in-house talent. I doubt that as well. I’m sure that Yahoo has amazing software engineers making much less than $6 million per year.
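
For what it’s worth, the arithmetic above is a two-line sanity check. Here’s a throwaway Python sketch using the figures as quoted (the reported price, team size, and retention horizons; nothing here is independently verified).

    # Rough cost per engineer-year of the Summly deal, using the figures quoted above.
    price = 30e6        # reported purchase price
    engineers = 3       # engineers reported to be joining Yahoo

    for years in (1.5, 5):   # the 18-month commitment vs. a hoped-for 5-year stay
        per_engineer_year = price / engineers / years
        print(f"{years} years of retention: ${per_engineer_year / 1e6:.1f}M per engineer-year")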

What does it say that Yahoo is buying high-school-age engineers at a panic price?

Of course, some people are taking it as a sign that Yahoo doesn’t have internal talent, or can’t get it. That’s offensive, and almost certainly not the case. I’m sure that Yahoo has plenty of capable people. What acq-hires say is not that a company is so bereft of talent that it can only get it from outside, but that it can’t recognize talent at the bottom. The talent is there, but the middle-management filter is so defective that the executives have no clue what they have. This is similar in character to hoarding. By “hoarder”, I’m not talking about people who keep receipts or silly mementos, but the pathological kind whose living quarters become filthy, dangerous, and borderline uninhabitable, and who require psychiatric help to live normally. A true hoarder has to buy a new coat every winter because the old one, without fail, gets lost in a personal junkyard of useless stuff. He ends up needlessly spending $500 on winter clothing every season, because his house is such a pigsty: what he needs is already there, but inaccessible. This is the problem that rank cultures have when it comes to talent. They become so unable to spot talent at the bottom that, even though they have it “lying around somewhere”, they can’t (or won’t) find what they have. So they have to panic-buy it in this ridiculous bubble climate. What’s this about? It’s about trust.

Yahoo is not getting hoodwinked– at least, not in this. As the executives see it, they’re buying a trusted team. Capable software engineers are worth these “absurd” amounts seen in acq-hires if they are trusted by the organization. Give a good engineer full autonomy over her work, and give her important work, and she’ll deliver millions in value. That probably applies to these guys, but it also applies to Yahoo’s stronger engineers. On the other hand, if you don’t trust her, use her for fourth-quadrant work, and fail to develop that talent, then she’s worth, if you’re lucky, 2 to 5 percent of her peak potential. This is only tolerable in software because 2-5% of a competent engineer’s peak potential still exceeds the market salary.

Trust sparsity

Large rank-culture companies seem to be talent hoarders, with no exaggeration in the use of the word hoarder. They bring smart people in, because the talent is “worth having around”, especially at a wage level that is insignificant to a corporation. They let it go to waste. They build up a formless array of people at the bottom (with more rigid, but pathological and constantly changing, managerial structures to organize and stack-rank them) and, not knowing where the talent is, tend toward prevailing distrust of everyone down there. Clearly some of those nincompoops at the bottom have talent and some don’t, but it’s rarely worth it to sort through the mess in the basement. This becomes a self-fulfilling prophecy. Good engineers don’t want to work in companies that don’t trust them, so they leave; good managers don’t want crappy reports, so they quit. The end result is an all-levels flight of talent from the firm, and a loss of trust at every level of it. Executives start assuming that their reports are all morons (the “bozo bit” defaults to the “on” position) and that the only candidates for decision-making roles are “special people” hired from outside. Workers stop believing that their managers give a damn about their career development. It’s an omnilateral breakdown of trust that is very hard to reverse.

When people stop trusting each other, they become dishonest. No, I’m not saying that a company like Yahoo is full of liars– I doubt that to be the case. However, there are degrees of honesty, and the important ones (e.g. willingness to share bad news and explore difficult realities, as opposed to merely furnishing truthful answers to simple questions) require trusting the other party with the truth. That’s the endgame of trust sparsity. It creates a world in which some degree of dishonesty is not only beneficial, but necessary for one who wishes to survive.

I mentioned, in Part 1, the social currency of credibility that is supposed to come from work performance, but that Sociopaths find other ways to get. They realize that, even if the organization wants to think that social status mirrors contribution and capability exactly, it can be tricked and “merit” can be bought on a black market. Trust sparsity exists in an organization when people tend to distrust the competence and decency of the other players. Employees doubt they’re fairly compensated or well-directed, managers distrust their reports to get their jobs done and tend to micromanage, and people are afraid to work with other teams, because the default assumption about another person in the company is that he’s an unreliable idiot. The “bozo bit” starts out in the on position (meaning people are, by default, regarded as idiots until proven otherwise).

Trust density is the opposite, in which people are generally assumed to be competent and decent. The “bozo bit” starts out in the off position, and only people who prove to be really bad are distrusted (and in a functioning organization, they’ll be let go). This matter of trust sparsity versus density seems to be a binary property of social groups. Once a company “flips the switch” to trust sparsity, it becomes impossible to get anything done without disproportionate credibility. This turns into a “permission paradox” state where the only way to get a project that would confer credibility is to have it already– unless you want to take the “fake it till you make it” strategy. That’s when MacLeod Sociopaths (who, again, are not always bad people; but invariably willing to break rules) start to take over. Bad artists borrow, good artists steal.

That’s why “why not?” cultures are superior to “why you?” cultures. MacLeod Sociopaths will just take credibility, no matter what the official culture is. If they can arrogate it silently, they do so. They’ll ask for forgiveness, not permission. The difference between good Sociopaths and bad ones is what they do with that purloined freedom. A “why you?” culture ends up relying on its Sociopaths, who are a difficult crowd to audit.

When you have trust density, honest people are at an advantage in the environment of transparency and collaboration that it generates. Getting real work done is what people recognize. When you have trust sparsity, however, you end up with communication droughts, and it tends to be dishonest manipulators who acquire credibility and push themselves ahead.

Living in Truth, and the Lie

This is the most personal topic in the MacLeod series. The other organizational pathologies I suffer abstractly, as much as anyone else does. Those traits of organizations irritate me, and I find them perniciously inefficient, but they don’t mess up my life. This one has rocked my career (in good ways and bad) from time to time. I care a lot about it. I’m going to talk, for a bit, about living in truth.

When you live in truth, you decide to be consistently honest, and to assume good faith from other people (although you do not take them at literal word). You live and work as if it were a trust-dense environment, and you don’t try to hide the fact that you’re doing so. You’re honest with your manager, even if he’s not forthright with you. You inform counterparties of the risk inherent in deals you wish to make, even if it’s to your disadvantage. You don’t bullshit, and you don’t tolerate others’ nonsense either. In the classical sense of the word, it’s cynical: live virtuously and honestly, assume basic goodness in others, and oppose dishonesty. 

The modern concept (and the name) comes from Vaclav Havel, who championed “anti-political politics”. The idea is, rather than directly opposing an overbearing political force, to live as if one were free. No violence or protest needed. Just do the right thing anyway. This is a courageous and rare thing to do, for the obvious reason that political authority (especially under Soviet rule) can be terrifying. If one person lives in truth, he gets shot or thrown in jail (as Havel was, for several years). If a million people do it, society changes. The Lie’s only scalable weapon in the face of exposure is further dishonesty and, eventually, it becomes absurd and falls in on itself.

It’s actually much easier for us corporate denizens to live in truth, because we really have little to lose. What might happen? A job might end. That’s the loss of a relationship with someone who didn’t value us in the first place. We might get bad references (hire a lawyer; a well-written C&D will clear that up). Then there’s the “job hopping” stigma. Okay, that’s real, because there are a lot of imbeciles out there who are stuck in the 20th century who’ll throw out your resume for having “too many jobs”, but there are non-imbeciles out there as well. These are all serious consequences, but nothing compared to what real dissidents have faced: prison and death. So what the hell is our excuse? I’m not asking for self-immolation here, but moral courage would be nice. My experience in the corporate world has convinced me that it’s thin on the ground. People prefer the comfort of the Lie over a life in truth.

What does “truth” mean in a corporate context? It means doing the right thing, even if it hurts. It means placing value on personal health and progress, profitability of the business, and cultural integrity. It means taking responsibility for strategy at one’s appropriate scope, rather than using the following-orders defense for failure. It’s never easy, and it’s often punished. A synonym for living in truth is for an employee to be (a word I’ve used before) self-executive.

In a culture of truth, employees are self-executive and it’s assumed that they will be. Trust density dominates. I don’t intend to claim that what I’m discussing is a panacea that magically causes dishonesty to go away, but a self-executive world is one in which the honest can fight back. They have a chance. They’re informed, and it’s worthwhile for them to speak because people in power will actually listen to them. Nothing can change the fact that there are bad actors out there, and good people who work together badly. Even the best organizations will have to deal with that. But a self-executive world is one in which good people can still win.

I’ve talked about truth, and we can agree that it’s good. What does The Lie look like? Well, in typical corporations, the powerful aren’t explicitly dishonest. They’re careful not to say anything on record that is literally untrue. It’s more that they’re so opaque with information that emergent dishonesty is the norm. Valuable information is so guarded that people don’t even know if they’re doing their jobs properly, which makes it easier to mask a termination as “for performance”. Managers can claim that “there isn’t money in the budget” for a raise when that is only correct with the added context, “for you.” There’s a lot of dishonesty that opacity enables.

Corporations like The Lie because it creates an executive in-crowd. A nasty joke has been told, and the target doesn’t even notice. Sociopaths get the joke, and the Clueless butts have no idea what’s happening. MacLeod Losers would get it, but they’ve chosen not to be in the same room. The Lie is also an extremely powerful weapon. If you’re in on the Lie and have some control over its direction, you can use it to take people down. That’s why reputation economies tend to be hacked by the worst sorts of people. The Lie is very good at ruining reputations. That’s how the fucking thing fights back: it reduces the credibility of its opponents with (big surprise) deceptive half-truths, opacity, and outright lies.

In terms of corporate employment, reputation damage– in forms like immediate firings, bad references, and possibly frivolous lawsuits– is all that it has. That should establish it as very weak. Why? The Lie’s counterattack has constant total strength but a variable number of targets. It’s like a fireball spell that, if it hits one target, does enough damage to kill a demigod but, if it hits fifty, barely scratches them. If everyone fights The Lie and The Lie fights back against everyone, its weapon is so diluted as to be impotent. No one will buy into The Lie if it starts smearing more people– especially if they experience getting smeared, which is one way for a person to learn viscerally that The Lie is a lie.

Right now, people are terrified of bad references, short jobs, and public terminations. People don’t “bad mouth” unethical employers for fear of severe career repercussions. Now, I tend to agree that people who air “dirty laundry” (mistakes and embarrassments within normal bounds, that any complex entity will endure, as opposed to real ethical problems) are doing something that they shouldn’t, but some companies and executives are just deeply unethical and deserve to have their secrets blown. Right now, this sort of thing doesn’t happen until it’s far too late– investors were defrauded, employees robbed, and customers left hanging– because no one is willing to risk long-term blacklisting to do something that, while desirable to society, confers little personal yield. The Lie perpetuates itself by making truth scary for the individual who might expose it.

“Stone Soup” and convex dishonesty

Why does The Lie exist? I’m going to tackle a related question: is dishonesty always bad? With dishonesty, I’m not talking about “white lies” or inconsequential politeness or even the semi-formalistic lies (such as never disparaging an ex-employer or boss, instead saying “I was looking for new challenges”) required by decorum, but rather about willful deception of other people with the goal of altering their behavior. In other words, deception means serious sociopath stuff. So, I think it should be obvious that we’re going to fall somewhere between “always wrong” and “most often wrong”. I intend to convey that it’s the latter: it’s most often wrong, but not always. There are situations that require dishonesty.

There’s a parable, probably going back to medieval times, about a village in a deep state of famine. Each villager has food of his or her own, but they hoard it, never sharing or trading it because they distrust each other. Everyone’s malnourished; they’re probably making bad decisions, and slowly dying.

A pair of outside strangers, also hungry, comes to the village and asks for food. Slammed doors. Nothing. So they camp out in the town square, put a rock in a cauldron with some water, and start boiling it. Curious villagers, from time to time throughout the afternoon, come by to ask what they’re doing. “We’re making Stone Soup, a delicious specialty where we’re from. Would you like some?” Villagers agree to partake, and the travelers suggest that Stone Soup is even better with just a few carrots. Parsley’s good too. Rice. Chicken. Soon enough, the whole village is on it, with each contributor thinking he or she is adding just a little extra to a completed product. Of course, the stone is actually inert: “Stone Soup” is just hot water! So the stone is taken out at the end, and the soup is served to the village. Everyone gets a much healthier meal than they’ve had in months. Victory. The End.

This is a case where there’s pre-existing trust sparsity within the village. The villagers don’t share food, because they don’t believe the others will be fair to them. Instead, each eats only the one food product he or she has, and they’re all malnourished. The travelers, needing to eat, do something dishonest. Asking for food doesn’t work, so they make up a nonexistent delicacy, offer to share it, and ask people for ingredients one by one. The result is that everyone gets a bowl of real soup. They all benefit, but it’s still dishonest. This isn’t a polite white lie or a “protocol lie” where both parties understand the truth can’t be told. It’s intentional deception with the explicit goal of altering economic behavior: legally, we call that “fraud”. Yet it’s clearly a good thing they did. This is a model case of convex dishonesty.

What’s convex dishonesty? It exists when one party gets commitments from others through dishonest means, in a situation where a small number of commitments leads to failure but a large number pays off many times over (convexity). The goal, of course, is to succeed and pay everyone back.

For a less defensible example, let’s say that I have a business strategy that will require investments of $1 million from five people. If all contribute, we’ll net $21 million. I take $1 million for the execution and pay $4 million back to each of the principals. If we don’t get all five commitments, however, everything is lost. It’s very risky to go in as the first, second, third or fourth investor, because you’re betting on the whims of all the others. The fifth investor experiences no risk. A devious way to maximize my chance of success would be to tell each of the 5 participants that the other four were already committed, implying that there’s no risk. If I pull it off, everyone wins. We all get a payday. However, if one of those players can’t invest, or doesn’t trust me, then we all lose.

Why is that convex? It pertains to the input-output relationship between resources and payoff. In the example above, the payoff function is zero from 0 to 4 commitments, and $21 million at 5. That’s the “hockey stick” graph that is the epitome of convexity. Typically, a convex profile means that a mediocre commitment will result in failure (hosing the investors) while a large one will deliver outsized success. Investors are effectively betting on whether they believe others will commit, and the fraud is in convincing them that the others have.
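
Here’s a minimal Python sketch of that payoff profile, using the numbers from the hypothetical five-investor deal above (the function name and its parameters are mine, purely for illustration).

    def gross_payoff(commitments, threshold=5, jackpot=21e6):
        """All-or-nothing payoff: zero below the threshold, $21M gross once all five commit."""
        return jackpot if commitments >= threshold else 0.0

    # The "hockey stick": flat at zero for 0 through 4 commitments, then a jump at 5.
    for n in range(6):
        invested = n * 1e6
        print(f"{n} commitments: ${invested / 1e6:.0f}M in, ${gross_payoff(n) / 1e6:.0f}M gross out")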

Stone Soup has a similar profile. The value of the soup is somewhat subjective, but convexity is clearly in play. If one villager puts food in the soup, he gets screwed. He’s giving away some of his food for a “soup” that he could make at home. He’d probably be very angry. If twenty villagers participate, however, they get a food that they couldn’t have made alone.

With convex dishonesty, you’re typically generating validations (often “social proof”) to create the impression that a project is almost in a desirable state, in order to motivate people to contribute so you can get to that point. You’re selling a “vision” to get commitment before you have any way of knowing whether you’ll gather enough to deliver. I’d imagine that most startup entrepreneurs understand this intuitively: one has employees, investors, customers and press, and all are looking for progress with the other groups to see general “traction”. It might be tempting (and it’s generally quite wrong) to exaggerate one’s success in other departments– for example, to hire people at a low salary with the promise that “Series A funding will be here in two months”, or to mislead investors with inflated customer numbers (don’t do that). It’s just very hard to orchestrate a situation where that heterogeneous collection of needs and resources grows together.

Convex dishonesty isn’t always good. It’s often bad. What’s wrong with it? Well, most scams look like convex dishonesty. For one game-theoretic example, consider the “drop-our-books” prank played in junior high schools across America. The butt of the joke is told that, at a certain time, the whole class is going to engage in some disruptive behavior (such as dropping one’s book on the floor, clapping, or yelling out). If disruption of class is considered “good” (e.g. positive utility in schadenfreude against the teacher) then it’s convex. If one student does it alone, he’s embarrassed and possibly punished, so he loses. If all the students do it, they laugh at the teacher’s expense but can’t all be singled out, so the group wins. Of course, the fraudulent aspect of this is that only one person (the butt of the joke) will actually engage in the behavior. The other students laugh at his expense.

In fact, phenomena that “look like” convex dishonesty can reach extremes of evil. Ostracism is a case of that. I’m not talking about mere individual social rejection, but when a community is persuaded to reject someone entirely. Influential people in that group create the usually-fraudulent perception that no one likes the target, which compels individuals to reject that person because “everyone else” dislikes him. It’s not convex in the typical sense (there’s no clear “payoff function” with a convex shape) but it has similarities insofar as it uses dishonesty to push the community from one Nash equilibrium to a worse (at least for the rejected person) one.

Trust sparsity and convex dishonesty

If we jump back to Stone Soup for a second, we find an impressive moral message. In this contrived (but not uncommon) circumstance, deliberate deception is heroic. Amid the trust sparsity of that village, a convex deception is the only thing that can get them to work together and produce a decent meal.

I contend that these travelers are archetypal MacLeod Sociopaths. Yes, they saved the villagers’ lives. They were certainly decent enough to share the Stone Soup that they created. (A modern executive would take an 80-percent cut of the soup as a “stone fee”, giving the villagers the scraps.) They also did it for selfish reasons: they didn’t want to starve either. One can argue them to be thieves: all they brought to the soup was a worthless stone, but they got to share in the final product. Their fundamental, catalytic function, however, was to make this trust-sparse group of people work together with the lubricant of convex dishonesty: the lie that this Stone Soup had pre-existing value, and just needed a little bit more from each. Whether these outsiders were good-hearted altruists or dishonest egoists (sociopaths?) is beside the point. They were necessary.

That is what trust sparsity is about. Within a trust-sparse corporate environment, to do anything requires a certain dishonesty (also known as “social proof arbitrage”). Trust sparsity means that everyone’s default will be to look at you with the “bozo bit” on, and ignore your input. The first thing you must do– the only thing that’s important– is flip that damn switch using whatever means possible. Until you’ve done that, nothing you achieve will matter. After you’ve flipped your “bozo bit” to the off position, you can get some real work done. But if your “bozo bit” is on, the only thing you’ll be able to do is fourth-quadrant work. Get out of that mess as soon as you can. Fourth-quadrant work will stink up your career if you’re on it for too long.

What exactly am I advocating?

My message might seem muddled at this point. I railed against The Lie, but I just said that people should flip their “bozo bit” to off using “whatever means possible”. It’s actually quite altruistic to do so, because you can’t get real work done until you’ve zeroed that “bit”. Does that mean I’m advocating dishonesty? Possibly so.

When you live in truth and become self-executive in the honest way, you’re taking on a major risk. You’re flipping your own “bozo bit”, and letting it be known that you expect others to do so as well. You’re refusing to be deprived of credibility, in a visible and above-board way. Often, this means you’re arrogating more autonomy than your manager has. It’s dangerous. It can get you fired. Most people prefer the safer, subversive route of convex dishonesty. They’re not trying to defraud anyone, though; they fully intend to pay the villagers back.

How do The Lie, and convex dishonesty, interact with the traditional MacLeod tiers? MacLeod Losers live with the Lie. It becomes an annoying landscape feature, rather than a moral calamity, to them. The Clueless tend not to know that it is a Lie. They’re the “useful idiots”. Most of the MacLeod Sociopaths, however, have risen to a level (just past the Effort Thermocline) where they’re cognizant of The Lie. It’s like a hedge maze whose structure is evident from above, but befuddling and illegible from the inside. Sociopaths, with a reaper’s-eye view, learn how to use The Lie.

Where are the people who oppose The Lie and live in truth? I contend that those are the natural Technocrats, and it’s telling that the original MacLeod pyramid has no place for them. I guess such people are assumed to be flushed out, and that’s not a bad assumption. That is the fundamental evil of organizational opacity: truth-tellers can be isolated, punished, and ejected. The Lie can push them out, and make itself stronger. Its opponents are either pushed out in a humiliating way (“making an example”), isolated and ejected invisibly, or silenced into non-participation. At the same time, the bad MacLeod Sociopaths learn how to mix their own power with The Lie, an alloying process that makes both stronger.

The Lie loves trust sparsity, because it makes it easy to play divide-and-conquer games against the powerless. Moreover, the only way to get any work done in a trust-sparse environment is to use convex dishonesty. It’s to counterfeit credibility (go ahead, it’s usually a bullshit currency that deserves it) as far as you can, and to live in a “why not?” culture instead of “why you?” by changing your history as much as is needed. That’s a practical necessity for most people (even good people in the desirable subslice of the Sociopath type) if they want to get any work done.

I don’t like that. I don’t like that the need for dishonesty is there. Even good lies– even full-on, obvious-after-the-fact convex dishonesty– are damaging to relationships. My advice: be cautious. Be smart. If a personal relationship is valuable to you outside of the organizational context, don’t pollute it with a lie (even a convex one). But most human organizations won’t let you do X until you’re a “real X” with 5 years of experience in a 3-year-old technology. How do you become a real X? You should just become one, through any means possible. Your decision, today. Better to fake it now than to never make it. “You don’t need to hire an X. I am the resident X. Of course I have production and leadership experience!” Never claim a specific competency that you don’t have, or promise work that you can’t fulfill, but if you need to inflate experiences to tweak perceptions in the right direction, go right ahead. Your enemies are cheating in the exact same way, and they’re much worse people, so why not? If you can afford to live in truth, do it. If you can’t, then bolster your career with enough convex lies to get permission to tackle real work. But then, because you still are a decent person, it’s on you to deliver what you promised.

Ultimately, a lot of decisions aren’t made on merit, but on gut feelings derived from social status and “feel”. That is why the Draco Malfoy type whose family “was Ivy before George III” sees his career advance just a little bit faster than everyone else’s. It’s not that there’s a conscious decision to promote him based on irrelevant social status. It’s just how people work when trust sparsity has set in and people are waving feeble lanterns at midnight. If you can push yourself forward with just a little bit of convex social-proof arbitrage, then you should. Like I said, I don’t advocate this style of deception if you want a persistent personal relationship– that slight social superiority puts one just above zero in a trust-sparse environment, but it’s not worth it to gain that petty sort of elevation in genuine relationships– but it’s a fine way to move about at work.

Or, you can go the other way. You can disobey the Lie. Sometimes that’s the right thing to do, as well. You can get up at 4:00 in the morning when The Lie is asleep and get to work. You can live in truth. Both, as I see it, are morally valid options for the individual.

Organizational benefits of trust density

I consider it morally acceptable for a person to use convex lies to push his career forward. Why? Because most companies put people in roles that are three levels below their frontier of ability, and that assignment of fourth-quadrant work is itself dishonest. How am I justified in saying that crappy work assignments are dishonest? The truth about the junk work is that it’s evaluative. It has very low importance in the function of the business, and not much is learned in doing it. Rather, it’s just there to see if the person is “good enough” for real work, a decision that often isn’t made until he’s “paid dues” and “proven himself” in a years-long wringer of boring, unimportant work with high expectations of dedication and obedience to managerial authority. In my opinion, this is a terrible statement about an organization. It means that it doesn’t trust its own hiring process.

Some people (MacLeod Sociopaths) bypass all that evaluative time-wasting nonsense and put themselves on real work. This can be done by public honesty (living in truth) but that tends to entail more risk of sudden income loss than most people can tolerate. So usually, they do it in dishonest ways. They fake credentials and experience, careful never to explicitly lie, but fudging on subjectives like “production experience” and “leadership role”. They find social proof arbitrages and credibility trades and hack the system as it exists. This is good for them, but it’s bad for the organization itself.

The Lie can be seen as a waste-pile of formerly convex dishonesties that were useful to the organization at one point but are now pathological. For example, let’s say that the organization was divided on the matter of who should be CEO: John or Kara. John got the job over his more competent co-founder, Kara, because he had “investor connections” that weren’t real and never came through. However, with John as the leader, they were able to work together as a group, and other funding came in later. A convex deception! Three years later, it’s discovered that John’s claim to the CEO job was utterly false. The company can either fire him or (often, the more expedient choice) assimilate his lies by changing the story.

I tend to think of organizational opacity as a core aspect of The Lie, rather than something that merely enables it. For example, companies always claim that compensation is fair, but keep specifics so murky that no one can really audit them. The reason they do this, of course, is so they can be unfair when it’s expedient. If they’re desperate for a particular hire, they’ll go up by 20% for that person without raising salaries across the board. In truth, the culture of opacity and hierarchy that companies create around compensation, division of labor, performance evaluation, and pretty much anything else that matters is all there to enable expedient lies. Those expedient deviations are supposed to cancel out over time, but MacLeod Sociopaths find ways to turn them into a true currency that they can trade and invest for profit. As they do so, The Lie invisibly gains strength. Virtually no one intends to build up The Lie, because almost everyone is acting out of self-interest only. It happens day by day. When compensation becomes unfair and information becomes asymmetric, The Lie gets stronger. When internal headcount limitations are put in place, and closed allocation sets in, The Lie gets much stronger.

Most convex dishonesties are “good lies” of the Stone Soup variety. People embellish credentials to counterfeit credibility and thereby get permission to do real work that benefits them and the organization. Those convex lies generally don’t contribute to The Lie directly. In fact, these are people fighting against The Lie, by subverting its attempts to disempower them. Unfortunately, they’re often indirectly responsible for feeding The Lie. When a convex lie fails (i.e. the payoff is never realized, and the lied-to parties get burned) people become, justifiably, angry. They bought into a party with counterfeit credibility, and lost. This validates credibility’s necessity! (What happens when people with real credibility fail to deliver? Credibility is defined more conservatively, and the environment becomes more trust-sparse and dysfunctional.) The Lie becomes stronger. Those who are aware of The Lie being a lie are never fully comfortable with it, but they prefer the static falseness of The Lie over the chaos of unknown truth values. This gets at one of The Lie’s stabilizing social purposes. The Lie does try to wipe out Truth, and with a vengeance; it fights that battle with the most ardor imaginable, because that’s an existential struggle. But The Lie also fights other lies, and that’s why people tolerate it.

One of the easiest ways to make enemies is to counterfeit some social-status currency (or credibility). It pisses off both sides of the question of how people feel about that currency. People who buy into it become enemies, in defense of what the counterfeit attempt just diluted. People who oppose that currency despise the counterfeiter with equal fervor, because the fakery validates it. So when people feign credibility for a convex deception and fail, they become a common enemy for everyone. That’s good for The Lie. The Lie loves common enemies, and if those enemies are liars, it can make itself look truthful or, at the least, “credible” (there’s that concept again). That’s because people tend to assume false dichotomies on a variety of moral issues, creating “sides” that lead to wrong conclusions.

I’ve opined on moral alignment and noted that, while good always treats other forms of good with basic respect– there can be disagreement and debate, but not malicious harm– there is no such covenant among evil. Good respects good as inherently valuable. Evil does not respect other evil; it only values strength. This gives good a certain unifying strength: a more cohesive, visibly altruistic message. While good people often argue endlessly about tactical concerns, they’re all “on the same side”. That leads to a misperception that there’s an “evil side”. There isn’t. Evil fights good and evil. It lacks that cohesiveness. What is it, then, that makes evil strong, with enough power to oppose good with almost equal force? Most people aren’t “aligned with” good or evil. They’re in the weak, indecisive middle. Evil is more willing to recruit them. Good wants to recruit people honestly, and to treat them as equals. That doesn’t “scale” into the moral middle classes. Evil is much more comfortable recruiting them as inferiors, and with dishonesty. One time-honored recruiting tactic, for evil, is to choose some powerless (or nonexistent) subsector of evil and punish it brutally, thus appearing to weak souls to be an anti-evil force, and therefore good. The Lie works in a similar way. I don’t mean to imply that typical status-inflating convex lies are evil, but most people find them to be unethical. When The Lie smashes a caught liar on the rock, it persuades the weak-minded (often disproportionately represented in the Clueless ranks that are an organization’s muscle) that it stands for what is, if clearly not truthful, at least ethical. Of course, that’s a lie of its own.

That is the fundamental problem with convex dishonesty. It’s sometimes expedient, and sometimes a person’s career needs it, but over time it strengthens The Lie (one of whose sources of power is a fear of status-inflators and subversives; being one justifies it). When you run a convex fraud, you’re borrowing credibility on fraudulent terms (stealing) even though you have the (morally good) intent of paying everyone back multiply and making your creditors more than whole. The problem is that if you succeed, you validate that credibility currency that you stole, strengthening The Lie. If you fail, you give the Lie and the useful idiots a common enemy in you, also strengthening The Lie. I won’t call convex dishonesty unacceptable as a means of corporate survival and self-advancement, because it’s often just necessary in a trust-sparse environment, but it is corrosive to organizations. One way or another, this class of dishonesty strengthens The Lie.

An organization that wants to be healthy can’t tolerate The Lie. It needs to kill it at root. If it’s going to avoid generating one, it needs to create a trust-dense environment where the “bozo bit” is always off. There’s no alternative, because when trust sparsity is in effect, the only people who can succeed (and acquire credibility in the pseudo-meritocracy) are those willing to partake in convex dishonesty. This generates an undesirable selection pattern in which organizational success favors convex dishonesty, which evolves into all-out dishonesty. Over enough time, this moves away from the good-faith, “team-building” convex deception and toward outright “cooking the books”.

Solving It

This is why trust is so damned important, but trust is hard to manage at scale. You might trust your friends, but do you trust their friends? At some point, the warm-fuzzy social currency of trust needs to give way to structure. You actually need to go through the painstaking process of formalizing social contracts. If you’re running a company, what does the Employee Bill of Rights look like? You don’t need one at 8 people; you certainly do need one at 300. You need to set minimum trust, by which I mean giving employees enough basic credibility that they don’t need to perpetrate convex lies to grow and take risks. You also need to set maximum trust, both to crack down on the proto-managerial thugs who’d abuse the power vacuums left by formal management’s fundamental decency to extort others into supporting their career goals, and to give meaning to the minimum trust offered. (If people are “boundlessly trusted”, that just means you’ve been lazy and will rule ad hoc, because the concept makes no sense.) There’s work to do on where to set the posts, and while I think it’s obvious where I stand (trust people with their own time; distrust those who attempt to control others), I will flesh that out in later installments.

In Part 17, I discussed financial trust and the use of extreme transparency to assure investors, employees, and management that everyone’s being compensated fairly. In Part 18, I discussed industrial trust– do you trust your employees to get the work done, and to do it well?– and how it requires not micromanagement but a self-executive focus on driving toward Progressive Time. Now I’ve discussed the forces that conspire against trust. People either need, or think they need, convex dishonesty to get things accomplished. Organizations compensate by creating an internal social currency called “credibility”, which evolves its own pile of lies that become The Lie. The Lie generates trust sparsity as its beneficiaries fight for its upkeep, and the organizational self-loathing and dysfunction that come out of trust sparsity generate more convex dishonesty to overcome an increasingly strong Lie. The alternative is to Live in Truth– to name The Lie and stand in opposition to it. Individually, this is dangerous and impotent: you lose credibility, become “disgruntled guy”, then “fired guy”. Collectively, it’s powerful. If The Lie cannot discredit the group as a whole, it falls to pieces. Organizations, however, shouldn’t wait for whistleblowers to call them out. Reliance on individual heroism is not a strategy; it’s evidence of the absence of one. If you want a healthy organizational culture, you have to fight The Lie proactively. Living in Truth must be a central pillar of the culture.