Ambition tournament (more like a large game) in San Francisco on March 23

This post pertains to the Ambition tournament planned for the eve of Clojure/West.

What? We’ll be playing the card game, Ambition (follow link for rules), using a modified format capable of handling any number of players (from 4 to ∞).

When? Sunday, March 23. Start time will be 5:17pm (17:17) Pacific time. I’ll be going over the rules in person at 5:00pm. You should plan to show up at 5:00 if you have any questions or need clarification, since I’d like to start play on time. 

How long? About 2.5 to 3.5 hours at expected size (6-8 people). Less with fewer people, not much more with a larger group (because play can occur in parallel).

Where? At or near the Palace Hotel in San Francisco. I’ll tweet a specific location that afternoon, around 4:30 and probably not later.

Who? Based on expressed interest, I’d guess somewhere between 5 and 10 people. If only 4, we’ll play a regular game. If fewer, we may cancel and reschedule.

Is there food? We’ll break around 7:00pm and figure something out.

Do I need prior experience with Ambition? Absolutely not. There will be more new players than experienced people. Ambition is more skill than luck, but I’ve also seen brand new players win 8+ person tournaments.

Do I need to be attending Clojure/West to join? No. This isn’t officially affiliated with Clojure/West and you need not be attending the conference to play. It just happens that a lot of the people playing in this tournament will also be attending Clojure/West.

How should I RSVP? “Purchase” a free ticket on the EventBrite page.

Can I come if I don’t RSVP? Can I invite friends? Yes and yes.

Are there any prizes? Very possibly…

Will you (Michael O. Church) be playing? That depends on how many people there are (if this is a home run and gets 12+ people, I’ll be too busy with administrative stuff). I’ll probably sit in the final rounds but I won’t be eligible to win any prizes.

Corporate atheism

Legally and formally, a corporation is a person, with the same rights (life, liberty, and property) that a human is accorded. Whether this is good is hotly debated.

In theory, I like the corporate veil (protection of personal property, reputation, and liberty in cases of good-faith business failure and bankruptcy) but I don’t see it playing well in practice. If you need $400,000 in bank loans to start your restaurant, you’ll still be expected to take on personal liability, or you won’t get the loan. I don’t see corporate personhood doing what it’s supposed to for the little guys. It seems to work only for those with the most powerful lobbyists. (Prepare for rage.) Health insurance companies cannot be sued, not even for the amount of the claim, if their denial of coverage causes financial hardship, injury, or death. (If a health-insurance executive is sitting next to you, I give you permission to beat the shit out of him.) On the other hand, a restaurant proprietor or software freelancer who makes mistakes on his taxes can get seriously fucked up by the IRS. I’m a huge fan of protecting genuine entrepreneurs from the consequences of good-faith failure. As for cases of bad-faith failure among corrupt, social-climbing, private-sector bureaucrats populating Corporate America’s upper ranks, well… not as much. Unfortunately, the corporate veil in practice seems to protect the rich and well-connected from the consequences of some enormous crimes (environmental degradation, human rights violations abroad, etc.). I can’t stand for that.

On the corporation, it’s clearly not a person like you or me. It can’t be imprisoned. It can be fined heavily (loss of status and belief) but not executed. It has immense power, if for no other reason than its reputation among “real” physical people, but no empirically discernible will, so we must trust its representatives (“executives”) to know it. It tends to operate in ways that are outside of mortal humans’ moral limitations, because it is nearly immune from punishment, and a fair deal of its bad behavior will be justified. The worst that can happen to it is gradual erosion of status and reputation. A mere mortal who behaved as it does would be called a psychopath, but it somehow enjoys a high degree of moral credibility in spite of its actions. (Might makes right.) Is that a person, a flesh-and-blood human? Nope. That’s a god. Corporations don’t die because they “run out of money”. They die because people stop believing in them. (Financial capitalism accelerates the disbelief process, one that used to require centuries.) Their products and services are no longer valued on divine reputation, and their ability to finance operations fails. It takes a lot of bad behavior for most humans to dare disbelieve in a trusted god. Zeus was a rapist, and the literal Yahweh genocidal, and they still enjoyed belief for thousands of years.

“God” is a loaded word, because some people will think I’m talking about their concept of a god. (This diversion isn’t useful here, but I’m actually not an atheist.) I have no issue with the philosophical concept of a supreme being. I’m actually talking about the historical artifacts, such as Zeus or Ra or Odin or (I won’t pick sides) the ones believed in today. I do have an issue with those, because their political effects on real, physical humans can be devastating. It’s not controversial in 2014 that most of these gods don’t exist– and it probably won’t be controversial in 5014 that the literal Jehovah/Allah doesn’t exist– but people believed in them at one time, and no longer do. When they were believed to be real, they (really, their human mouthpieces) could be more powerful than kings.

The MacLeod model of the organization separates it into three tiers. The Losers (checked-out grunts) are the half-hearted believers who might suspect that their chosen god doesn’t exist, but would never say it at the dinner table. The Clueless (unconditional overperformers who lack strategic vision and are headed for middle management) are the zealots destined for the low priesthood, who clean the temple bathrooms. Not only do they believe, but they’re the ones who work to make blind faith look like a virtue. At the top are the Sociopaths (executives) who often aren’t believers themselves, but who enjoy the prosperity and comfort of being at the highest levels of the priesthood and, unless their corruption becomes obnoxiously obvious, being trusted to speak for the gods. The fact that this nonexistent being never punishes them for acting badly means there is virtually no check on their increasing abuse of “its” power.

Ever since humans began inventing gods, others have not believed in them. Atheism isn’t a new belief we can pin on (non-atheistic scientist) Charles Darwin. Many of the great Greek philosophers were atheists, to start. Buddha was, arguably, an atheist and Buddhism is theologically agnostic to this day. Socrates may not have thought himself an atheist, but one of his major “crimes” was disbelief in the literal Greek gods. In truth, I would bet that the second human ever to speak on anthropomorphic, supernatural beings said, “You’re full of shit, Asshole”. (Those may, however, have been his last words.) There’s nothing to be ashamed of in disbelief. Many of the high priests (MacLeod Sociopaths) are, themselves, non-believers!

I’m a corporate atheist and a humanist. My stance is radical. By most accounts, these gods (and not the people doing all the damn work) are the engines of progress and innovation. People who are not baptized and blessed by them (employment, promotion, good references) are judged to be filthy, and “natural” humans are held to deserve shame (original sin). If you don’t have their titles and accolades, your reputation is shit and you are disenfranchised from the economy. This enables them to act as extortionists, just as religious authorities could extract tribute not because those supernatural beings existed (they never did) but because they possessed the political and social ability to banish and to justify violence.

I’m sorry, but I don’t fucking agree with any of that.

Some astonishing truths about “job hopping”, and why the stigma is evil.

A company where I know at least 5 people just went through a massive, and mostly botched, reorganization. Details are useless here, but I’m struck by the number of high-functioning people who developed physical and mental health ailments (at least four that I know of, and probably tens that I don’t) in the chaos. It got me thinking: why? Now, I understand why an individual would fear job change or loss, because it’s a big deal to be jobless when the rent is too damn high. What I mean is: why do we, as a society, care enough about work to make ourselves sick? We wouldn’t even have to do less work to avoid sickness, because most of the stuff that makes people ill is socioeconomic chatter and anxiety that’s not productive; we could have just as much stuff (or much more) just by working efficiently. That’s a big economic problem and a bit beyond the scope I can tackle here. I’m going to focus on our biggest grunt-level issue (the “job hopper” stigma), one that’s holding us back morally, industrially, and technically.

I’m completely on board with people wanting to do their jobs well (eustress) and be recognized, because ambition and a work ethic are admirable and I don’t have much in common with those who lack them, but how on earth is anything we do in white-collar work remotely important enough to deserve a sacrifice of health? I can’t come up with a good answer, because I don’t think there is one. Except when lives are at stake (e.g. medicine, war) there’s simply no job that’s worth giving up one’s mental or physical well-being. Does the typical private-sector project ever merit the risk of developing a long-term condition like panic disorder or depression? “Fuck no, sir” is the only right answer.

To be fair to the executives involved, it’s probably impossible to carry out any corporate action, good or bad, without the stress having some effect on people’s health. You can’t expect a CEO of a large company not to do something he considers necessary because it might stress someone out. That’d be a ludicrous demand. Some people get stressed out by the tiniest things. Most of the blame (for people’s health issues, at least) doesn’t go to the top. An executive’s perspective is that jobs change or end and that people should just deal with it. I don’t disagree with that at all! I believe fully that society should implement a basic income and I think it’s utterly criminal how this country has tied health insurance to employment for so long– I think people should suffer far less from job volatility than they do, in other words– but I think that this volatility is essential to economic progress. Sometimes, the cuts are going to be initiated by the employer (preferably with a generous severance and continuing career support). Other times, the employee is quick enough to see the dead end ahead and leaves on her own. There is, unfortunately, a problem. There’s something that makes job changes a million times more stressful, and it’s the creepy realization that, after a certain number of employment changes, a person is branded as a hot mess and, at least for the top positions, untouchable. That’s the dreaded job hopper stigma, and it needs to die. I’m here to slaughter it, because God works through people. It won’t be pretty.

In the case that inspired this essay, I don’t even know if those four people (who became sick due to reorg-induced stress) wanted to leave their company. One of them seemed quite happy there. What made them ill, in my mind, is not that they work for a bad company (they don’t) but the feeling of being trapped: the knowledge that, for the first 18 months on a job, one pretty much cannot leave it no matter what happens. One could be severely demoted in the first month of a new job and it would still be suicidal to leave.

Why is there a job hopper stigma? In part, it’s because HR offices need some objective way to “stack rank” external candidates, and dates of employment are the hardest to fudge. Titles get tweaked or bought (“exit promotions” for the employee to go gently with meager or no severance, or inflated titles in lieu of raises, and then there are the self-assigned ones) and accomplishments get embellished (or just plain made up) and people can claim whatever they want in salary and performance bonus (and sue an employer who contradicts them, for misappropriating confidential information). Dates of employment, however, are objective and readily furnished by even the most risk-averse HR offices. So the duration of the job becomes the measure of how successful it was. Under 6 months? That person was probably completely useless and fired for performance, and certainly didn’t accomplish anything. (Some of us are “10x” players who can achieve a lot in 6 months, but HR judgments are based on the mediocre, and not on us.) 6 to 17 months? Needs explanation, and the candidate will have to sell herself hard just to get back to par. 18 to 47 months? Probably a middle-of-the-pack performer, passed over for promotion but still worth talking to… if there aren’t other candidates. (I’ve heard that long job tenures, beyond 6 years without rapid promotion, can also be damaging, but that has never been a problem for me. I’m not one to stick around and stagnate.)

One short-term job (under 15 months) is seen as forgivable, but two becomes “a pattern” (note: “a pattern”, in HR-speak, means “lather my frenulum, bitch”). At three, you’ll spend 30% of your time on job interviews explaining away your past, leaving you at 70% capacity to convince them of your fit and potential in the future. (In other words, you’ll spend so much energy proving that you’re not bad that you’re enervated when it comes to what should actually matter: showing that you’re good.) At four, you’re branded a total fuckup and many HR departments won’t even return your calls, the smug cockbags. The “job hopper” stigma is real and it needs to die, now. Any dinosaurs who cling to it can, for all I care, go extinct along with it. The white-collar employment world’s obsession with shame and embarrassment and social position, instead of excellence and progress and positive-sum collective efforts, is the single thing most responsible for holding us back as a society.

Here we go…

I’m 30 years old and I’ve had a ridiculous number of short-term jobs. I’ve stopped being ashamed of my past, because I’ve done little wrong and what wrong I have done has already been over-punished. I’ve paid my fucking debts. I’ll go over my past, just to show how common it can be for a good person to just be unlucky. Nothing I’m about to talk about is atypical in volatile industries like finance and tech. Jobs end, people leave, turnover happens, and often it’s no one’s fault.

Job hop #1 was a small company that just didn’t have much need for high-power work, and I wasn’t interested in the regular, day-to-day stuff once I’d met their R&D needs, so I left (a few R&D projects completed, stellar quality) when my work was done. #2 was health-related; the real-time nature of the work mandated an open-plan office– a rare case where that layout was beneficial and necessary– and my panic issues weren’t nearly as manageable then as they are now. (These days, 20 minutes after a panic attack I am fine and back to work.) No one’s at fault, and I still recommend that firm highly when asked about it, but I chose to leave. #3 was Google, a good company I’ve made my peace with, but where I had a manager so awful that the company formally apologized. #4 (which I’ve taken off my resume) was an absolutely shitty startup that hired three people (of whom I was one) for the same “informal VP/Eng” role, which was bad enough, but then asked me to commit felony perjury (at 3.5 months) against 10 colleagues with “too much” equity, and fired me when I refused. #5 was a large-scale layoff in a terrible year for that company, and sad because I know management liked me (I got top performance reviews, but my project was cut) and I liked them. They treated me well while I was there and afterward. That’s 5 jobs that ended before the 18-month mark, and while three (#1, 2, 5) involved no managerial or corporate malfeasance, only one of them (#1, the most innocuous) could be legitimately called my fault.

Needless to say, that stuff gets asked about during job interviews. It’s annoying, because it has me starting from a disadvantaged position. After all, I broke that corporate commandment: thou shalt not leave a job before being in it for 18 months. Even in 2014, the same senile Tea Partiers who want “government hands off my Medicare!” insist on upholding their archaic “job hopper” stigma and write uninformed, syphilitic blog rants about how one should never hire a “job hopper”.

The image of a job hopper is of a mercenary, young executive born in the Millennial generation (1982-2004) and hustling his way up the corporate ladder by exploiting the winner’s curse. Because the Boomers did such a good job of raising this generation (ha!) these suburban-bred “trophy kids” have never known adversity (ha!) and they have a keen ability to exploit favorable (again, ha!) market conditions. Instead of “paying dues” and suffering like they’re goddamn s’posed to, “kids these days” jump for a better opportunity at another firm. (In reality, job hops reset the clock and mean that one will have to pay dues and prove oneself again, but most Boomers don’t realize that, because they never had to live the way we do.) On a side note, in no way did the Boomers steal the future from the Xers and Millennials. That stuff about housing prices and college tuition costs and non-dischargeable student debt and health insurance premiums and adjustable-rate mortgages and the death of private-sector basic research pushing PhDs into formerly BA-level jobs is Soviet propaganda designed to make capitalism look bad. It’s all lies, I tell you, lies!

Boredom

Job volatility is more of an issue for young people now than it was 30 years ago. Layoffs happen, positions change, redundancies emerge, and projects get cancelled. It happens, and it’s not even a bad thing. Economic progress virtually requires change that ends jobs. The failure is more in the fact that our society is failing to manage this volatility, by trading up (or, thanks to outsourcing, trading down to cheaper labor) instead of training up. This has all been discussed ad nauseam, so let me get into a (possibly generational) subjective factor that hasn’t been discussed: increasing boredom. If I’m right, this would make it especially difficult for those who should be high-performers.

I suspect that, today, low-level white-collar jobs are more boring. We’ve achieved a level of boredom in white-collar work so impressive that many people prefer fucking Farmville. Now, that must be some boring work! People will disagree and point quickly to new technology that is supposed to automate all the grunt work away. I agree that technology makes this possible. I don’t know if it’s actually being used that way. The social expectation of 40 hours’ work per week, minimum, to stay in the organizations that employ most of the middle class is actually generating a lot of work that is pointless and boring. Work expanding to fill the allotted time (which may be 50 to 70 hours per week in organizations claiming to have “performance-oriented” cultures) seems to be having some pretty serious consequences for people’s mental health.

I’m a technologist, so I love what computers can do, but much of what they are actually used to do is, to be frank, pretty fucking dismal. Computers make excellence possible in new ways, but most executives just want to make mediocrity cheaper (often externalizing costs, which is fancy speak for “robbing people”) to get a quick bonus. Information technology has been used as a centrifuge of work, enabling organizations to separate the labor that management thinks it needs (work without executive sanction, no matter how important, gets ignored in a closed-allocation company) into strata and specialties and keep people, as much as possible, working on the same goddamn thing every day. The monoculture and the restraining permissions systems are supposed to limit operational risk, but they actually introduce a long-term, subtle, but existential risk: mediocrity, which sets in as people get bored (on the aggregate scale) and check the fuck out.

Most people I know want heterogeneity in their work: a mix of collaboration and isolation, a portfolio with high-risk creative projects but also low-risk (but still fulfilling) “grunt work” that is reliably useful, and enough variety in projects to build a unique personal understanding of what they’re doing. Modern project management doesn’t respect that. The work gets chewed down into boxy little quanta (Jira tickets! user stories!) and the individual worker ends up mired in a psychological monoculture. This happened with physical labor about 200 years ago, but the combination of information technology and upper-management psychopathy has been, over the past 30 years, doing the same to much mental work. We’re seeing an epidemic of a mental-health issue that, until recently, was pretty rare: extreme, soul-crushing boredom. I’m not talking about spending two hours in a traffic jam. That sucks but it’s a one-off. Nor am I talking about mundane, “unsexy” subtasks in more interesting long-term projects, because even the best jobs are 90% mundane, hour by hour– even the best software projects involve lots of debugging– and we’re fine with that. Nor am I referring to the nagging “I might be wasting my life” sensation that people get sometimes but can ignore when work needs to be done. That’s not what I’m talking about when I talk about boredom. I’m talking about a psychiatric “brick wall” that is probably neurological (connected to a miserable disengagement response that, when it fires without a known context or too often, is called clinical depression) in nature.

If you’ve never experienced it, here’s a task that might bring on the “brick wall”. You must draw (by hand) a 56-by-56 grid of squares, 0.5 inches (1.27 cm) on each side, with a 1% side-length tolerance (if you fail, scrap the drawing and start over) and a 2-degree angle tolerance. Each box must be drawn individually (i.e. it’s not legal to draw 57 horizontal lines and 57 vertical lines), but with no duplicated edges. Each segment connecting grid points must have an arc length no more than 1.005 times that of a perfectly straight line (a “straightness” requirement). You may use a ruler for measurement but not as a straight-edge: the segments must be free-drawn. Finally, boxes must be drawn in order from the upper left, left to right, then top to bottom. As they are completed, the boxes must then be filled with the numbers 1, 2, …, 3136 in that order. (If any work is done out of order, you must redo the entire task.) The digits must be legible, they must fit inside their box, they must be aesthetically pleasing (vague requirement) and reasonably close in size as well as centered within the box. As a last criterion, the large 56-by-56 square must meet a 0.02% side-length tolerance (on the 28-inch total size) and a 0.5-degree angle tolerance. (If this is not met, discard it and start over.) You will be in a noisy environment, you will be yelled at from time to time– and you are expected to smile and say “Hello, sir”, make eye contact for at least 1 but no more than 4 seconds– and you will be periodically interrupted (context switch) and asked to solve simple math and spelling questions. (If you get any wrong, discard your progress on the grid and start again.) People will be eating, drinking, playing sports, and possibly having sex in front of you, and you are not to interact with or even look at them. (If you do, discard your progress on the grid and start again.) Finally, if you give up or fail to complete the task after 6 hours, your punishment (humiliation) is to call ten random people on your contact list and make “oink” sounds for 15 seconds, then say, “I am an incompetent fuckup and I failed at the most basic task”, and then you must ask them for money (simulating being fired on bad terms). You can never explain, to them, the context in which you humiliated yourself thus.
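For a sense of how stringent that spec is, here’s a quick back-of-the-envelope sketch (Python; purely illustrative and not part of the task itself: the constants come from the spec above, and everything derived from them is plain arithmetic).

    # Back-of-the-envelope arithmetic for the grid task described above.
    # Constants come from the spec; the derived figures are simple math.
    CELLS_PER_SIDE = 56        # 56-by-56 grid of boxes
    CELL_IN = 0.5              # each box is 0.5 inches on a side
    SIDE_TOL = 0.01            # 1% tolerance on each box's side length
    FIGURE_TOL = 0.0002        # 0.02% tolerance on the outer square
    STRAIGHTNESS = 1.005       # maximum arc length relative to a straight segment

    boxes = CELLS_PER_SIDE ** 2                            # 3136 boxes to draw and number
    segments = 2 * (CELLS_PER_SIDE + 1) * CELLS_PER_SIDE   # 6384 distinct free-drawn edges
    edge_slack_mm = CELL_IN * SIDE_TOL * 25.4              # ~0.13 mm of allowed error per edge
    outer_in = CELLS_PER_SIDE * CELL_IN                    # the 28-inch outer square
    outer_slack_mm = outer_in * FIGURE_TOL * 25.4          # ~0.14 mm of allowed error across 28 inches
    max_arc_in = CELL_IN * STRAIGHTNESS                    # 0.5025 inches of arc per 0.5-inch edge

    print(boxes, segments, round(edge_slack_mm, 2), round(outer_slack_mm, 2), max_arc_in)

In other words: freehand accuracy on the order of a tenth of a millimeter, sustained over six thousand strokes and three thousand numbered boxes, under constant interruption.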

I would guess that the percentage of people who could bring themselves to do this for a reward of, say, $10,000 is very small. I’m not sure that I’d do it for $100,000. The task sounds easy. It is, physically and mentally, within the ability of almost anyone, but it’s psychiatric torture. For me, it was an anxious experience just to write this paragraph.

Where I believe surprisingly many people would fail (or, at least, be tempted to do so) is on the “in-order” requirement on the completion of the boxes and digits. Ever notice how, when performing a tedious task, there’s a tendency to inject some creativity into the process by, say, filling the boxes in, or completing the form, out of order? That requirement may seem stringent, but that kind of conformity is not uncommon in corporate environments where any display of individuality is taken as self-indulgent and an arrogation of rank. (That’s why I put it there.) Few employers would force a worker to redo the whole project over such a small departure from expectations, but the “why did you do it that way?” wolf-snap (microaggression) that some people direct at any expression of individuality is more than enough punishment to cancel out any psychological reward from doing the task.

If you’re a Boomer executive who thinks boredom is an attitude problem and not an involuntary, neurological brick wall (and one that especially affects the most capable) then get out a pen and draw me a fucking 56-by-56 grid.

So, on boredom, I’m talking about a vicious cycle of involuntary and escalating anxiety that emerges from the cognitive dissonance of a mind forced to contend with an unending slew of work it finds pointless. This is something that most white-collar Boomers haven’t experienced, because the IT-fueled work centrifuge hadn’t been perfected yet, and because their corporate ladder game was (to be frank) just a lot less competitive, but that most Millennials will. That cycle starts with the low-level social anxiety that everyone experiences at work. (People would be “on edge” to find two strangers in their car, much less a hundred in their career.) Even in great companies, that low-level anxiety is there; it’s just that the work offsets it. Novelty can offset it too, but under psychological monoculture, the reward and novelty stop and the only place to go from the anxious state is into boredom. Anxiety and boredom reduce performance, which causes further anxiety, and so forth, until frank depressive symptoms set in, it’s blamed on the employee, and the employee is let go, usually with a dishonestly-named “performance improvement plan” (PIP) because most companies, these days, are too cheap to pay severance. (Side note: even when performance problems do exist, prefer severance over PIPs. Three months’ pay is astronomically cheaper than having a “walking dead” employee ruining morale for 30 days.) Even if that person has normal mental health in general (“neurotypical”) an employee who’s been through the stress and humiliation of a PIP is probably suffering from diagnosable clinical depression and, if evaluated based on his state at that point, will have enough of a health story to sue (it’s not a good idea; he may not win and it will be terrible for him regardless, but it fucks the company) or push for severance. That’s a whole lot of ugliness that just shouldn’t have to happen.

That system– the typical corporate war of attrition based on social polish and boredom tolerance– doesn’t even work on its own terms, because this malignant nonsense generates no profit. Just as obesity is (in my opinion) only 10% personal fault and 90% the result of systemic issues with our food supply and culture, I tend to think the same of work boredom. Ninety percent of it, at least, is the fault of employers. They could structure themselves in a way that gets more value delivered, keeps employees happier, and doesn’t induce boredom. And don’t get me started on defective workspaces such as open-plan offices. People subjected to intermittent distraction and unreasonable anxiety, at subliminal levels, will often experience work as “boring” even if the material itself is not the problem. Plenty of studies have shown that when people are subjected to the chronic, low-level stresses of a typically defective work environment (ringing phones, people shifting in their seats, personal space under 200 square feet, intense conversations nearby) they are aware of their underperformance but often attribute it to “boring” material, even when groups in better settings describe the same work or reading as interesting.

Boomers see boredom as an attitude problem. “If you’re bored, read a book or go outside!” (“But stay off my damn lawn!”) The stereotype of a bored, “entitled” Millennial is of a mid-20s weakling who just refuses to give up on his adolescent fantasy of an easy job where high pay and recognition, “just for being awesome”, come without sacrifice, and whose rejection of even the slightest compromise with “the real world” leads to parasitism or underperformance. Certainly, those weaklings exist. They’re great anecdotes for shitty human-interest news stories using parentally-funded fuckups with impractical advanced degrees as exemplars of youth unemployment. They’re not common. They’re certainly not the norm. The boredom I’m talking about isn’t, “They won’t give me a corner office so, fuck ‘em, I’ll slack” boredom. It is an involuntary neurological response that occurs because the mind rebels against being used in a way that it finds perniciously inefficient and insultingly pointless. And here’s why it’s such an issue for business: the smartest and most creative people are the ones who will fail first. The deeply unethical people who will actually kill your company are psychopaths, and because they can allay their boredom by manipulating others, shifting blame, and causing destructive drama, they get the least bored and fail last (i.e. they win the corporate war of attrition).

The reveal

Here’s the truth about job hopping, as a person who’s had a lot of bad luck and therefore plenty of “hops”. It fucking sucks. It’s not a fun life. I’ve done it enough and I’m a goddamn authority on this topic. Don’t get me wrong. There are plenty of cases where it’s better than the alternative (stagnation) to “hop”. But I don’t think anyone goes into a job with an attitude of, “I’m going to work here for 291 days, waste half a year learning some terrible closed-source codebase, accomplish very little of note, burn bridges by leaving in the middle of an important project, get a pathetic lateral move for my next gig, and do the same thing all over again.” People go into jobs wanting 5-year fits. Often, employers don’t realize how much people would prefer not to move around. If they just gave a little bit of a shit about their people, they’d be paid handsomely in loyalty. Take Valve’s culture of open allocation. Why do people work so hard in an open-allocation regime when they could “more easily” (I’m not actually sure this is true) get away with slacking? The organization is defensible. The workers give a shit, because they know they’ll be at risk of ending up in crappy closed-allocation jobs if the company fails or has to let them go. Most organizations rot because there’s a bilateral not-giving-a-shit between management and labor; defensible organizations like Valve, however, are capable of organic self-repair. The people don’t just want to succeed “at” the organization; they have a genuine commitment to keeping the principles it represents alive. Organizations like that, alas, are very uncommon. Most businesses exist only for one reason (shareholder enrichment) that barely includes the employees, and are worth leaving when they cease to suit the individual’s ambition.

As I said before, job hoppers get stuck paying more dues (even at executive levels, everyone has to pay dues to gain credibility in a new firm, because explicit authority doesn’t go very far) and, in general, see less advancement than they’d have seen if they’d been able to stay put. Trust me that every sane person would rather do a little grunt work (appreciated grunt work with mentorship, at least) than be unemployed, interview for jobs at various (mostly shitty) companies, or spend 8 hours per day in the incoherent, tactical socialization process called “networking”. People want to work, to be challenged, and (in general) to stay put for a while and recover their energies. Work, if appreciated and meaningful, is energizing. Moving around is draining. Watching others accomplish more because they get the elusive “5-year fit” (not just a 5-year job, because it’s easy not to get fired, but a genuine 5-year stream of better projects), while not having that yourself, is demoralizing. People don’t want to have to prove themselves to a new set of people every 14 damn months: that’s just hell.

Why’s there so much hopping? Because most jobs are terrible, that’s why. Let’s just get that out there. There are managers who take their reports’ career goals and advancement seriously, but most don’t. There are companies dedicated to excellence rather than executive ego-stroking, but most aren’t. There are open allocation companies where people have a real say in what they work on and are trusted to “vote with their feet”, but most are closed-allocation shops where they’re just “assigned” to tasks. Most jobs are dead-end endeavors that become pointless after 12 months, and most organizations are not (in the meaning of the word used above) defensible. Sturgeon’s Law (“90 percent of everything is crap”) is utterly true in the corporate world. I’ll spare the reader any pompousness I could put here about Poisson distributions and get to the fucking point: statistics back me up that good people can just be unlucky and have a string of 3 or 4 or many more short-term jobs and have it not be their fault.
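To make that concrete, here’s a minimal sketch (Python) of the arithmetic I’m waving at. The inputs are assumptions picked purely for illustration, not data: suppose each job, independently, has some fixed chance of ending early for reasons outside the worker’s control (layoff, reorg, funding loss, toxic manager).

    from math import comb

    def prob_at_least(n_jobs: int, k_short: int, p_bad_luck: float) -> float:
        """Probability that at least k_short of n_jobs end early through no
        fault of the worker, where each job independently ends early with
        probability p_bad_luck (a plain binomial tail)."""
        return sum(comb(n_jobs, k) * p_bad_luck**k * (1 - p_bad_luck)**(n_jobs - k)
                   for k in range(k_short, n_jobs + 1))

    # Assumed, illustrative inputs: 6 jobs so far, and a 30% per-job chance of
    # an early ending that is not the worker's fault.
    print(round(prob_at_least(6, 3, 0.30), 2))   # ~0.26

Under those (made-up, but not crazy) assumptions, roughly one in four equally good people racks up three or more short stints by luck alone, which is the point: the “pattern” HR sees is often just variance.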

This role of bad luck in the “job hopper” death spiral is enhanced by autocorrelation. (Failures that are perceived as independent and indicative of personal shortfall might be subtly connected.) First, short job tenures tend to erode an applicant’s desirability and beget low-quality jobs (that are more likely to be short-term) in the future, a case of that job-hopping dynamic perpetuating itself. Second, there’s the Welch Effect, named for Jack Welch’s popularization of “rank-and-yank” (in technology, stack ranking). The Welch Effect is that, in a large layoff or in stack-ranking (which is just a dishonest, continual layoff dressed as being “performance-related” to save on severance) the first people to be fired are new or junior members of underperforming teams (who had the least to do with the team’s underperformance). Since stack-ranking (and, less so, layoffs) tends to have a “no infanticide” rule, people in their first 6 months are usually fine, but months 7 to 18 are extremely dangerous, and it’s not good to spend much time in them. The Welch Effect also enhances the “job hopper” stigma, because people with damaged resumes are more likely to end up on teams and projects that can’t be filled internally, which carry a high probability of their being Welch Effected.

The “job hopper” stigma doesn’t always keep people from being able to find new gigs. What it does is shuttle them into second-tier jobs, long-shot companies, and teams that companies have trouble staffing internally. The only way out that seems to work is to accept that job searches will be very long (6+ months, as most first-rate jobs aren’t available to a “job hopper”) and “reverse interview” companies aggressively. Then, you’ve got a good chance of eventually (how are your finances? can you hold out for a few months?) getting a job and a company that are good, at least at the time you take the offer. Whether it stays good for 18 months and frees you, that’s another matter.

Background, but I’ll spare you the shitty teenaged poetry.

Now, I didn’t go to Stanford. When I was 17, I wanted to be a poet. Math was my backup plan. Computer programming? I grew up in Central Pennsylvania and for someone to make even $75,000 in software was unheard-of. A “programmer” was a guy who wrote cash-register interfaces in COBOL. Doctors, lawyers, executives, and these far-off and vaguely disliked people called “investment bankers” in New York made the money, and it wasn’t until 2007 (age 24) that I realized one could be a full-time coder (that’s what quants are) and be paid decently. My parents pushed me to major in math (I’m very glad I listened to them, because I’m a lot better with artificial neural networks than at slam poetry) but my first love was creative writing. Later, I did some (tabletop) game design and created Ambition, a great card game that will never make me a cent. Anyway, on the writing, I resisted (to my career detriment) early specialization and so, even though I had Ivy options– I didn’t get Harvard, but was offered a spot at another Ivy by a brand-name professor, and very likely would’ve gotten MIT or Stanford– I went to a liberal arts college in the Midwest (Carleton) because I figured that the experience (and rural setting) would be more conducive to the reflective mind a writer needs.

I got a fantastic education, but the exposure to the machinery of the working world (i.e. how very rich people think, how the gigantic machines they create tend to operate, and how to exploit those for personal benefit) was just not on the out-of-class curriculum as it would have been at, say, a Stanford or Harvard. Those skills turn out to be very important even in the “meritocracy” of software. There is no reason for me to be bitter about this now, and I have no regrets about the choices I’ve made, but there are “job hops” I could have avoided if I’d learned, much younger, how to spot warning signs. I had to learn how to fight (and I did so, in the best place for it, which is New York) on my own, and I got fucking good at it too– that’s why I help other people fight, costing malignant executives whom I’ve never met to the tune of millions per year– but it took years of trial and error, because I had to learn all of those skills by myself. I became the mentor I wish I’d had.

I’ve been in the private sector for almost 8 years, and I can count 39 months that I wouldn’t trade for that time back in youth. The rest was junk. No career value, nothing learned, just shitty grunt work I did because I actually would have suffered more without an income. What’s amazing, though, is that my ratio (41.5 percent) is extremely good by the standards of people from middle-class backgrounds. The typical state-school kid from Iowa (let’s say he is of comparable talent to me, though talent doesn’t really have much to do with it) might have had 15 months of real work by my age. If he was actually poor and couldn’t go to college, I don’t know that he’d have any real work, in the software world, by age 30. If you’re from a well-connected family, you’ve got a decent chance of getting the “Why are you wasting your time on bullshit?” intervention/mentor that everyone hopes for, the sort of thing that 5-year fits are made of, on the first gig out of school. Everyone else has to job hop, roll the dice, self-mentor and keep trying.

I’ve said before that open-plan offices are backdoor health/age discrimination, and the “job hopper” stigma is backdoor class discrimination. People from wealthy families (I’m not talking about Pennsylvania upper-middle-class like me, but the well-connected “1%”, and it’s more like 0.2 percent) can relax a bit and wait for opportunities to come to them. They’ll probably get some grunt work, but if they perform poorly at it while networking and building side projects, they’ll get the benefit of the doubt (“it’s OK, they had other things in their lives”) and fail up. They can stay in one place for 5 years, because others will come to them wherever they are. Everyone else, on the other hand, has to hustle and play the game. The most important aspect of that game is recognizing a dead end and cutting losses. Often the most ethical thing (I’ll get to ethics, shortly) a person can do when at a dead end is, for the benefit of both sides, to extricate herself from the situation. (It’s possible to exploit dysfunction for personal benefit, but it’s a shitty thing to do. Leave compassionately and move on.)

This is where I perceive an inconsistency. Why is job hopping really so despised? Is it because, as claimed, the job hoppers are showing the flippant disloyalty that comes from the high status afforded to in-demand professionals? Or is the real (yet unmentioned, because it’s socially unacceptable) reason for the stigma, as I suspect, that this tentative attitude toward work, and hard-nosed realism about the value of what one is doing, is a cultural and social mark of The Poors, for whom useless work historically had devastating consequences? Are job hoppers high-flying mercenary yuppies enjoying undeserved success, or are they ill-bred uppity serfs who lack the blue-blooded couth to get upper management invested in their careers and merit a 5-year job? Which one is it: are they high-status assholes or low-status assholes? The people who still believe in this antiquated stigma have to pick a damn side!

Ethics of job hopping

One thing that is said about job hoppers is that they’re “disloyal”. I don’t agree. There’s a difference between being loyal or not loyal to someone (loyalty must be earned) and being constitutionally disloyal. I will not harm a stranger but I have no loyalty to him, and that’s not a character flaw on my part. To me, there’s just no use in being loyal to a company. A person can earn loyalty, for sure, but a company is just a pile of other people’s money. As for whether job hoppers are constitutionally disloyal, I think that’s very uncommon, in fact. It’s a brutal charge, because a constitutionally disloyal person is likely to be unethical. Is job hopping unethical? Absolutely not. Second to masturbating, it’s one of the most honest things people do: walking away from a relationship that has begun to fail, before it hurts both sides.

An unethical person has no qualms about drawing a salary while producing no useful work (due to mismanagement, boredom, or poor fit) while ethical people go insane in such circumstances. Ethical people worry, when that starts to happen, about getting fired and the shame and embarrassment. (This ties in to the boredom-anxiety loop I discussed above.) Unethical people figure out who’s politically weakest in the organization and who they should blame, should they either underperform or be unable (due to advanced environmental dysfunction) to perform. Ethical people leave jobs when they find themselves becoming useless. Unethical people ingratiate themselves to upper management and acquire power, turning organizational dysfunction toward their own benefit. Ethical people focus on skill growth and leave jobs if they risk stagnation. Unethical people realize that connections are more powerful than skills and focus on the players, not the cards. In general, unethical people are far more likely to climb one ladder instead of “hopping”, because unethical people generally understand social dynamics far better than average people (this may be survivor bias, with unethical and socially unskilled people getting incarcerated, leaving only the smooth scumbags in play) and that trust is acquired over time. Ethical, ambitious people want universal knowledge so they can add more value to the world. Unethical people want to learn the people so they can exploit their weaknesses. That’s how it actually works.

To a middle manager, job hoppers can be irritating. Middle management isn’t a fun job because there is pressure from above and below, and unexpected personnel changes can be very disruptive. Moreover, that effect is quite visible: the action (of leaving for another job) is wholly initiated by one person and, once she has left the organization, she can be blamed without consequence (she’s not there to hear it, and her job’s not in danger). I know for sure that, in many of my “hops”, people were disappointed that I left. This is just standard business friction, though. No one’s doing anything wrong. That people leave companies is not an ethics problem.

The nature of social stigma in general

I would argue that for most social stigmas, the reason they exist is that people tend to correlate (falsely and uselessly) things that are irritating or socially unacceptable with the unethical. They want to believe in a world where the villains look like villains (instead of like regular people, as they actually do). The small betrays the large, they presume. The guy who shows up 15 minutes later than everyone else is a slacker. The guy who complains about minor things is probably subversively and intentionally undermining morale. The woman who doesn’t return small talk because she’s fucking coding is a frigid bitch. In reality, nothing works that way at all, but many people (and this is more true of people in groups) are stupid and shallow and in their desire to believe in a world where villains look like villains, they lash out at those who slightly annoy them.

In actuality, most people who do bad things get away with them, at least in the short term. I believe in the Eastern concept of karma– each action leaves an imprint on the mind, and the fact that we have no idea what our minds do after death requires humility– so I might argue from the other side, over the very long term. However, in the social theater where humans seeking short-term gains operate and where the punishments are coarse rather than subtle, most people who do bad shit get away with all of it. By the time people with the power and desire to punish them know what these unethical scoundrels have done, they’ve moved away and usually “up”. My pointing this out isn’t to encourage bad behavior, because the gains of most unethical acts (again, counted in raw numbers) are petty while the risks are substantial. A 99% chance of not getting caught in stealing a candy bar doesn’t make it worth it, given the penalties.

I mentioned the self-healing properties of defensible organizations like Valve, which operates under open allocation and gives the rank-and-file a legitimate reason (projects they enjoy, and not wanting to work for crappy closed-allocation companies) to participate in its upkeep. Those companies fix themselves faster than they rot, but they’re also rare because most executive teams lack the coherence, vision, and (frankly) the interest to commit to a self-repairing organization. So, most organizations are not going to commit to employee well-being any more than they have to, and won’t be defensible. They’ll rot, and that’s accepted (because executives will enrich themselves along the way), but they’d prefer the rot to be slow. This requires targeted aggression toward the causes of organizational rot, which are (and I agree with them on this) ambitious, disloyal, and unethical people.

Now, unethical people can beat (or, quite often, use as a personal weapon) the social immune system of any organization. They can pass reference checks, establish social proof, and avoid having their bad behavior catch up with them for a long time. Some (the less capable) may be shut down, but other psychopaths will evolve faster, just like cancer cells or harmful bacteria, and go effectively undetected. The organization will rot, but no one will be able to say why it is rotting, because the people doing the damage will make sure that the only people who know are either powerless or complicit. What everyone sees, as the edifice starts to shake and crumble, is the exodus of talented people. That’s the visible rot. It’s all those damn job hoppers jumping ship when things get “difficult” (which usually means “hopeless”). Waves of departures (“job hoppers”) may be the visible proximate cause of corporate collapse– and that’s why they are blamed for things falling apart– but they’re rarely the ultimate cause.

Let’s ask ourselves if these “job hoppers” fit the bill of the toxic person (ambitious, disloyal, and unethical). Are job hoppers ambitious? Some are, some are just fed up. That’s irrelevant, however, because a functioning organization can make a home for ambitious people. Are they disloyal? I would say they’re simply “not loyal”. Disloyalty suggests a moral shortfall. Not-loyalty should be the default afforded a large organization that wouldn’t reciprocate any loyalty given to it. Like religious faith, “loyalty” is not a virtue when unqualified. It is fine to have religious faith, and it is fine not to have it. The same goes, in my mind, for organizational loyalty. Are these “job hoppers” unethical? That’s the only one of these three questions that actually matters, and I’ve established the answer to be “no”. Among the discontents (and, in a dysfunctional organization, that’s over 75% of the people) they’re some of the most ethical ones. They’ve realized that there is no place for them, and left. The real anger should be at the things that happened (and the people who caused them) 3 or 6 or 12 months ago that caused so many good people to leave.

If you stop promoting from within, soon you can’t.

I’ve been around and inside of tech companies enough to know that, as a general rule, they don’t promote from within. Why? One VC-specific reason is the extreme amount of power held by venture capitalists, who function as managers rather than passive investors. VCs’ buddies, middle-aged underachievers who need to pull a lucky connection to snag an executive position in a fast-growing company, tend to be inserted into such companies at high levels. That sets a tenor: that internal achievement matters less than the story you can tell as a free agent, especially if you can play the press and the investors’ social network. The issue doesn’t stop there, of course. Even for mid-level management positions and the higher engineering distinctions (once those exist) the best slots and projects tend, over time, to go to external people.

Two things drive this. The first is the social climbing mentality that a growing technology company has. Most founders think that they’re a higher calibre of people than the first engineers they hire. (Are they right? It depends on how they hire.) The first round of hires are often brought on with a bit of compromise: not exactly what the company wants, but good enough “for now” and affordable on a startup’s shoestring (relative to investor demands) budget. Now, a good entrepreneur will find talented people who’ve landed in the “bargain bin”: wrong schools, wrong career choices, or coming back into tech from something else. (Keep in mind that, at the level I’m talking about, “bargain bin” is still the $75-150k salary range– reasonable, but not the level seen at hedge funds or for established tech people. I’m talking about top-1% talent. Don’t bet your company’s start on anyone else.) That’s almost an arbitrage, for those who can detect talent at high levels. But for each of those “diamond in the rough” hires, there are 9 who are equally cheap but correctly priced (i.e. not very good). In the startup phase, these companies tend to assume that their early technical hires were “desperation hires” and throw them under the bus as soon as they can get “real” engineers, designers, and management. That social-climbing dynamic– constantly seeking “better” (read: more impressive on paper) people than what they have– lasts for years beyond the true startup phase. The second driver is the tendency of almost all companies to overpromise in hiring. Authority is zero-sum, and when authority is promised to external executives being sought, internal people must lose some. The end result is that the best projects and positions go first to people the company is trying to sell on joining it. Only if there are some goodies left over are they allocated to those who are already there.

Executive turnover hits morale in two ways. The first and more obvious one is that it makes it clear that the straight-and-narrow, pay-your-dues path is for losers. The second is more subtle. High-level turnover and constant change of priorities and initiatives mean that the lower-level people rarely get much done. They don’t have the runway. Ask a typical four-year veteran of such a company what he’s accomplished, and he’ll tell you all about the work that is no longer used, the project canceled three months before fruition because a new CTO arrived, and the miasma of unglamorous work (i.e. technical integration, maintenance necessary due to turnover) generated by this volatility that, while it might have been necessary to keep the company afloat, doesn’t show macroscopic velocity. That doesn’t make the case for promotion or advancement. Eventually, the high-power people realize that they can’t get anything done because of all the executive instability, and they leave.

At a certain point, companies get to a state where they no longer trust their own people to rise to any challenge beyond what can fit in a single Jira ticket. Promotion from within essentially stops. Titles will go up (especially because salaries won’t) but true advancement will be hard to find. Then, people stop leading.

One concept often used in corporate-speak is the distinction between “confirmational” and “aspirational” promotions. In a confirmational regime, people are promoted when they’re already operating competently at that level. If you’re a Senior Engineer, it means you’ve been performing as one for some time. Aspirational promotions indicate a belief that the employee will achieve that level at some point. Of course, every company will claim that its promotion system is confirmational. No employee wants to answer to a manager the company “decided to take a chance on” (that’s a sign that the employee is marginal or an underperformer) and no company would ever admit that some of its promotions are mistakes. One would argue that confirmational promotion is the right way to do things– even if it’s usually cited as an excuse for stinginess. (The company myth is that people are fairly evaluated, thus given roles, and compensated based on that role. The company reality is that people negotiate what compensation they can get, and then titles and managerial authority are back-filled to match payroll numbers.) Let’s, however, ignore these complexities and agree, for the moment, that confirmational promotion should be the goal. People should lead, and later be recognized, because “pre-recognizing” people as leaders tends to generate the culture of managerial entitlement that we know to be toxic. Okay, so what does it mean to lead?

I would say that leadership is to do things (1) for the benefit of a group, and (2) that one wasn’t asked to do. One doesn’t need to be a manager to lead: most good programmers are leaders, since the truly excellent ones can’t stop themselves from doing work that isn’t in any Jira ticket. Some managers are leaders– they protect the group from external adversity, and drive it toward a coherent shared vision– and some aren’t. In hierarchical companies, managers tend to “manage up”, which means they fail both criteria: they’re favoring their own advancement over the group’s needs, and they’re taking orders from on high. In those sorts of companies, managers become puppet chieftains, mostly there to prevent the group from selecting a leader (who might become an agitator, or even a unionist) of its own.

The idea behind making promotion confirmational is that people should lead, in a genuine positive-sum sense, before they manage. I’d tend to agree. Should promotion be confirmational (i.e. conservative) rather than aspirational? Sure, absolutely. So what’s the problem? Where is the cause of dysfunction?

The problem

Simply put, the system falls apart when one set of people gets confirmationally promoted and others are aspirationally advanced. The latter group, the ones who get the benefit of the doubt, will always win. However, companies typically find themselves forced to promise authority to attract executives, and they’re doing this while knowing nothing about what they’ll do at the organization. External executives are, by definition, aspirationally placed. A good negotiator with strong on-paper credentials, facing a social-climbing company, can always get more authority than his demonstrated ability (passing an interview) merits. Over time, those external hires begin to dominate, and the company has an escalating sense of being run “from elsewhere”. Moreover, this incentivizes both political behavior and job hopping. Now, I’m notoriously pro-”job hopper”. Ethical job hoppers (and most “hoppers” are far more ethical than traditional ladder climbers; when they lose faith in an institution, they leave it instead of abusing it) shouldn’t bear the stigma they do, because when the norm is for companies not to recognize internal achievement, it’s the best individual strategy. (Hate the game, not the player.) What I recognize, however, is that it’s not healthy for individual companies to have high turnover; but that’s exactly what they encourage when they fall into the pattern I describe.

Most companies, internally, have slogans like “leadership is action, not title” or “act like an owner”. They tell people that they should rise to the occasion and lead (as I defined the term, above) regardless of whether they’re given official authority. Those slogans are mostly empty talk, but even the most hierarchy-friendly executive will agree that the alternative (an army of disengaged clock-punchers who don’t do anything unless explicitly told to do it) is worse. If, however, a company loses faith in its own people (as constant external promotion suggests) then this is a really bad strategy for the individual. In most companies, attempting to lead without authority is a fast-track to getting oneself fired. Now, losing a job often comes with severance, but it might not, especially as companies replace honest layoffs with phony “performance” cases. The risk of losing 2-4 months of income (not to mention starting over, having another thing to explain in future interviews) is pretty much never justified by whatever upside “acting like an owner” might have. (A 10% “performance” bonus on the upside? Not worth risking getting fired. Trust me.) People (even managers) just stop leading after a while. This tends to make a company “comfortable” insofar as individual performance expectations become very low and the clock-puncher mentality becomes the norm– as long as you’re not obvious about it, you can coast– but it’s not what anyone really wants.

Solution?

The core of the problem, I think, is that companies are inherently going to make sweeter promises to those they are trying to entice to join than they will offer to those who are already there. Later hires usually get better deals (“salary inversion”, in HR terms). Traditionally, technology startups offset this issue by offering superior equity slices to early people; but, in 2014, employee option grants are so pathetic that this is no longer true. Just as this is true of salary, bonuses, and titles, it also seems to be true of authority.

Perhaps the problem is managerial authority itself. Now, I’m all for a conceptual hierarchy. That’s just how we think, as humans. We can’t hold more than about six or seven things in short-term memory at once, so for anything complex (high entropy) we need clusters, modules, and pruning of relationships. I just don’t think that rigid managerial hierarchies do much good. They create massive conflicts of interest and elevated classes whose sole purpose becomes to perpetuate their superiority, regardless of whether it benefits the organization.

I’ve written a lot about open allocation, and it seems that the biggest issue with it is that it makes it difficult for a company to hire external executives. They can’t be promised managerial authority if there isn’t much of that to go around. So the dinosaur types who are used to “being executives”– the ones displeased by the concept of a company where they have to convince people that their ideas are right, rather than just threatening to turn off plebes’ incomes– can no longer be enticed to join. I say: good. Fuck ‘em. Those wankers only cause problems anyway. Open allocation, at least in technology, is the way forward.

Under closed allocation, control over what people work on becomes, like anything else, a bargaining chip or commodity. At that point, this bargaining chip will be used to serve one of the company’s most pressing needs: recruiting. External promotion will become the norm, and internal advancement will cease as the leadership opportunities that might permit existing employees to demonstrate their ability (assuming that promotions are confirmational rather than aspirational– but we all know how complex that little issue is) disappear. Internal promotion is the first to go. But then internal lateral mobility (already reduced to enforce the closed-allocation regime) ceases as well, to pre-empt the chaos that might ensue as people jockey laterally for the best (read: externally career-advancing) projects. Soon enough, not only can internal people not get promoted, but they can’t really move laterally either. Then, people develop a total sense of “stuckness” and the company is asleep. If stack ranking is imposed in an attempt to “wake them up”, you get the warring-departments dynamic that tears a company to pieces. There’s no good there. In technology, closed allocation is just a dead end not worth exploring for any reason.

A company that wants to excel needs its people to excel. Externally hiring those who are attractive “on paper” won’t work, because often those executives (the ones who join mediocre companies) are the ones looking for sabbaticals, and those who aren’t will have one foot out the door. But as soon as companies make control over what gets worked on a bargaining chip, the slide to mediocrity is inevitable. External hiring is no solution precisely because those external hires won’t jolt the company into excellence; they will be brought down (or repelled) by the mediocrity.

Open allocation– the framework in which “promotion” takes the self-motivated form of greater challenges and larger achievements rather than increasing control over others (a zero-sum game)– is the only answer that I can see.

Look-ahead: a likely explanation for female disinterest in VC-funded startups.

There’s been quite a bit of cyber-ink flowing on the question of why so few women are in the software industry, especially at the top, and especially in VC-funded startups. Paul Graham’s comments on the matter, taken out of context by The Information, made him a lightning rod. There’s a lot of anger and passion around this topic, and I’m not going to do all of it justice. Why are there almost no women among venture capitalists, so few women being funded, and so few women rising in technology companies? It’s almost certainly not a lack of ability. Philip Greenspun argues that women avoid academia because it’s a crappy career. He makes a lot of strong points, and that essay is very much worth reading, even if I think a major factor (discussed here) is underexplored.

Why wouldn’t this fact (of academia being a crap career) also make men avoid it? If it’s shitty, isn’t it going to be avoided by everyone? Often cited is a gendered proclivity toward risk. People who take unhealthy and outlandish risks (such as by becoming drug dealers) tend to be men. So do overconfident people who assume they’ll end up on top of a vicious winner-take-all game. The outliers on both ends of society tend to be male. As a career with subjective upsides (prestige in addition to a middle-class salary) and severe, objective downsides, academia appeals to a certain type of quixotic, clueless male. Yet making bad decisions is hardly a trait of one gender. And we don’t see merely 1.5 or 2 times as many high-powered (IQ 140+) men as women making bad career decisions; we probably see 10 times as many. Silicon Valley is full of quixotic young men wasting their talents to make venture capitalists rich, and almost no women, and I don’t think that difference can be explained by biology alone.

I’m going to argue that a major component of this is not a biological trait of men or women, but an emergent property of the tendency, in heterosexual relationships, for the men to be older. I call this the “Look-Ahead Effect”. Heterosexual women, through the men they date, see doctors buying houses at 30 and software engineers living paycheck-to-paycheck at the same age. Women face a number of disadvantages in the career game, but they have access to a kind of high-quality information that prevents them from making bad career decisions. Men, on the other hand, tend to date younger women, so the career territory they see through their partners is ground they’ve already covered themselves.

When I was in a PhD program (for one year) I noticed that (a) women dropped out at higher rates than men, and (b) dropping out (for men and women) had no visible correlation with ability. One more interesting fact pertained to the women who stayed in graduate school: they tended either to date (and marry) younger men, or to pair up with same-age men within the department. Academic graduate school is special in this analysis. When women don’t have as much access to later-in-age data (because they’re in college, and not meeting many men older than 22) a larger number of them choose the first career step: a PhD program. But the first year of graduate school opens their dating pool up again to include men 3-5 years older than them (through graduate school and increasing contact with “the real world”). Once women start seeing first-hand what the academic career does to the men they date– how it destroys the confidence even of the highly intelligent ones who are supposed to find a natural home there– most of them get the hell out.

Men figure this stuff out, but a lot later, and usually at a time when they’ve lost a lot of choices due to age. The most prestigious full-time graduate programs won’t accept someone near or past 30, and the rest don’t do enough for one’s career to offset the opportunity cost. Women, on the other hand, get to see (through the guys they date) a longitudinal survey of the career landscape when they can still make changes.

I think it’s obvious how this applies to all the goofy, VC-funded startups in the Valley. Having a 5-year look-ahead, women tend to realize that it’s a losing game for most people who play, and avoid it like the plague. I can’t blame them in the least. If I’d had the benefit of a 5-year look-ahead, I wouldn’t have spent the time I did in VC-istan startups either. I did most of that stuff because I had no foresight, no ability to look into the future and see that the promise was false and the road led nowhere. If I had retained any interest in VC-istan at that age (and, really, I don’t at this point) I would have become a VC while I was young enough that I still could. That’s the only job in VC-istan that makes sense.

VC-istan 8: the Damaso Effect

Padre Damaso, one of the villains of the Filipino national novel, Noli me Tangere, is one of the most detestable literary characters, as a symbol of both colonial arrogance and severe theological incompetence. One of the novel’s remarks about colonialism is that it’s worsened by the specific types of people who implement colonial rule: those who failed in their mother country, and are taking part in a dangerous, isolating, and morally questionable project that is their last hope at acquiring authority. Colonizers tend to be people who have no justification for superior social status left but their national identity. One of the great and probably intractable tensions within the colonization process is that it forces the best (along with the rest) of the conquered society to subordinate to the worst of the conquering society. The total incompetence of the corrupt Spanish friars in Noli is just one example of this.

In 2014, the private-sector technology world is in a state of crisis, and it’s easy to see why. For all our purported progressivism and meritocracy, the reality of our industry is that it’s sliding backward into feudalism. Age discrimination, sexism, and classism are returning, undermining our claims of being a merit-based economy. Thanks to the clubby, collusive nature of venture capital, securing financing for a new technology business requires tapping into a feudal reputation economy that funds people like Lucas Duplan, while almost no one backs anything truly ambitious. Finally, there’s the pernicious resurgence of location (thanks to VCs’ unwillingness to fund anything more than 30 miles away from them) as a career-dominating factor, driving housing prices in the few still-viable metropolitan areas into the stratosphere. In so many ways, American society is going back in time, and private-sector technology is a driving force rather than a counterweight. What the fuck, pray tell, is going on? And how does this relate to the Damaso Effect?

Lawyers and doctors did something, purely out of self-interest, to prevent their work from being commoditized as American culture became increasingly commercial in the late 19th century. They professionalized. They invented ethical rules and processes that allowed them to work for businessmen (and the public) without subordinating. How this all works is covered in another essay, but it served a few purposes. First, the profession could maintain standards of education so as to keep membership in the profession as a form of credibility that is independent of managerial or client review. Second, by ensuring a basic credibility (and, much more important, employability) for good-faith members, it enabled professionals to meet ethical obligations (e.g. don’t kill patients) that supersede managerial or corporate authority. Third, it ensured some control over wages, although that was not its entire goal. In fact, the difference between unionization and professionalization seems to be as follows. Unions are employed when the labor is a commodity, but ensure that the commoditization happens in a fair way (without collective bargaining, and in the absence of a society-wide basic income, that never occurs). Unions accept that the labor is a commodity, but demand a fair rate of exchange. Professionalization exists when there is some prevailing reason (usually an ethical one, such as in medicine) to prevent full commoditization. If it seems like I’m whitewashing history here, let me point out that the American Medical Association, to name one example, has done some atrocious things in its history. It originally opposed universal healthcare; it has received some karma, insofar as the inventively mean-spirited U.S. health insurance system has not only commoditized medical services, but done so on terms that are unfavorable to physician and patient both. I don’t mean to say that the professions have always been on the right side of history, because that’s clearly not the case; professionalization is a good idea, often poorly realized.

The ideal behind professionalization is to separate two senses of what it means to “work for” someone: (1) to provide services, versus (2) to subordinate fully. Its goal is to allow a set of highly intelligent, skilled people to deliver services on a fair market without having to subordinate inappropriately (such as providing personal services unrelated to the work, because of the power relationship that exists) as is the norm in mainstream business culture.

As a tribe, software professionals failed in this. We did not professionalize, nor did we unionize. In the Silicon Valley of the 1960s and ’70s, it was probably impossible to see the need for doing so: technologists were fully off the radar of the mainstream business culture, mostly lived on cheap land no one cared about, and had the autonomy to manage themselves and answer to their own. Hewlett-Packard, back in its heyday, was run by engineers, and for the benefit of engineers. Over time, that changed in the Valley. Technologists and mainstream, corporate businessmen were forced to come together. It became a colonial relationship quickly; the technologists, by failing to fight for themselves and their independence, became the conquered tribe.

Now it’s 2014, and the common sentiment is that software engineers are overpaid, entitled crybabies. I demolished this perception here. Mostly, that “software engineers are overpaid” whining is propaganda from those who pay software engineers, and who have a vested interest. It has been joined lately by leftist agitators, angry at the harmful effects of technology wealth in the Bay Area, who have failed thus far to grasp that the housing problem has more to do with $3-million-per-year, 11-to-3 product executives (and their trophy spouses who have nothing to do but fight for the NIMBY regulations that keep housing overpriced) than $120,000-per-year software engineers. There are good software jobs out there (I have one, for now) but, if anything, relative to the negatives of the software industry in general (low autonomy relative to intellectual ability, frequent job changes necessitated by employers’ low concern for employee career needs, bad management), the vast majority of software engineers are underpaid. Unless they move into management, their incomes plateau at a level far below the cost of a house in the Bay Area. The truth is that almost none of the economic value created in the recent technology bubble has gone to software engineers or lifelong technologists. Almost all of it has gone to investors, to well-connected do-nothings able to win sinecures from reputable investors and “advisors”, and to management. This should surprise no one. Technology professionals and software engineers are, in general, a conquered tribe, and the great social resource that is their brains is being mined for someone else’s benefit.

Here’s the Damaso Effect. Where do those Silicon Valley elites come from? I nailed this in this Quora answer. They come from the colonizing power, which is the mainstream business culture. This is the society that favors pedigree over (dangerous, subversive) creativity and true intellect, the one whose narcissism brought back age discrimination and makes sexism so hard to kick, even in software, which should, by rights, be a meritocracy. That mainstream business world is the one where work isn’t about building things or adding value to the world, but is purely an avenue through which to dominate others. OK, now I’ll admit that that’s an uncharitable depiction. In fact, corporate capitalism and its massive companies have solved quite a few problems well. And Wall Street, the capital of that world, is morally quite a bit better than its (execrable) reputation might suggest. It may seem very un-me-like to say this, but there are a lot of intelligent, forward-thinking, very good people in the mainstream business culture (“MBA culture”). However, those are not the ones who get sent to Silicon Valley by our colonial masters. The failures are the ones sent into VC firms and TechCrunch-approved startups to manage nerds. Not only are they the ones who failed out of the MBA culture, but they’re bitter as hell about it, too. MBA school told them that they’d be working on $50-billion private-equity deals and buying Manhattan penthouses, and they’re stuck bossing nerds around in Mountain View. They’re pissed.

Let me bring Zed Shaw in on this. His essay on NYC’s startup scene (and its inability to get off the ground) is brilliant and should be read in full (seriously, go read it and come back to me when you’re done) but the basic point is that, compared to the sums of money that real financiers encounter, startups are puny and meaningless. A couple of quotes I’ll pull in:

During the course of our meetings I asked him how much his “small” hedge fund was worth.

He told me:

30 BILLION DOLLARS

That’s right. His little hedge fund was worth more money than thousands of Silicon Valley startups combined on a good day. (Emphasis mine.) He wasn’t being modest either. It was “only” worth 30 billion dollars.

Zed has a strong point. The startup scene has the feeling of academic politics: vicious intrigue, because the stakes are so small. The complete lack of ethics seen in current-day technology executives is also a result of this. It’s the False Poverty Effect. When people feel poor, despite objective privilege and power, they’re more inclined to do unethical things because, goddammit, life owes them a break. That startup CEO whose investor buddies allowed him to pay himself $200,000 per year is probably the poorest person in his Harvard Business School class, and feels deeply inferior to the hedge-fund guys and MD-level bankers he drank with in MBA school.

This also gets into why hedge funds get better people (even, in NYC, for pure programming roles) than technology startups. Venture capitalists give you $5 million and manage you; they pay to manage. Hedge fund investors pay you to manage (their money). As long as you’re delivering returns, they stay out of your hair. It seems obvious that this would push the best business people into high finance, not VC-funded technology.

The lack of high-quality businessmen in the VC-funded tech scene hurts all of us. For all my railing against that ecosystem, I’d consider doing a technology startup (as a founder) if I could find a business co-founder who was genuinely at my level. For founders, it’s got to be code (tech co-founder) or contacts (business co-founder), and I bring the code. At my current age and level of development, I’m a Tech 8. A typical graduate from Harvard Business School might be a Biz 5. (I’m a harsh grader; that’s why I gave myself an 8.) Biz 6 means that a person comes with connections to partners at top VC firms and resources (namely, funding) in hand. The Biz 7s go skiing at Tahoe with the top kingmakers in the Valley, and count a billionaire or two in their social circle. If I were to take a business co-founder (noting that he’d become CEO and my boss) I’d be inclined to hold out for an 8 or 9, but (at least, in New York) I never seemed to meet Biz 8s or 9s in VC-funded technology, and I think I’ve got a grasp on why. Business 8s just aren’t interested in asking some 33-year-old California man-child for a piddling few million bucks (that come along with nasty strings, like counterproductive upper management). They have better options. To the Business 8+ out there, whatever the VCs are doing in Silicon Valley is a miserable sideshow.

It’s actually weird and jarring to see how bad the “dating scene” between technical and business people is in the startup world. Lifelong technologists, who are deeply passionate about building great technology, don’t have many places elsewhere to go. So a lot of the Tech 9s and 10s stick around, while their business counterparts leave and a Biz 7 is the darling at the ball. I’m not a fan of Peter Shih, but I must thank him for giving us the term “49ers” (4s who act like 9s). The “soft” side, the business world of investors and well-connected people who think their modest connections deserve to trade at an exorbitant price against your talent, is full of 49ers– because Business 9s know to go nowhere near the piddling stakes of the VC-funded world. Like a Midwestern town bussing its criminal element to San Francisco (yes, that actually happened), the mainstream business culture sends its worst and its failures into VC-funded tech. Have an MBA, but not smart enough for statistical arbitrage? Your lack of mathematical intelligence means you must have “soft skills” and be a whiz at evaluating companies; Sand Hill Road is hiring!

The venture-funded startup world, then, has the best of one world (passionate lifelong technologists) answering to the people who failed out of their mother country: mainstream corporate culture.

The question is: what should be done about this? Is there a solution? Since the Tech 8s and 9s and 10s can’t find appropriate matches in the VC-funded world (and, for their part, most Tech 8+ go into hedge funds or large companies– not bad places, but far away from new-business formation– by their mid-30s), where ought they to go? Is there a more natural home for Tech 8+? What might it look like? The answer is surprising, but it’s the mid-risk / mid-growth business that venture capitalists have been decrying for years as “lifestyle businesses”. The natural home of the top-tier technologist is not in the flash-in-the-pan world of VC, but in the get-rich-slowly world of steady, 20 to 40 percent per year growth due to technical enhancement (not rapid personnel growth and creepy publicity plays, as the VCs prefer).

Is there a way to reliably institutionalize that mid-risk / mid-growth space, which currently must resort to personal savings (“bootstrapping”)– a scarce resource, given that engineers are systematically underpaid– just as venture capital has institutionalized the high-risk / get-big-or-die region of the risk/growth spectrum? Can it be done with a K-strategic emphasis that forges high-quality businesses in addition to high-value ones? Well, the answer to that one is: I’m not sure. I think so. It’s certainly worth trying out. Doing so would be good for technology, good for the world, and quite possibly very lucrative. The real birth of the future is going to come from a fleet of a few thousand highly autonomous “lifestyle” businesses– and not from VC-managed get-huge-or-die gambits.

The U.S. conservative movement is a failed eugenics project. Here’s why it could never have worked.

At the heart of the U.S. conservative movement, and most religious conservative movements, is a reproductive agenda. Old-style religious meddling in reproduction had a strong “make more of us” character to it– resulting in blanket policies designed to encourage reproduction across a society– but the later incarnations of right-wing authoritarianism, especially as they have mostly divorced themselves from religion, have been oriented more strongly toward goals judged to be eugenic, or to favor the reproduction of desirable individuals and genes; instead of a broad-based “make more of us” tribalism, it becomes an attempt to control the selection process.

The term eugenics has an ugly reputation, much earned through history, but let me offer a neutral definition of the term. Eugenics (“good genes”) is the idea that we should consciously control the genetic makeup of the humans who are born into the world. It is not a science, since the definition of eu- is intensely subjective. As “eugenics” has been used throughout history to justify blatant racism and murder, the very concept has a negative reputation. That said, strong arguments can be made in favor of certain mild, elective forms of eugenics. For example, subsidized or free higher education is (although there are other intents behind it) a socially acceptable positive eugenic program: the removal of a dysgenic economic force (education costs, usually borne by parents) that, empirically speaking, massively reduces fertility among the most capable people while having no effect on the least capable.

The eugenic impulse is, in truth, fairly common and rather mundane. The moral mainstream seems to agree that eugenics (if not given that stigmatized name) is morally acceptable when participation is voluntary (i.e. no one is forced to reproduce, or not to do so) and positive (i.e. focused on encouraging desirable reproduction, rather than discouraging those deemed “unwanted”) but unacceptable when involuntary (coercive or prohibitive) and negative. The only socially accepted (and often legislated) case of negative and often prohibitive eugenics is the universal taboo against incest. That one has millennia of evolution behind it, and is also fair (i.e. it doesn’t single out people as unwanted, but prohibits intrafamilial couplings, known to produce unhealthy offspring, in general) so it’s somewhat of a special case.

Let’s talk about the specific eugenics of the American right wing. The obsessions over who has sex with whom, the inconsistency between hard-line, literal Christianity and the un-Christ-like rightist economics, and all of the myriad mean-spirited weirdnesses (such as U.S. private health insurance, a monster that even most conservatives loathe at this point) that make up the U.S. right-wing movement; all are tied to a certain eugenic agenda, even if the definition of “eu-” is left intentionally vague. In addition to lingering racism, the American right wing unifies two varieties (one secular, one religious) of the same idea: Social Darwinism and predestination-centric Calvinism. This amalgam I would call Social Calvinism. The problem with it is that it doesn’t make any sense. It fails on its own terms, and the religious color it allowed itself to gain has only deepened its self-contradiction, especially now that sexuality and reproduction have been largely separated by birth control.

In the West, religion has always held strong opinions on reproduction, because the dominant religious forces are those that were able to out-populate the others. “Be fruitful and multiply.” This “us versus them” dynamic had a certain positive (in the sense of “positive eugenics”; I don’t mean to call it “good”) but coercive flair to it. The religious society sought much more strongly to increase its numbers within the world than to differentially or absolutely discourage reproduction by individuals judged as undesirable within its numbers. That said, it still had some ugly manifestations. One prominent one is the traditional Abrahamic religions’ intolerance of homosexuality and non-reproductive sex in general. In modern times, homophobia is pure ignorant bigotry, but its original (if subconscious) intention was to make a religious society populate quickly, which put it at odds with non-reproductive sexuality of all forms.

Predestination (for which Calvinism is known) is a concept that emerged, much later, when people did something very dangerous to literalist religion: they thought about it. If you take religious literalism– born in the illogical chaos of antiquity– and bring it to its logical conclusions, funny things happen. An all-knowing and all-powerful God would, one can reason, have full knowledge and authority over every soul’s final destiny (heaven or hell). This meant that some people were pre-selected to be spiritual winners (the Elect) and the rest were refuse, born only to live through about seven decades of sin, followed by an eternity of unimaginable torture.

Perhaps surprisingly, predestination seemed to have more motivational capacity than the older, behavior-driven morality of Catholicism. Why would this be? People are loath to believe in something as horrible as eternal damnation for themselves (even if some enjoy the thought for others) and so they will assume themselves to be Elect. But since they’re never quite sure, bad behavior will unsettle them with a creepy cognitive dissonance that is far more effective than ratiocination about punishments and rewards. The behavior-driven framework of the Catholic Church (donations in the form of indulgences often came with specific numbers of years by which time in purgatory was reduced) allows that a bad action can be cancelled out with future good actions, making the afterlife merely an extension of the “if I do this, then I get that” hedonic calculus. Calvinism introduced a fear of shame. Bad actions might be a sign of being one of those incorrigibly bad, damned people.

Calvinist predestination was not a successful meme (and even many of those who identify themselves in modern times as Calvinists have largely rejected it). “Our God is a sadistic asshole; he tortures people eternally for being born the wrong way” is not a selling point for any religion. That said, the idea of natural (as opposed to spiritual) predestination, as well as the Calvinist evolution from guilt-based (Catholicism) to shame-based (Calvinist) Christian morality, have lived on in American society.

Fundamental to the morality of capitalism is that some actors make better uses of resources than others (which is not controversial) and deserve to have more (likewise, not controversial). Applied to humans, this is generally if uneasily accepted; applied to organizations, it’s an obvious truth (no one wants to see inefficient, pointless institutions kept alive). Calvinism argued that one’s pre-determined status (as Elect or damned) could be ascertained from one’s actions; conservative capitalism argues that an actor’s (largely innate and naturally pre-determined) value can be ascertained by its success on the market.

Social Darwinism (which Charles Darwin vehemently rejected) gave a fully secular and scientific-sounding basis for these threads of thought, which were losing religious steam by the end of the 19th century. The idea that market mechanics and “creative destruction” ought to apply to institutions, patterns of behavior, and especially business organizations is controversial to almost no one. Incapable and obsolete organizations, whose upkeep costs have exceeded their social value, should die in order to free up room for newer ones. Where there is immense controversy is what should happen to people when they fail, economically. Should they starve to death in the streets? Should they be fed and clothed, but denied health care, as in the U.S.? Or should they be permitted a lower-middle-class existence by a welfare state, allowing them to recover and perhaps have another shot at economic success? The Social Darwinist seeks not to kill failed individuals per se, but to minimize their effect on society. It might be better to feed them than have them rebel, but allowing their medical treatment (on the public dime) is a bridge too far (if they’re sick, they can’t take up arms). It’s not about sadism per se, but effect minimization: to end their cultural and economic (and possibly physical) reproduction. It is a cold and fundamentally statist worldview. Where it dovetails with predestination is in the idea that certain innately undesirable people, damned early on if not from birth, deserve to be met with full effect minimization (e.g. long prison sentences since there is no hope of rehabilitation; persistent poverty because any resources given to them, they will waste) because any effect they have on the world will be negative. Whether they are killed, imprisoned, enslaved, or merely marginalized generally comes down to what is most convenient– and, therefore, effect-minimizing– and that is an artifact of what a society considers socially acceptable.

If we understand Calvinist predestination, and Social Darwinism as well, we can start to see a eugenic plan forming. Throughout almost all of our evolutionary history, prosperity and fecundity were correlated. Animals that won and controlled resources passed along their genes; those that couldn’t do so, died out. Social Darwinism, at the heart of the American conservative movement, believes that this process should continue in human society. More specifically, it holds to a few core tenets. First is that individual success in the market is a sign of innate personal merit. Second is that such merit is, at least partly, genetic and predetermined. Few would hold this correlation to be absolute, but the Social Darwinist considers it strong enough to act on. Third is that prosperity and fertility will, as they have over the billion years before modern civilization, necessarily correlate. The aspects of Social Darwinist policy that seem mean-spirited are justified by this third tenet: the main threat that a welfare state poses is that these poor (and, according to this theory, undesirable) people will take that money and breed. South Carolina’s Republican Lieutenant Governor, Andre Bauer, made this attitude explicit:

My grandmother was not a highly educated woman, but she told me as a small child to quit feeding stray animals. You know why? Because they breed. You’re facilitating the problem if you give an animal or a person ample food supply. They will reproduce, especially ones that don’t think too much further than that. And so what you’ve got to do is you’ve got to curtail that type of behavior. They don’t know any better.

The hydra of the American right wing has many heads. It’s got the religious Bible-thumping ones, the overtly racist ones, and the pseudoscientific and generally atheistic ones now coming out of Silicon Valley’s neckbeard right-libertarianism and the worse half of the “men’s rights” movement. What unites them is a commitment to the idea that some people are innately inferior and should be punished by society, with that punishment ranging from the outright sadistic to the much more common effect-minimizing (marginalization) levels.

How it falls down

Social Calvinism is a repugnant ideology. Calvinistic predestination is an idea so bad that even conservative religion, for the most part, discarded it. The same scientists who discovered Darwinian evolution (as a truth of what is in nature, not of what should be in the human world) rejected Social Darwinism outright. It has also made a mockery of itself. It fails on its own terms. The most politically visible, mean-spirited, but also criminally inefficient manifestation of this psychotic ideology is in our health insurance system. Upper-middle-class, highly-educated people suffer– just as much as the poor do– from crappy health coverage. If the prescriptive intent behind a mean-spirited health policy is Social Calvinist in nature, the greed and inefficiency and mind-blowing stupidity of it affect the “undesirable” and “desirable” alike (unless one believes that only the 0.005% of the world population who can afford to self-insure are “desirable”). The healthcare fiasco is showing that a society as firmly committed to Social Calvinism as the U.S.– so committed to it that even Obama couldn’t make public-option (much less single-payer) healthcare a reality– can’t even succeed on its own terms. The economic malaise of the 2000s “lost decade” and the various morale crises erupting in the nation (Tea Party, #Occupy) only support the idea that the American social model fails both on libertarian and humanitarian terms.

Why do I argue that Social Calvinism could never work, in a civilized society? To put it plainly, it misunderstands evolution and, more to the point, reproduction (both biological and cultural). Nature’s correlation between prosperity and fecundity ended in the human world a long time ago, and economic stresses have undesirable side effects (which I’ll cover) on how people reproduce.

Let’s talk about biology; most of the ideas here also apply (and more strongly, due to the faster rate of memetic proliferation) to cultural reproduction. After the horrors justified in the name of “eugenics” in the mid-20th century, no civilized society is going to start prohibiting reproduction. It’s not quite a “universal right”, but depriving people of the biological equipment necessary to reproduce is considered inhumane, and murdering children after the fact is (quite rightly) completely unacceptable. So people can reproduce, effectively, as much as they want. With birth control in the mix, most people can also reproduce as little as they want. So they have nearly total control over how much they reproduce, whether they are poor or rich. The Social Calvinist believes that the “undesirables” will react to socioeconomic punishment by curtailing reproduction. But do we see that happening? No, not really.

I mentioned Social Calvinism’s 3 core tenets above: (1) that socioeconomic prosperity correlates to personal merit, (2) that merit is at least significantly genetic in nature, and (3) that people will respond to prosperity by increasing reproduction (as if children were a “normal” consumer good) and to punishment by decreasing it. The first of these is highly debatable: desirable traits like intelligence, creativity and empathy may lead to personal success, but so does a lack of moral restraint. The people at the very top of society seem to be, for the most part, objectively undesirable– at least, in terms of their behavior (whether those negative traits are biological is less clear). The second is perhaps unpleasant as a fact (no humanitarian likes the idea that what makes a “good” or “bad” person is partially genetic) but almost certainly true. The third seems to fail us. Or, let me take a more nuanced view of it. Do people respond to economic impulses by controlling reproduction? Of course, they do; but not in the way that one might think.

First, let’s talk about economic stress. Stress can be good (“eustress”) or bad (“distress”) but in large doses, even the good kind can be focus-narrowing, if not hypomanic or even outright toxic. Rather than focusing on objective hardship or plenty, I want to examine the subjective sense of unhappiness with one’s socioeconomic position, which will determine how much stress a person experiences and which kind it is. Likewise, economic inequality (by providing incentive for productive activity) can be for the social good– it’s clearly a motivator– but it is a source of stress (without attaching a directional judgment to the word). The more socioeconomic inequality there is, the more of this stress society will generate. Proponents of high levels of economic inequality will argue that it serves eustress to the desirable people and institutions and distress to the less effective ones. Yet, if we focus on the subjective matter of whether an individual feels happy or distressed, I’d expect this to be untrue. People, in my observation, tend to feel rich or poor not based on where they are, economically, but on how they measure up to the expectations derived from their natural ability. A person with a 140 IQ who ends up as a subordinate, making a merely average-plus living doing uninteresting work, is judged (and will judge himself) as a failure. Even if that person has the gross resources necessary to reproduce (the baseline level required is quite low) he will be disinclined to do so, believing his economic situation to be poor and the prospects for any progeny to be dismal. On the other hand, a person with a 100 IQ who ends up with the same average-plus income (as a leader, not a subordinate, but with the same income and wealth as the person with the 140 IQ above) will face life with confidence and, if having children is naturally something he wants, be inclined to start a family early, and possibly to have a large one.

What am I really saying here? I think that, while people might believe that meritocracy is a desirable social ideal, most people respond emotionally not to the component of their economic outcome derived from natural (possibly genetic) merit or hard work, but to the random noise term. People have a hard time believing that randomness is just that (hence, the amount of money spent on lottery tickets) and interpret this noise term to represent how much “society” likes them. In large part, we’re biologically programmed to be this way; most of us get more of a warm feeling from windfalls that come from people liking us than from those derived from natural merit or hard work. However, modern society is so complex that this variable can be regarded as pure noise. Why? Because we, as humans, devise social strategies to make ourselves liked by an unknown stream of people and contexts we will meet in the future, but whether the people and contexts we actually encounter (“serendipity”) match those strategies is just as random as the Brownian motion of the stock market. Thus, the subjective sense of socioeconomic eustress or distress that drives the desire to reproduce comes not from personal merit (genetic or otherwise) but from something so random that it will have a correlation of 0.0 with pretty much anything.
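To make the reasoning concrete, here is a minimal simulation sketch. It is my own illustration, not from the original post; the variable names and the even merit-versus-luck split are assumptions chosen only for simplicity. It models outcome as merit plus luck, and satisfaction as outcome relative to the expectation set by one’s own merit, so satisfaction reduces to the luck term and shows essentially zero correlation with merit:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical, standardized quantities (pure illustration)
merit = rng.normal(0.0, 1.0, n)   # natural ability / hard work
luck = rng.normal(0.0, 1.0, n)    # serendipity: which people and contexts one happens to meet
outcome = merit + luck            # realized socioeconomic outcome

# Satisfaction: outcome relative to what one's own merit "should" have earned.
# Under this toy model, that is exactly the luck term.
satisfaction = outcome - merit

print(round(np.corrcoef(satisfaction, merit)[0, 1], 3))    # ~0.0: no relationship to merit
print(round(np.corrcoef(satisfaction, outcome)[0, 1], 3))  # ~0.71: yet it tracks how "rewarded" one feels

Under this sketch, any reward-and-punishment scheme keyed to felt outcomes acts, emotionally, on the luck term rather than the merit term, which is the point of the paragraph above.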

This kills any hope that socioeconomic rewards and punishments might have a eugenic effect, because the part that people respond to on an emotional level (which drives decisions of family planning) is the component uncorrelated with the desired natural traits. There is a way to change that, but it’s barbaric. If society accepted widespread death among the poor– and, in particular, among poor children (many of whom have shown no lack of individual merit; i.e. complete innocents)– then it could recreate a pre-civilized and truly Darwinian state in which absolute prosperity (rather than relative/subjective satisfaction) has a major effect on genetic proliferation.

Now, I’ll go further. I think the evidence is strong that socioeconomic inequality has a second-order but potent dysgenic effect. Even when controlling for socioeconomic status, ethnicity, geography and all the rest, IQ scores seem to be negatively correlated with fertility. Less educated and intelligent people are reproducing more, while the people that humanity should want in its future seem to be holding off, having fewer children and waiting longer (typically, into their late 20s or early 30s) to have them. Why? I have a strong suspicion as to the reason.

Let’s be blunt about it. There are a lot of willfully ignorant, uneducated, and crass people out there, and I can’t imagine them saying, “I’m not going to have a child until I have a steady job with health benefits”. This isn’t about IQ or physical health necessarily; just about thoughtfulness and the ability to show empathy for a person who does not exist yet. Whether rich or poor, desirable people tend to be more thoughtful about their effects on other people than undesirable ones. The effect of socioeconomic stress and volatility will be to reduce the reproductive impulse among the thoughtful, future-oriented sorts of people that we want to have reproducing. It also seems to me that such stresses increase reproduction among the present-oriented, thoughtless sorts of people that we would rather not see highly represented in the future.

I realize that speaking so boldly about eugenics (or dysgenic threats, as I have) is a dangerous (and often socially unacceptable) thing. To make it clear: yes, I worry about dysgenic risk. Now some of the more brazen (and, in some cases, deeply racist) eugenicists freak out about higher rates of fertility in developing (esp. non-white) countries, and I really don’t. Do I care if the people of the future look like me? Absolutely not. But it would be a shame if, 100,000 years from now, they were incapable of thinking like me. I don’t consider it likely that humanity will fall into something like Idiocracy; but I certainly think it is possible. (A more credible threat is that, over a few hundred years, societies with high economic inequality drift, genetically, in an undesirable direction, producing a change that is subtle but enough to have macroscopic effects.)

Why, at a fundamental level, does a harsher and more inequitable (and more stressful) society increase dysgenic risk? Here’s my best explanation. Evolutionary ecology describes two reproductive strategies, r- and K-selection, which correspond to optimizing for quantity versus quality of offspring. The r-strategist has lots of offspring, gives minimal parental investment, and few will survive. An example is a frog producing a hundred tadpoles. The K-strategist invests heavily in a smaller number of high-quality offspring, each with a much higher individual shot at surviving. Whales and elephants are K-strategists with long gestation periods and few offspring, but a lot of care given to them. Neither is “better” than the other, and each succeeds in different circumstances. The r-strategist tends to repopulate quickest after a catastrophe, while the K-strategist succeeds differentially at saturation.

It is, in fact, inaccurate to characterize highly evolved, complex life forms such as mammals as strict r- or K-strategists. As humans, we’re clearly both. We have an r-selective and a K-selective sexual drive, and one could argue that much of the human story is about the arms race between the two.

The r-selective sex drive wants promiscuity, has a strong present-orientation, and exhibits a total lack of moral restraint– it will kill, rape, or cheat to get its goo out there. The K-selective sex drive supports monogamy, is future-oriented, and values a stable and just society. It wants laws and cultivation (culture) and progress. Traditional Abrahamic religions have associated the r-drive with “evil” and sin. I wouldn’t go that far. In animals it is clearly inappropriate to put any moral weight into r- or K-selection, and it’s not clear that we should be doing that to natural urges that all people have (such as calling the r-selective component of our genetic makeup “original sin”). How people act on those is another matter. The tensions between the r- and K-drives have produced much art and philosophy, but civilization demands that people mostly follow their K-drives. While age and gender do not correlate as strongly to the r/K divide as stereotypes would insist (there are r-driven older women, and K-driven young men) it is nonetheless evident that most of society’s bad actors are those prone to the strongest r-drive: uninhibited young men, typically driven by lust, arrogance and greed. In fact, we have a clinical term for people who behave in a way that is r-optimal (or, at least, was so in the state of nature) but not socially acceptable: psychopaths. From an r-selective standpoint, psychopathy conferred an evolutionary advantage, and that’s why it’s in our genome.

Both sexual drives (r- and K-) exist in all humans, but it wasn’t until the K-drive triumphed that civilization could properly begin. In pre-monogamous societies, conflicts between men over status (because, when “alpha” men have 20 mates and low-status men have none, the stakes are much greater) were so common that between a quarter and a half of men died in positional violence with other men. Religions that mandated monogamy, or at least restrained polygamy as Islam did, were able to build lasting civilizations, while societies that accepted pre-monogamous distributions of sexual access were unable to get past the chaos of constant positional violence.

There are many who argue that the contemporary acceptance of casual sex constitutes a return to pre-monogamous behaviors. I don’t care to get far into this one, if only because I find the hand-wringing about the topic (on both sides) to be rather pointless. Do we see dysgenic patterns in the most visible casual sex markets (such as the one that occurs in typical American colleges)? Absolutely, we do. Even if we reject the idea that higher-quality people are less prone to r-driven casual sex, the way people (of both sexes) select partners in that game is visibly dysgenic. But to the biological future (culture is another matter) of the human species, that stuff is pretty harmless– thanks to birth control. This is where the religious conservative movement shoots itself in the foot; it argues that the advent of birth control created uncivil sexual behavior. In truth, bad sexual behavior is as old as dirt, has always been a part of the human world and probably always will be; the best thing for humanity is for it to be rendered non-reproductive, mitigating the dysgenic agents that brought psychopathy into our genome. (On the other hand, if human sexual behavior devolved to the state of high school or college casual sex and remained reproductive, the species would devolve into H. pickupartisticus and be kaputt within 500 years. I would short-sell the human species and buy sentient-octopus futures at that point.)

If humans have two sexual drives, it stands to reason that those drives would react differently to various circumstances. This brings to mind the relationship of each to socioeconomic stress. The r-drive is enhanced by socioeconomic stress– both eustress and distress. Eustress-driven r-sexuality is seen in the immensely powerful businessman or politician who frequents prostitutes, not because he is interested in having well-adjusted children (or even in having children at all) but to see if he can get away with it; the distress-driven r-sexuality has more of an escapist, “sex as drug”, flavor to it. In an evolutionary context, it makes sense that the r-drive should be activated by stress, since the r-drive is what enables a species to populate rapidly after an ecological catastrophe. On the other hand, the K-drive is weakened by socioeconomic stress and volatility. It doesn’t want to bring children into a future that might be miserable or dangerously unpredictable. The K-drive’s reaction to socioeconomic eustress is busyness (“I can’t have kids right now; my career’s taking off”) and its reaction to distress is to reduce libido as part of a symptomatic profile very similar to depression.

The result of all of this is that, should society fall into a damaged state where socioeconomic inequality and stress are rampant, the r-drive will be more successful at pushing its way to reproduction, while the K-drive is muted. The consequence is that the people who come into the future will disproportionately be the offspring of r-driven parents and couplings. Even if we reject the idea that undesirable people have stronger r-drives relative to their K-drives (although I believe that to be true) the enhanced power of the r-strategic sexual drive will influence partner selection and produce worse couplings. Over time, this presents a serious risk to the genetic health of the society.

Just as Mike Judge’s Idiocracy is more true of culture than of biology, we see the overgrown r-drive in the U.S.’s hypersexualized (but deeply unsexy) popular culture, and the degradation is happening much faster to the culture than it possibly could to our gene pool, given the relatively slow rate of biological evolution. Some wouldn’t see any correlation whatsoever between the return of the Gilded Age post-1980 and Miley Cyrus’s “twerking”, but I think that there’s a direct connection.

Conclusion

The Social Calvinism of the American right wing believes that severe socioeconomic inequality is necessary to flush the “undesirables” to the bottom, deprive them of resources, and prevent them from reproducing. Inherent to this strategy is the presumption (and a false one) that people are future-oriented and directed by the K-selective sexual drive, which is reduced by socioeconomic adversity. In reality, the more primitive (and more harmful, if it results in reproduction) r-selective sexual drive is enhanced by socioeconomic stresses.

In reality, socioeconomic volatility reduces the K-selective drive of most people, rich and poor, and enhances the r-selective drive. The reason is that a person’s subjective sense of satisfaction with socioeconomic status is not based on whether he or she is naturally “desirable” to society, but on his or her performance relative to natural ability and industry– which is, in effect, a noise variable. Even if we do not accept that desirable people are more likely to have strong K-drives and weak r-drives, it is empirically true (seen in millennia of human sexual behavior) that people operating under the K-drive choose better partners than those operating under the r-drive.

The American conservative movement argues, fundamentally, that a mean-spirited society is the only way to prevent dysgenic risk. It argues, for example, that a welfare state will encourage the reproductive proliferation of undesirable people. The reality is otherwise. Thoughtful people, who look at the horrors of American healthcare and the rapid escalation of education costs, curtail reproduction even if they are objectively “genetically desirable” and their children are likely to perform well, in absolute terms. Thoughtless people, pushed by powerful r-selective sex drives, will not be reproductively discouraged, and might (in fact) be encouraged, by the stresses and volatility (but, also, by undeserved rewards) of the harsher society. Therefore, American Social Calvinism actually aggravates the very dysgenic risk that it exists to address.

Was 2013 a “lost year” for technology? Not necessarily.

The verdict seems to be in. According to the press, 2013 was just a god-awful, embarrassing, downright shameful year for the technology industry, and especially Silicon Valley.

Christopher Mims voices the prevailing sentiment here:

All in, 2013 was an embarrassment for the entire tech industry and the engine that powers it—Silicon Valley. Innovation was replaced by financial engineering, mergers and acquisitions, and evasion of regulations. Not a single breakthrough product was unveiled—and for reasons outlined below, Google Glass doesn’t count.

He continues to point out the poor performance of high-profile product launches, the abysmal behavior of the industry’s “ruling class”– venture capitalists and leading executives– and the fallout from revelations like the NSA’s Prism program. Yes, 2013 brought forth a general miasma of bad faith, shitty ideas, and creepy, neoreactionary bubble zeitgeists: Uber’s exploitative airline-style pricing and BitTulip mania are just two prominent examples.

He didn’t cover everything; presumably for space, he gave no coverage to Sean Parker’s environmental catastrophe of a wedding (and the 10,000-word rant he penned while off his meds) and its continuing environmental effects. Nor did he cover the growing social unrest in California, culminating in the blockades against “Google buses”. Nor did he mention the rash of unqualified founders and mediocre companies like Summly, Snapchat, Knewton, and Clinkle and all the bizarre work (behind the scenes, by the increasingly country-club-like cadre of leading VCs) that went into engineering successes for these otherwise nonviable firms. In Mims’s tear-down of technology for its sins, he didn’t even scratch the surface, and even with the slight coverage given, 2013 in tech looks terrible. 

So, was 2013 just a toilet of a year, utterly devoid of value? Should we be ashamed to have lived through it?

No. Because technology doesn’t fucking work that way. Even when the news is full of pissers, there’s great work being done, much of which won’t come to fruition until 2014, 2015, or even 2030. Technology, done right, is about the long game and getting rich– no, making everyone rich– slowly. (Making everyone rich is, of course, not something that will be achieved in one quarter or even one decade.) “Viral” marketing and “hockey stick” obsessions are embarrassments to us. We have no interest in engineering that sort of thing, and we don’t believe we have the talent to make it happen reliably– because we’re pretty sure that no one does. But we’re very good, in technology, at making things 10, or 20, or 50 percent more efficient year-on-year. Those small gains and occasional big wins amount, in the aggregate, to world economic growth at a 5% annual rate– nearly the highest rate the world economy has ever achieved.
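
To make that aggregation concrete, here is a minimal sketch of the arithmetic, assuming purely hypothetical numbers (the sector shares and per-sector improvement rates below are illustrative assumptions, not measurements):

    # Hypothetical illustration: how sector-level efficiency gains aggregate
    # into an economy-wide growth rate. The shares and rates are assumptions
    # chosen for the example, not real-world measurements.
    sectors = {
        # name: (share of economy, annual productivity gain)
        "technology":      (0.10, 0.30),  # small slice, improving fast
        "everything_else": (0.90, 0.02),  # large slice, improving slowly
    }

    def aggregate_growth(sectors):
        """Weighted average of per-sector growth rates."""
        return sum(share * rate for share, rate in sectors.values())

    def compound(rate, years):
        """Total growth factor after compounding a rate for the given years."""
        return (1 + rate) ** years

    annual = aggregate_growth(sectors)
    print(f"aggregate annual growth: {annual:.1%}")                    # 4.8%
    print(f"total growth over 20 years: {compound(annual, 20):.2f}x")  # about 2.55x

The toy numbers only show the shape of the claim: a small, fast-improving slice of the economy plus a large, slowly improving remainder yields mid-single-digit aggregate growth, and that rate compounds into a very large difference over a couple of decades.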

Sure, tech’s big stories of 2013 were mostly bad. Wearing Google Glass, it turns out, makes a person look like a gigantic fucking douchebag. I don’t think that such a fact damns an entire year, though. Isn’t technology supposed to embrace risk and failure? Good-faith failure is a sign of a good thing– experimentation. (I’m still disgusted by all the bad-faith failure out there, but that should surprise no one.) The good-faith failures that occur are signs of a process that works. What about the bad-faith ones? Let’s just hope they will inspire people to fix a few problems, or just one big problem: our leadership.

On that, late 2013 seems to have been the critical point at which we, as a technical community, lost faith in the leaders of our ecosystem: the venture capitalists and corporate executives who’ve claimed for decades to be an antidote to the centuries-old tension between capitalist and laborer, and who’ve proven no better (and, in so many ways, worse) than the industrialists of yore. Silicon Valley exceptionalism is disappearing as an intellectually defensible position. The Silicon Valley secessionists and Sand Hill Road neofeudalists no longer look like visionaries to us; they look like sad, out-of-touch, privileged men abusing a temporary relevance, and losing it quickly through horrendous public behavior. The sad truth is that this will hurt the rest of us– those who are still coming up in technology– far more than it hurts them.

This loss of faith in our gods is, however, a good thing in the long run. Individually, none of us among the top 50,000 or so technologists in the U.S. has substantial power. If one of us objects to the state of things, there are 49,999 others who can replace us. As a group, though, we set the patterns. Who made Silicon Valley? We did, forty years ago, when it was a place where no one else wanted to live. We make and, when we lose faith, we unmake.

Progress is incremental and often silent. The people who do most of the work do least of the shouting. The celebrity culture that grows up around “tech” whenever there is a bubble has, in truth, little to do with whether our society can meet the technology challenges that the 2010s, ’20s, and onward will throw at us.

None of this nonsense will matter, ten years from now. Evan Spiegel, Sean Parker, Greg Gopman, and Adria Richards are names we will not have cause to remember by December 26, 2023. The current crop of VC-funded cool kids will be a bunch of sad, middle-aged wankers drinking to remember their short bursts of relevance. But the people who’ve spent the ten years between now and then continually building will, most likely, be better off then than now. Incremental progress. Hard work. True experimentation and innovation. That’s how technology is supposed to work. Very little of this will be covered by whatever succeeds TechCrunch.

Everything that happened in technology in 2013 (and much of it was distasteful) is just part of a longer-running process. It was not a wasted year. It was a hard one for morale, but sometimes a morale problem is necessary to make things better. Perhaps we will wake up to the uselessness of the corporatists who have declared themselves our leaders, and do something about that problem.

So, I say, as always: long live technology.

VC-istan 7: solving the wrong problem

I’ve written at length about VC-istan, its poor performance and its bigotries. What, however, is VC-istan’s “original sin”? Why is it so dysfunctional? Is there a foundational reason for its pattern of across-the-board moral and financial failure? I think the answer is obviously, “yes”. There’s a simple root cause: it’s solving the wrong problem. This requires two investigations: what problem should venture capital be solving, and what is it actually doing?

What’s the purpose of venture capital?

This is an easy one. The purpose of venture capital is to finance endeavors that require substantial backing in an all-or-nothing transaction. A biotechnology firm that requires $100 million to develop, and put into clinical trial, a new drug or device would be one example of this. With $10 million, it produces nothing salable; with ten times that, it has a chance. Others exist around infrastructure and in more deeply industrial pursuits like clean energy. Venture capitalists do invest in these spaces, and that’s outside of what I’d call “VC-istan”. Not everything that venture capitalists do is ugly, of course, and not all of it is VC-istan.

Venture capital, in a way, was originally intended as the “capital of last resort” for high-risk, capital-intensive businesses that would never qualify for more traditional financing. Why? Because when the proper way to invest is all-or-nothing, that has (unavoidable) negative consequences for all sides. It means that most people won’t get funded, that competition for capital will be fierce, and that dilution of founder equity will be severe. It’s not ideal, but if all you have is an idea, your product is 3 years away from the market in the best-case scenario, and you’re asking for $125 million to get started, those are the terms you have to take. It is, of course, quite a noisy process. The best ideas might not get funded, because there is literally no one able to assess what the best ideas are.

Venture capital for biotechnology and infrastructure has its own rules and culture. I’m not an expert on that, but it’s not what I consider “VC-istan”. From my perspective, which may be limited, venture capitalists in that space are trying to act in good faith and invest in viable businesses. To be blunt, I don’t think the “cool kids” nonsense (see: TechCrunch) matters so much in those sectors, because the science has to be sound. If you’re trying to turn algae into diesel fuel, Mike Arrington’s half-baked opinion of you matters a billion times less than the chemistry inside your lab.

What problem is VC-istan solving?

VC-istan is a subset of “all venture capital”, focused on the “hip” stuff that can be likened to “reality TV”. To explain this analogy, ask this question: why are “reality TV” shows so prolific? It’s not about their quality. Cheap production is often cited, but it’s not just about the numbers in the accounting ledger. The reality-show formula is one that admits perfect commoditization. Writers and actors, at high levels of talent, resist commoditization. They won’t work on shows that are bad for their careers, and they have agents whose full-time job is to represent their interests. This makes them non-fungible. At the highest level of talent, labor may be able to push back against commoditization of itself, because there are few enough people at the highest levels to make the market discrete rather than continuous– or, in other words, illiquid. Reality TV does away with those “prima donna” creatives and celebrities: the writing demands are minimal and can be fulfilled with a mediocre staff, and the actors are nonentities. This enables the production studio to iterate quickly with half-baked concepts without needing to concern itself with the career needs of the parties involved.

VC-istan loves social media, and consumer-web marketing experiments, which are like reality television in that they can be produced with mediocre, “commodity-grade” inputs. To launch a biotech firm, one actually needs to have a strong grounding in science. Assessing founders for scientific literacy is hard, and private equity people are rarely up to the task. But any idiot can come up with, and hire someone good enough to implement, a Snapchat or a Clinkle. In the soft space of marketing experiments using technology, as opposed to the much harder sector that is technology proper, commodity founders and engineers suffice, and because the space is a gigantic bike shed, every investor feels entitled to have strong opinions. If genuine technical talent is needed for “scaling” down the road, it can be hired once the company has been covered by TechCrunch and appears legitimate.

Ultimately, the purpose of VC-istan’s “tech” companies is not to innovate or to solve hard problems. It’s to flip teams that have been validated by three to six rounds of venture funding, and possibly by success in the market (but that’s optional). Occasionally there’s an IPO, but those are about as common as big-company spinoffs. More typical is the “acqui-hire”, whose purpose can only be understood in the broader context of corporate dysfunction.

M&A has replaced R&D

A company’s need for top talent tends to be intermittent or subtle, and most often both. An example of the first (intermittent need) is a short-term crisis that only a small percentage of people have the insight, creativity, experience, or work ethic to surmount. The second pertains to the long-term existential need for innovation; if the company doesn’t have some engine that produces an occasional positive-impact black swan, it will be torn to shreds by the bad kind: crises that no amount of talent or effort can resolve. While every company pays lip service to its need for top talent, the truth is that most companies don’t need top talent for their day-to-day operations. If they did, that would be irresponsible design: a dependency on something that is somewhere between a highly volatile commodity and an outright non-commodity. The need for top talent is, instead, a long-term issue.

Top talent is difficult to truly employ; one merely sponsors it. Old-style corporations understood that and invested in R&D. When the rare crisis that was truly existential would emerge, talent could be borrowed from the R&D pool. Additionally, while R&D could focus on basic research that was of general benefit to the world, and not necessarily in the firm’s immediate, parochial interests, the proximity the corporation enjoyed to that research gave it enough of an edge in practical innovation to pay for itself several times over.

Unfortunately, basic research was one of the first casualties of the private equity invasion that began in the 1980s. The old R&D labs that built C, Unix, Smalltalk and the internet weren’t scrapped outright, but they were reduced to a fraction of their former size and forced into a next-quarter focus. Conditions weren’t actually made so bad as to flush existing talent out, but positions became scarce enough that new talent couldn’t get in. The executives of those companies weren’t all short-sighted idiots, though. They knew that the high-autonomy, R&D-oriented work was the only thing keeping top talent in place. With corporate R&D near obliteration, that was threatened. So they knew they needed a solution to that talent-intake problem. What did private iniquity propose as a solution? More private equity.

Enter venture capital, formerly a subsector of private equity that was generally avoided by those with other career options, due to its infuriatingly intermittent performance. What would it mean, however, if venture capital could be made less “venture”, by filling a need created by the disintegration of another part of the economy? Companies shutting down the blue-sky, high-autonomy R&D work had to get talent somehow. Explicitly paying for it proved to be too expensive, except in investment banking, due to hedonic adaptation– people who are performing at a high level, if their needs for autonomy are not met, require 25-50% per year raises to be content. Tapping high-talent people for managerial ranks proved fruitless as well, because many of these people (while exceptional as individual contributors) had neither the desire nor the ability to manage (and, additionally, middle-management positions were also cut during the private equity invasion). The remaining solution to the talent problem became one that private equity men found extremely attractive, given the premium they collect on deals– companies must buy it.

I don’t intend to insult the low-level employees of the Googles and Yahoos of the world by saying that those companies have “no talent” at the bottom. That’s clearly untrue. Companies don’t acqui-hire (which is far more expensive than internal promotion) because they have no top talent in their ranks. They have plenty, but they acqui-hire because they have lost the ability to discover what they have. It’s a malfunction of the middle-management layer. These companies are like hoarders that buy new coats every winter not for a lack of coats, but because their houses are so out of order that a new purchase is preferable to sorting the old place out.

Moreover, a company cannot, in general, adequately commoditize its highest levels of talent. The best will always seek their own career goals foremost, and perform at their highest only when there is coherency between their long-term personal goals and the work assigned to them. There are also, to put it bluntly, not enough such people to merit any explicit managerial correction to this problem. An executive focused on the career-coherency issues coming out of the most talented 5% is ignoring the day-to-day work completed by the other 95%. Two (problematic) solutions end up emerging. The first is for the company to ignore the high-talent problem and treat its top 5% like everyone else: closed allocation, low autonomy, etc. Then it loses them, plain and simple, and becomes dysfunctional after a few years of brain drain. The second is to leave them alone and effectively let them work on whatever they want. That’s great, in the short term, but it can be politically messy: others (who may belong in the top 5%, but haven’t been recognized) may resent them for their higher level of autonomy, or that set of people may lose sight of their need to continually market themselves and justify their favorable conditions, and then be crushed (not for a lack of value to the organization, but for a failure to market it) when there is a management or market change.

So what is the “problem” that VC-istan exists to solve? It’s there to commoditize top talent. Although a specific company cannot commoditize its top 5%, the conceit is that an army of dedicated specialists– a mix of investors, corporate biz-dev executives, and “tech press”– can do so. In the consumer web space, venture capitalists have become a sort of high-end headhunter, but one that follows different rules.

One major difference between the old corporate ladder and the acqui-hire system: employers are not allowed to discriminate explicitly on age, pregnancy status, health issues, race, or gender. Investors can. Middle managers are too busy to conduct invasive “back channel” reference checks that, in truth, constitute civil harassment and would admit blacklisting charges if they ever interfered with employment (thus, risk-averse companies prefer not to do so). Investors can do so, and in such a way as to work through people who will keep their secrets (preventing lawsuits). This is a wet dream of the new right wing, an Uberization of executive hiring. The old system, with decades of regulation thrown into it because those rules were actually necessary, has been supplanted by a premium, rule-breaking, and vicious new one. The people who need the regulations imposed by the old system (i.e. women, minorities, people with health problems, people over 40, people with kids) are simply judged unfit to compete.

Here’s a question: how well is VC-istan actually doing, on its own terms? First, what does it mean to “commoditize top talent”? While that sounds like something I might be against, I can’t actually say it’s a bad thing– not even for top-talent people. When something is commoditized, a fair price (that may fluctuate, but is fair relative to published market conditions) is established and it’s very easy to buy or sell it near that price. Currently, the compensation for top (2.0-level) engineering talent swings between about $75,000 and $10+ million per year– there is enormous uncertainty about what it is worth– with a median around $150,000. If that level of talent were adequately and fairly commoditized, that range would be more like $300,000 to $500,000– which would give most of them a hefty pay bump. The truth about the commoditization of labor is that labor generally finds it unobjectionable when the terms are fair. In fact, one effect of labor unions is to explicitly commoditize labor while attempting to ensure fairness (while professions, in general, oppose the commoditization regardless of terms). The murky issue in technology is that “top talent” is very hard to detect, because the people with the requisite skill have better things to do. Those who can, do; those who can’t, evaluate others’ work.

VC-istan, then, is built on the record-company model. Founders and engineers are treated as commodities (and generally, for reasons I won’t get into here, don’t get fair terms) but there is a hope that, thanks to the law of large numbers, top talent will be detected and validated by the outside market.

Where VC-istan went wrong is that it never figured out what top talent might look like, so the resources were thrown behind those who were either best at self-promotion or (increasingly, over time) those who could pull inherited connections. As a mechanism for detecting the rising generation’s top marketing talent, it might not be doing so bad. For picking out the best technical talent, especially as pertains to long-term R&D, it’s worse than abysmal. It’s doubtful that it’s picking up any signal at all. Companies that have a genuine need for R&D talent will be poorly served if they source it through acqui-hires.

VC-istan exists to commoditize top talent, but it has also erected a feudalistic reputation economy in which investors hold the cards. Founders hold few, and engineers hold none. This new economy has rendered the highest levels of technical talent effectively irrelevant, depriving that talent of any leverage whatsoever. So the terms are made bad– so bad that top engineering talent is rarely delivered. Whether this will strip VC-istan of credibility in the long run remains to be seen.

The point I’ve made here is that it’s “solving” an ugly problem in a bad way.

What can venture capital do for technology?

Venture capital’s purpose is to build companies that, if successful, will become massive corporate behemoths. On a fundamental level, it’s stuck in the 20th-century mentality where a gigantic organization is the only acceptable prize for winning. Startup life is sold (by founders, and rarely by investors directly) to talented, usually clueless, engineers as an antidote to the ills of “MegaCorp” when, in truth, the explicit purpose of the VC-funded startup is to become exactly that: a new MegaCorp, but usually with crappier health benefits, longer hours, and faster firing.

What the best engineers actually tend to want is high autonomy so they can deliver exceptional work. They’d prefer ownership over it, all else being equal, but as long as they’re fairly compensated, they’re generally happy whether they work for a 20,000-person company or for themselves. When corporate R&D was sold for parts, venture-funded startups were proposed as the solution, the new way forward. Don’t like what happened to your old job? Create a new job for yourself! The lie here is that founding a VC-funded company provides the autonomy associated with true ownership. In truth, venture capitalists become full owners (de facto, if not de jure, due to the power afforded them by VC’s feudal reputation economy) of the company even when they hold a minority stake. Working for VCs is not fundamentally better than working for a boss; in many ways, it’s worse because the social distance is greater. Most bosses don’t consider themselves inherently superior based on favorable birth; many venture capitalists do.

There are several critical misses that have become evident as venture capital has attempted to replace more traditional venues for innovation. One is that it has proven not to be a valid replacement for internal R&D. Nothing that VC-istan has coughed up is anywhere near the order of magnitude of Bell Labs or Microsoft Research. The second is that it has failed to be an engine of small-business generation, which is necessary for economic growth. It hasn’t connected top talent with the autonomy that comes from ownership. Rather, it has abandoned top talent in the pursuit of commodity startups run by commodity founders and commodity engineers. Over time, one might expect top talent to abandon it in turn. That trend seems to be emerging, but I have no idea when or how (or, at this stage, even if) it will mature.

There is, additionally, a fundamental technical flaw in VC-istan, and I’ll focus on it because it might point us toward a solution. If we consider the risk/reward profile of businesses, we see an underserved middle of the spectrum. Low-risk businesses can take bank loans, but those require personal liability, so it’s not wise to use them for anything that might actually fail. High-risk gambits with above-90% chances of failure, but that are capable of returning 20-50x on success, are what VCs love. The mid-risk/mid-growth space– targeting 15 to 50% annual growth, with a low but nonzero chance of business failure– is inappropriate for bank loans (too risky) but unpalatable to venture capitalists (not risky enough). Unfortunately, I don’t see an easy fix for that. Venture capital could become very profitable by funding the 15-50% range, but investment decisions aren’t driven by profits so much as by the career needs of the investors. Returning a steady profit (say, 25% per year, with a bit of variance) by investing in a number of solid but moderately-sized businesses is not career-making; having been in on Facebook (even as a minor and late investor) is. The name-dropping world of Sand Hill Road cannot be expected to change, and if it does not, the focus will be less on building quality businesses and more on taking insane risks in the hope of hitting a career-making blockbuster.
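
To see why the neglected middle can nonetheless be attractive in expectation, here is a minimal sketch comparing the two styles of bet. Every probability, multiple, and growth rate in it is a hypothetical assumption chosen for illustration, not a figure drawn from the argument above or from any real fund:

    # Hypothetical comparison of two investing styles over the same horizon.
    # All probabilities, multiples, and growth rates are illustrative
    # assumptions, not figures about any real fund or company.
    HORIZON_YEARS = 7

    def expected_multiple_blockbuster(p_success=0.10, win_multiple=30.0):
        """Expected payoff of a blockbuster-or-bust bet (losers return zero)."""
        return p_success * win_multiple

    def expected_multiple_midrisk(annual_growth=0.25, p_failure=0.15,
                                  years=HORIZON_YEARS):
        """Expected payoff of steady mid-range growth with a small failure risk."""
        return (1 - p_failure) * (1 + annual_growth) ** years

    print(f"blockbuster-style bet: {expected_multiple_blockbuster():.2f}x expected")  # 3.00x
    print(f"mid-risk/mid-growth bet: {expected_multiple_midrisk():.2f}x expected")    # about 4.05x

On these made-up numbers, the steady 25%-a-year business has the higher expected return over seven years, yet only the blockbuster is career-making for the individual investor– which is exactly the incentive problem described above.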

This is problematic because the mid-growth/mid-risk space is exactly where true technologists live. They do not become machine learning experts or compiler gurus in an overnight episode of “virality”, and whether Mike Arrington or Paul Graham owes them a favor is irrelevant to whether they can actually code. They get good (and, if possible, rich) slowly. In terms of abstract value-added capacity, 15 to 50% per year seems to be about the natural rate (although most engineers would be thrilled to have salary increases at even half that rate). Technologists are extremely good at delivering these 20 and 40 percent per year improvements. What lies outside their interest (and, usually, their ability) is engineering the social conditions that admit 100x “viral” growth (or, far more often, abysmal failure). It’s just not where they live; they weren’t born in casinos.

The future

VC-istan is not about to die, any more than recording labels have ceased to exist. As a method of shaving 15 years off a rich kid’s corporate-ladder climb via “acqui-hire”, it will persist. As a machine that produces commodity startups run by commodity entrepreneurs, it will persist and probably be profitable for quite some time. As a way of enabling companies to discriminate on age, health, pregnancy status, and other illegal factors at upper levels (filled through acqui-hires, while rendering internal promotion rare) while keeping the discrimination off their books, it will hold that niche quite well. How relevant will VC-istan remain to true top talent? On that one, VC-istan’s lifespan may be limited. In that territory, it’s “ripe for disruption”.

So what shall be built to bring the disruption?

An alternate theory of shark-jumping

I’ve watched a fair amount of TV in my life, seen quite a few movies, and read a large number of books. A common theme in creative endeavor is “jumping the shark”: the decline in creative quality that occurs when a series (of TV seasons, or of sequential movies) seems to run out of creative steam and begins grasping desperately at new ideas– often ideas that are not necessarily bad, but completely incoherent with the flavor of the series– as it tries to stay relevant. I’m going to address shark-jumping: why it happens, and whether there is a way to prevent it.

In the abstract

There are a number of reasons why a series might decline in quality with age, a phenomenon most prominently seen in TV series with undefined length. Why does it happen? The most common explanation given is that the show’s originators “run out of ideas”, as if there were a finite supply of them that each person gets for one lifetime. I don’t think this is adequate, for two reasons. The first is that not all creative people “jump”. Some novelists run out of ideas and peak early; others keep getting better into old age. It doesn’t seem to be that common for a person to actually “run out of ideas”; some creative people become complacent once they’re lifted into upper-middle-class social acceptance (which is hard to attain for a creative person!) but that’s a change of context rather than a natural decline, and it doesn’t happen to everyone. The second is that it’s not a sufficient explanation, in light of the first point. Specific creative people can remain fresh for 15 years, no problem. But almost no fictional TV series, no matter how skilled its people, can stay fresh for that long. Most don’t keep quality for a third of that time.

In fact, the more people and money involved in a creative production, the faster the shark-jumping process– which is the opposite of what you’d expect if it were merely a problem of people running out of ideas. Novelists can stay fresh for a lifetime, while TV series tend to jump the shark after 3-6 years on average. Movies tend to jump even more quickly than that– in the first sequel, except in planned series (e.g. those that were designed to be trilogies from the outset). Magic: the Gathering (which requires a large design team) jumped, in terms of thematic quality, when I was half my current age, but Richard Garfield’s new games are still good.

This suggests strongly that shark-jumping is about teams, not individuals. That makes sense. The “idea person” might remain brilliant, but if her team is full of hacks, she’ll be inclined to stick to the tried-and-true. That’s one pattern of shark-jumping, but probably not the most common. Equally or more common is the taking of more risks, but with the new creativity feeling gimmicky and forced. When The Office jumped, it began taking more risks, but was incoherent and haphazard in doing so. When House jumped, the characters’ personal lives became more unusual. Whether more risks or fewer risks are taken, a decline in quality happens either way.

If shark-jumping is about teams, then why not fire the old team and start with an entirely fresh set of people? Most often, that will only make things worse. Even if the people on the new team are paid four times as well, and even if they’re individually quite creative, I maintain that their output will (on average) be worse than if the old team had stayed (in which case decline would still occur). As a TV or movie series matures, the set of constraints laid down upon future creativity increases. That isn’t always bad. More rigid poetic forms like the sonnet (as opposed to free verse) often encourage creativity because the poet has to spend a lot more time thinking about words, and how they sound and flow together, than in typical prose. The same, I think, goes with serial creative work. The increasing constraint load, for some time, actually improves the product. In TV, Season 2 is typically better than Season 1. There is a point, however, when those constraints become a burden. Reasonable avenues of exploration become fewer as the story moves along. That’s not unnatural. In drama, we see that in the tragic arc: eventually, the protagonist reaches a point where the only remaining option is to surrender to the forces that have multiplied against him; the mortal dies, the gods win. In a television series intent on prolonging its life, however, this results in increasingly ridiculous ploys to get the main characters out of whatever final state– whether a positive one like marriage for a lothario, or a negative one like imprisonment or terminal illness– they’ve arrived at. This should also explain why series designed with a finite life in mind (such as Breaking Bad) rarely jump the shark. They’re programmed to end before that would happen.

As much as shark-jumping is about the increasing constraint load and the inverted-U shape of its effect on creative output, it’s also about people. Would the same calibre of people sign up to work on Inception II as worked on the original? I doubt it. It’d be possible to get good people, yes, but the best people would prefer to work on something more original than a sequel. You’d get more people who are there to burnish their resumes and fewer who are there to do the best creative work of their lives. Mature brands tend to draw people in with careerist rather than creative impulses: ability to lead a large group, attach one’s name to a known entity, etc. The average credibility (in terms of on-paper accomplishment and social status) goes up as the brand matures, and this might also improve the mean output, but it reduces variance. Thus, peak creative output is almost always lower in the brand’s later phases.

Therefore, a “fire the old team” strategy is likely to accelerate the shark-jumping problem, which is about the type of team that a series will attract more than about the individuals themselves. The old-timers who had the vision are gone, and they’ve been replaced by people who are on the job for careerist reasons. In general, I’d say there’s nothing wrong with this– most people take most jobs for careerist reasons– but it’s not conducive to the highest levels of creative output. If there are still a couple of clever ways, for a series, out of no-credible-options-left shark-jump territory, a fresh team of mercenaries is not likely to find them. They’re likely to barge through walls, strain credibility, and make shark-jumping palpable in the final product.

It’s not that people “run out of ideas”. They don’t. Teams, however, lose members and gain new ones constantly. That’s inevitable. And if there’s one thing I’ll say confidently about creative people as an entire set, it’s that we’re intermittent. Something like a TV series requiring 600 minutes of show time (using the industry-standard 100:1 multiplier, that’s 1000 hours of production time) requires a creative team, because even the best individuals can’t hit every note right over that duration without some help. So, at least in television, even the best of visionary creators needs the support of (and challenges from) a strong team to keep going. And no matter what, that team will evolve in a direction that’s likely to be sharkward. The new team might be paid four times as much as the old one but, by Season 7, almost no one’s focus is on the work in front of them. Rather, they’re more interested in Season 1 of their next project, where they’ll have more input and opportunity to shine. This career incoherency (disparity between what’s good for their jobs vs. their careers) doesn’t actually cause them to “run out of ideas”. More often, it’s the reverse. They (probably subconsciously, for the most part) take risks that may confer personal career benefits, but that go against the grain of what the series is really about.

In software

That this applies to software, also, should not surprise anyone. Like a television series, software is designed with an indefinite lifespan in mind. There is one difference: software doesn’t always jump the shark. In the open-source world, it’s far less likely to do so. However, I think that commercial software obeys the same principle of shark gravity. After a certain point, a corporate software module will be in maintenance mode and will struggle to attract a high calibre of people.

There are people who will hack the Linux Kernel or Postgres or Clojure because they use those products and care about them deeply. Open-source software is, in truth, a brilliant solution to the career-coherency problem: people can benefit their careers and add value to the world. Such software can jump the shark, but I don’t think it’s guaranteed to do so, and the best software products seem never to jump. There are career benefits to maintaining a respected open-source product, and the fact that the maintainer is also typically a user (and, therefore, aware of existing context) prevents the bad creative risks for which post-shark teams are known.

In-house or commercial software, on the other hand, seems always to jump. Within most companies, the career payoff of maintenance work is almost invariably inferior to that of new invention. Open-source software solves the career-coherency problem, but internal products almost never become respected enough for that to happen. Software’s shark-jumping dynamic is, in many ways, actually more severe than that of a TV series. In television, the people who join a mature series aren’t necessarily less qualified or worse at their jobs– they have different objectives that are less conducive to doing their best work, but they’re not across-the-board less qualified people. In closed-allocation software companies, however, maintenance usually becomes the ghetto for people who can’t fight their way to something meatier, and very few people who are any good will stay with it for very long.

Rarely, if ever, is a closed-allocation software company able to solve this problem. When the company recognizes that a legacy module is important, it will commit resources to its upkeep, usually in two ways: by increasing headcount, and by increasing the salaries of the people doing the work.

The first tends to attract less capable people, for two reasons. The incompetent like larger teams because they can “hide within the herd”, and this applies to middle managers as well as front-line workers. Mediocre managers also prefer large teams because it inflates their headcount statistics; they’re more likely to make Director if they can say they had ten reports than if they had three. Good managers generally want to lead teams of high average competence and achieve something tangible; mediocre and bad managers usually want to lead large teams (with minimal concern over whether they get good reports or bad) to improve their stats. So the first solution fails to have the desired effect.

What about the second, increasing the pay of the maintenance staff? That rarely works, either. The truth is that a savvy, capable software engineer can’t be motivated to do career-incoherent work with a one-time 20 percent– or even 50 percent– bonus. The opportunity cost for her (in not doing work that will advance her career) is too great. She might be appeased with a permanent 30% salary bump, for a year or two, but then that will become “the new normal” for her compensation and she’ll need another bump. But HR is not about to let the salary of a lowly “Software Engineer III” go that far out of band, and promoting her (to compensate for unpleasant work, regardless of whether she meets the typical criteria for promotion) will often annoy engineers who will (accurately) perceive the promotion as “political”. Even if engineers abstractly agree that undesirable work deserves a reward, they’ll usually oppose a promotion (especially if it is over them) that appears to be given for doing grunt work that is (and because it is) unpleasant rather than technically impressive. So that’s untenable, too.

How, then, does the typical closed-allocation software company handle the maintenance problem? The rewards generally all go to the “heroic” middle manager (who usually takes the project on for a second or third chance, after failing at new invention) for “rescuing” the ailing legacy module. In the large closed-allocation software titans, these awards (again, to the managers of the maintenance projects, and rarely to the teams) can reach six or seven figures. The peons get nothing; they’re just doing their jobs and, in the typical closed-allocation hellhole, their managers can easily prevent them from having other options.

That the above doesn’t work, at all, shouldn’t surprise anyone. I’ve already said a lot about that topic here and here, so I won’t lengthen this particular essay by reviewing it.

In sum, shark-jumping (whether in television or in software) occurs for two reasons, neither of which requires an individual to “run out of ideas” (we know that that doesn’t always happen). The first pertains to the constraints imposed by the project’s history. At first, constraint is conducive to superior creativity– that’s why most poets are better in rigid forms than in free verse– but eventually the accumulated complexity exhausts the high-quality options. The second, and probably more inexorable, factor is the change in team dynamics. As a brand matures or a software module goes into maintenance mode, the evolution in the motivational profile (that is, why the team is there) is enough to bring on the shark.

What is the solution? For television, the best answer seems to be to let the narrative arc tend toward its natural close, and not to prolong the life of the series senselessly. Breaking Bad did that and never jumped; with another season, it probably wouldn’t have been as good. Software doesn’t have that option, because it’s infrastructural by design: it should mature to a point where it “just works” from the perspective of 99+ percent of those who interact with it, but someone will still have to maintain it. In software, the only incentive system that seems to work– i.e. the only one that can solve the otherwise-crippling career-coherency issues of maintenance work– is the open-source economy.