What Ayn Rand got right and wrong

Ayn Rand is a polarizing figure, and it should be pretty clear that I'm not her biggest fan. I find her views on gender repulsive and her metaphysics laughable. I tend to be on the economic left; she heads to the far right. She and I have one crucial thing in common– extreme political passions rooted in emotionally damaging battles with militant mediocrity– but our conclusions are very different. Her nemesis was authoritarian leftism; mine is corporate capitalism. Of course, an evolved mind in 2013 will recognize that, while both of these forces are evil, there isn't an either/or dichotomy between them. We don't need authoritarian leftism or corporate capitalism, and both deserve to be rejected out of hand.

What did Rand get right?

As much as I dislike Ayn Rand’s worldview, it’s hard to say that it isn’t a charismatic one, which explains her legions of acolytes. There are a few things she got right, and in a way that few people had the courage to espouse. Namely, she depicted authoritarianism as a process through which the weak (which she likened to vermin) gang up on, and destroy, the strong. She understood the fundamental human problem of her (and our) time: militant mediocrity.

Parasitism, in my view, isn’t such a bad thing. (I probably disagree with Rand on that.) After all, each of us spends nine months as a literal biological parasite. I am actually perfectly fine with much of humanity persisting in a “parasitic” lifestyle wherein they receive more sustenance from society than they would earn on the market. I’m fine with that. It’s a small cost to society, and the long-term benefits (especially including the ability for some people to escape parasitism and become productive) outweigh it. What angers me is when the parasites on the opposite end (the high one) of the socioeconomic spectrum behave as if their fortune and social connections entitle them to tell their intellectual superiors (most viscerally, when that intellectual superior is me) what to do.

Rand's view was harsh and far from democratic. She conceived of humanity as consisting of a small set of "people of the mind" and a much larger set of parasitic mediocrities. In her mind, there was no distinction between (a) average people, who stand out neither in accomplishment nor in militancy, and (b) the aggressive, anti-intellectual, and authoritarian true parasites against which society must continually defend itself. That was strike one: it just seemed bitchy and mean-spirited to decry the majority of humanity as worthless. (I can't stand with her on that, either. We're all mediocre most of the time; it's militant mediocrity that's our adversary.) Yet most good ideas seem radical when first voiced, and their proponents are invariably attacked first for their tone and attitude rather than their substance, a dynamic that means "bitchiness" is often positively correlated with quality of ideas. I think much of why Rand's philosophy caught on is that it was so socially unacceptable in the era of the American Middle Class; and intellectuals understand all too well that great ideas often begin as rejected ones.

To understand Ayn Rand further, keep in mind the context of the time during which she rose to fame: the American post-war period. Even the good kinds of greed were socially unacceptable. So a lot of people found her "new elitism" (which was a dressing-up of the old kind) to be refreshing and– in a world that tried to make reality look very different from what it was (see: 1950s television)– honest. By 1980, there was a strong current of opinion that inclusive capitalism and corporate paternalism had failed, and elitism became sexy again.

Where was the value in this very ugly (but charismatic) philosophy? I’d say that there are a few things Ayn Rand got completely right, as proven by experience at the forefront of software technology:

  1. Most progress comes from a small set of people. Pareto's "80/20" is far too generous. It's more like 80/3. In programming, we call this the "10x" effect, because good programmers are 10 times as effective as average ones (and the top software engineers are 10 times as effective as the merely-good ones like me). Speaking to the specific case of software, it's pretty clear that 10x is not driven by talent alone. That's a factor, but a small one. More relevant are work ethic, experience, project/person fit, and team synergies. There isn't a "10x programmer" gene out there; a number of things come into play. It's not always the same people who are "10x-ers", and this "10x" superiority is far from intrinsic to the person, having as much to do with circumstance. That said, at the forefront there are 10x differences in effectiveness all over the place.
  2. Humanity is plagued by authoritarian mediocrity. If you excel, you become a target. It is not true that the entire rest of humanity will despise you for being exceptionally intelligent, creative, industrious, or effective. In fact, many people will support you. However, there are some (especially in positions of power, who must maintain them) who harbor jealous hatred, and they tend to focus on a small number of people. In authoritarian leftism, they attack those who have economic success. In corporate capitalism, they attack their intellectual superiors.
  3. Social consensus is often driven by the mediocre. The excellent have a tendency to do first and sell later. Left to their own devices, they’d rather build something great and seek forgiveness than try to get permission, which will never come if sought at the front door. The mediocre, on the other hand, generate no new ideas and therefore have never felt that irresistible desire to take that kind of social risk. They quickly learn a different set of skills: how to figure out who’s influential and who’s ignored, what the influential people want, and how to make their own self-serving conceptions (which are never far-fetched, being only designed to advance the proponent, because there is otherwise no idea in them) seem like the objective common consensus.

A bit of context

Ayn Rand’s view of authoritarian leftism was spot-on. Much of that movement’s brutality was rooted in a jealous hatred that we know as militant mediocrity. Its failure to become anything like true communism (or even successful leftism) proved this. Militant mediocrity is blindly leftist when poor and out-of-power and rabidly conservative when rich and established. Of course, in the Soviet case, it never became “rich” so much as it made everyone poor. This enabled it to keep a leftish veneer even as it became reactionary.

Rand's experiences with toxic leftism were so damaging that when she came to the United States, she continued to advance her philosophy of extreme egoism. This dovetailed with the story of the American social elite. Circa 1960, they felt themselves to be a humiliated set of people. Before 1930, they lived in elaborate mansions and led opulent, sophisticated lifestyles. After the Great Depression, which they caused, they fell into fear and reservation; that is why, to this day, the "old money" rich prefer to live in houses not visible from the road. They remained quite wealthy but, socially, they retreated. They were no longer the darlings at the ball, because there was no ball. It wasn't until their grandchildren's generation came forward that they had the audacity to reassert themselves.

While this society's parasitic elite was in social exile, paternalistic, pay-it-forward capitalism ("Theory Y") replaced the old, meaner industrial elite, and the existing upper class found itself increasingly de-fanged as the social distance between it and the rising middle class shrank. It was around 1980 that they began to fight back with a force that society couldn't ignore. The failed, impractical Boomer revolutions of the late 1960s were met, about 10 to 15 years later, with a far more effective "yuppie" counterrevolution that won. Randism became its guiding philosophy. And, boy, did it prove to be wrong about many things.

What did Rand get wrong?

Ayn Rand died in 1982, before she was able to see any of her ideas in implementation. Her vision was of the individual capitalist as heroic and excellent. What we got, instead, were these guys.

Ayn Rand interpreted capitalism through a nostalgic view of industrial capitalism, when it was already well into its decline. The alpha-male she imagined running a large industrial operation no longer existed; the frontier had closed, and the easy wins available to risk-seeking but rational egoists (as opposed to social-climbing bureaucrats) had already been taken. The world was in full transition to corporate capitalism, which has been taking on an increasingly collectivist character for the past forty years.

Corporatism turns out to combine the worst of capitalism and socialism. Transportation, in 2013, is a perfect microcosm of this. Ticket prices are volatile and fare-setting strategies are clearly exploitative– the worst of capitalism– while the service rendered is of the quality you might expect from a disengaged socialist bureaucracy; flying an airplane today is certainly not the experience one would get from a triumphant capitalistic enterprise.

Suburbia also has a “worst of both worlds” flavor, but of a more vicious nature, being more obvious in how it merges two formerly separate patterns of life to benefit one class of people and harm another. By the peak of U.S. suburbanization, almost everyone (rich and poor) lived in a suburb, and this was deemed the essence of middle-class life. Suburbia is well-understood as a combination of urban and rural life– an opportunity for people to hold high-paying urban jobs, but live in more spacious rural settings. What’s missed is that, for the rich, it combines the best of both lifestyles– it gives them social access, but protects them from urban life’s negatives; for the poor, it holds the worst of both– urban crime and violence, rural isolation.

This brings us directly to the true nature of corporate capitalism. It’s not really about “making money”. Old-style industrial capitalism was about the multiplication of resources (conveniently measured in dollar amounts). New-style corporate capitalism is about social relationships (many of those being overtly extortive) and “connections”. It’s about providing the best of two systems– capitalism and socialism– for a well-connected elite. They get the outsized profit opportunities (“performance” bonuses during favorable market trends that should more honestly be appreciated as luck) of capitalism, but the cushy assured favoritism and placement (acq-hires and “entrepreneur-in-residence” gigs) of socialism. Everyone else is stuck with the worst of both systems: a rigged and conformist corporate capitalism that will gladly punish them for failure, but that will retard their successes via its continual demands for social permission.

What ultimately proved fatal to Rand's ideology– and she did not live long enough to see it play out this way– is that the entrepreneurial alpha males she was so in love with (and who probably never existed, in the form she imagined) never came back. In the 1980s, the world was sold to emasculated, influence-peddling, social-climbing private-sector bureaucrats, not heroic industrialists. Whoops!

What we now have is a world that claims to be (and is) capitalistic, but is run by the sorts of parasitic, denial-focused, militantly mediocre position-holders that Rand railed against. This establishes her ideology as a failed one, and the elitism-is-cool-again “yuppie” counterrevolution of the 1980s has thus been shown to be just as impractical and vacuous as the 1960s “hippie” movement and the authoritarian leftism of the “Weathermen”. Unfortunately, it was a far more effective– and, thus, more damaging– one, and we’ll probably be spending the next 15 years cleaning up its messes.

One-month break from Hacker News.

I don’t have much use for TechCrunch– it’s symptomatic of the many things that are wrong with this weird advanced-marketing industry, hijacked by fired/disgraced finance guys who’ve reinvented themselves as “startup founders”, that we still call “tech”– but I read this article, posted today, about Hacker News. Here’s the piece that struck me:

One of Graham’s biggest pain points is the “schoolyard quarrels” he finds on the site on a daily basis, and wishes “users would stop misbehaving.” He cites the example of users organizing voting rings to purposefully vote up stories, which caused Graham to develop additional software to detect this. He adds that more users are trolling under newly created accounts, and are deliberately starting flame wars on the site.

“I wish I could get people to stop posting comments that are stupid or mean,” he says. “It takes only one or two negative comments and a discussion turns into a flame war.”

Graham adds that he gets a lot of vitriol from users personally with accusations of bias or censoring. He clarifies that he, and the other human editor, rarely take links down unless they are dupes. Even with tabloid or gossip stories that surface, Graham will not take them down. Users with high karma points tend to flag these stories, he adds, and they can then be taken down.

“Hacker News makes me sad a lot,” says Graham. “I wish the community would behave the way they did when it was a little village.”

I think I am, mostly, a good contributor to Hacker News, but there’s been a decline in the quality of my posts lately. Maybe I’m part of that problem. Perhaps it would be good for me to take a hiatus.

I have an anger problem. It’s made worse by the fact that most of the things that anger me genuinely deserve to be hated. That makes me right in opposing them. The software industry is in a fucked-up state and we (the technologists who should be running it, instead of the smooth-talking assholes who don’t love– or even understand the first thing about– technology, problem-solving, or code) ought to stop letting ourselves be a conquered people. All that is true. I am fighting a good fight. But do I need to fight it all the time? I’m not sure, and certainly I should not inject so much anger with such frequency into one of the best discussion forums currently on the Internet.

I’m taking a break. One month, and then I’ll decide what to do from there.

Corporate “Work” is the most elaborate victim-shaming structure ever devised.

There are a lot of things to hate about the institutional pogrom that the middle and working classes must suffer in the name of Work. It’s not, of course, the actual work (i.e. productive activity) that is so bad. That’s often the best part of it! At any rate, work demands at Work are pretty light. The work itself– when you’re lucky enough to actually get a real project– is the fun bit. It’s the private-sector social climbing and subordination and the pervasive and extreme narcissism of the unethical assholes who are in charge that makes it such hell. There’s a specific economic reason why it’s so horrible, and a simple enough one that I’ll be able to mention it on the way to my main topic (victim shaming). I’ll cover the economics first, and then progress to the sociological victim-shaming problems.

Why Work is starting to fail

In 2013, ignoring technological change is not an option. It affects everything. No one’s job will be the same in 20 years as it is now– and that’s a good thing. However, it is dangerous. Broadly speaking, there seem to be two schools of thought on the labor market’s predicted response to enhanced efficiency, global labor pools, and automation of work. They are labor finitism and labor progressivism.

Labor finitism is the idea that there's a fixed pool ("lump of labor") of work that society has decided it is willing to pay for. If labor finitism is accurate, then technological improvements only make the situation worse for the proles: they now have to compete harder for a shrinking pool of available work. If labor finitism is true, then trade protectionism and xenophobia become necessary. Unfortunately, labor finitism means that technological advancements will destroy the middle and working classes, as their jobs disappear forever and they are deprived of the resources that would enable them to compete for the dwindling supply of high-quality jobs.

Labor progressivism is the more utopian alternative, in which the enhanced capability brought by technology gives leverage to workers: rather than being automated out of jobs, they're able to do more interesting stuff with their time, add more value, and therefore be better off in all ways (higher quality of work, better compensation). Labor progressivism is the favored stance of Silicon Valley technologists, but unfortunately it doesn't accurately represent the reality faced by middle-class Americans.

Which of these two opposing stances is right? The actual state of society is somewhere between the two extremes. Labor finitism seems to be more true in the short term, while the long-term economic evolution has a progressivist feel; there seems to be eventual consistency in the system, but it takes a long time for things to get to an acceptable state, and people need to eat immediately. As Keynes said, in the long run we are all dead. If we can make the convergence to labor progressivism happen faster, we should.

Here's what I've observed. For pre-defined, subordinate wage/salaryman work, labor finitism is correct. The jobs that built the Western middle classes– complacent, entrepreneurially averse, inclined to overspend rather than plan for eventual freedom– are going away, and this is happening at a rapid pace, leaving a large number of people (who were effectively farmed by a savvier elite, but are now unneeded livestock) just screwed. However, for those who have the resources to own their lives rather than renting their existences from a boss, there's an effectively infinite (i.e. labor-progressivist) amount of useful work to be done: building businesses and apps, freelance travel writing, building skills and trying out radically different careers. Payoffs for such work are intermittent, and discovery costs are high– your first few attempts to "break out" will typically be money-losers– but those who have the resources to stomach the income volatility have access to a much higher-quality pool of work that is not going to be automated away in the next year.

So labor finitism and progressivism are both correct to some degree, the question of which is more in force depending on one’s present resources. Those who can stomach short-term income volatility live in a labor-progressivist world. For the 99%, however, labor finitism is more accurate.

So what is corporate Work?

Corporate Work is the labor-finitist ghetto left for "the 99%", those of us who haven't had the luck or resources to escape into the labor-progressivist stratosphere. It's a zero-sum world. If you get 5 times as good as you currently are at your job, then 4 people sitting near you lose their jobs. There's only a small amount of potential work that can be performed in the company's good graces, and that work-definition function is performed by a small set of incompetent high priests called "executives" whose informational surface area is too small for them to do it well, and who tend to use their social power and credibility for corrupt and extortive purposes rather than advancing the good of the company.

What inevitably comes out of this is that there's very little high-quality work available in the corporation. There are plenty more things that could be done and would be useful to it, but people who invested time in those would get into trouble for blowing off their assigned projects. So, while "reality" for a theoretically profit-maximizing company might be labor-progressivist (there's an unending stream of improvements the firm might make that would render it more profitable), the issue of executive sanction (i.e. you can only keep your job by working on the pre-defined, often uninspiring, stuff) creates a labor-finitist atmosphere in which most time is spent squabbling over the few high-quality projects that exist.

Indeed, this is the most painful thing about corporate Work. It’s a lifestyle based not on doing work, but on getting it. Excellence doesn’t matter. Only social access does. It’s all a bunch of degenerate social climbing that has nothing to do with excellence or addition of value. It’s a world run by con artists who steal the trust of powerful people; those who are busy actually trying to excel at things (i.e. actually working) never develop the social polish or credibility necessary to do that, so they end up being marginalized.

Paul Graham wrote about this in the essay "Why Nerds Are Unpopular". It's worth reading in its entirety: while Graham gets some of the finer points wrong (and I'll discuss that), he's extremely insightful and articulate overall.

Graham compared the stereotypically negative depiction of high school (a cruel society governed by arbitrary dominance hierarchies and all-consuming conformity, existing because there isn’t meaningful work for 17-year-olds to do) to “the adult world” as a rich man (one who truly owns his life, rather than renting it from a boss) would perceive it– a place where there’s actual work to be done, and the intermittency of real work’s rewards is tolerable because of one’s financial status.

Here are two direct quotes to show what Graham’s talking about:

In almost any group of people you’ll find hierarchy. When groups of adults form in the real world, it’s generally for some common purpose, and the leaders end up being those who are best at it. The problem with most schools is, they have no purpose. But hierarchy there must be. And so the kids make one out of nothing.

We have a phrase to describe what happens when rankings have to be created without any meaningful criteria. We say that the situation degenerates into a popularity contest. And that’s exactly what happens in most American schools. Instead of depending on some real test, one’s rank depends mostly on one’s ability to increase one’s rank. It’s like the court of Louis XIV. There is no external opponent, so the kids become one another’s opponents.

When there is some real external test of skill, it isn’t painful to be at the bottom of the hierarchy. A rookie on a football team doesn’t resent the skill of the veteran; he hopes to be like him one day and is happy to have the chance to learn from him. The veteran may in turn feel a sense of noblesse oblige. And most importantly, their status depends on how well they do against opponents, not on whether they can push the other down.

Court hierarchies are another thing entirely. This type of society debases anyone who enters it. There is neither admiration at the bottom, nor noblesse oblige at the top. It’s kill or be killed.

This is the sort of society that gets created in American secondary schools. And it happens because these schools have no real purpose beyond keeping the kids all in one place for a certain number of hours each day. What I didn’t realize at the time, and in fact didn’t realize till very recently, is that the twin horrors of school life, the cruelty and the boredom, both have the same cause.

Paul Graham depicts the American suburban high school as being a society that turns to cruelty because, with the lack of high-impact, real-world work to be done, people create a vicious status hierarchy based entirely on rank’s ability and drive to perpetuate itself. He also establishes that similar meaningless hierarchies form in prisons and among idle upper classes (“ladies-who-lunch”) and that the pattern of positional cruelty is similar. Here is, I think, where he departs a bit from reality, taking fortunate personal experience to be far more representative of “the real world” than it actually is:

Why is the real world more hospitable to nerds? It might seem that the answer is simply that it’s populated by adults, who are too mature to pick on one another. But I don’t think this is true. Adults in prison certainly pick on one another. And so, apparently, do society wives; in some parts of Manhattan, life for women sounds like a continuation of high school, with all the same petty intrigues.

I think the important thing about the real world is not that it’s populated by adults, but that it’s very large, and the things you do have real effects. That’s what school, prison, and ladies-who-lunch all lack. The inhabitants of all those worlds are trapped in little bubbles where nothing they do can have more than a local effect. Naturally these societies degenerate into savagery. They have no function for their form to follow.

When the things you do have real effects, it’s no longer enough just to be pleasing. It starts to be important to get the right answers, and that’s where nerds show to advantage. Bill Gates will of course come to mind. Though notoriously lacking in social skills, he gets the right answers, at least as measured in revenue.

So, apparently, it gets better if you have the resources to pursue work that has meaning, rather than the subordinate people-pleasing nonsense associated with high school (and, as it were, most corporate jobs). That’s what it’s like if you’re rich enough to escape corporate hell for good. Getting the right answers, rather than pleasing the right people, becomes important. If you have a typical please-your-boss subordinate position, though… guess what? Paul Graham’s depiction of high school is exactly what you’ll face in the supposedly “adult” world. The boredom and cruelty don’t end. You just get older and sicker and less able to handle it, until you’re discarded by that world and it’s called “retirement”.

The hellish social arrangement that Graham describes is the result of labor finitism, imposed artificially by the testability needs of school (i.e. having everyone do the same work), the degraded economy of a prison (people intentionally separated from society, often because of psychological or moral defects), and the idleness of ladies-who-lunch (who live in comfort, but have no power). People group together, but the lack of real work means that there's a lot of squabbling for status. In a labor-finitist world, you have zero-sum internal competition and a social-status hierarchy that subverts any meritocracy that one might try to impose. High school students are "supposed" to care about grades and learning and doing good work, but most of them actually care more about in-group social status. That turns out to be great preparation for the corporate world, in which "performance" reviews reflect (and perpetuate) social status rather than having anything to do with the quality of a person's work. (People who do actual work don't get "reviewed" or, if they do, it's a rubber-stamp formality; everyone is too busy actually doing things.) Work is a world in which grades are assigned not by teachers but by whatever group of kids happens to be popular at the time.

What Paul Graham describes as “the adult world” is what life looks like from his fortunate position. I won’t use “privileged” here– Graham’s brilliant and clearly earned every bit of his success– but it’s not typical for most people. If you have the money to own your life instead of renting from a corporate master, then labor progressivism (i.e., what “adulthood” is supposed to be, a lifestyle based on providing value to others rather than subordinating to a parochial protection-offerer called a corporate manager) is what the world actually looks like. The big question for us in technology is: how do we make a progressivist/high-autonomy world available to more people?

Trust

The biggest problem for technologists is trust. Free-floating, high-return-seeking capital is abundant in the world, but the gatekeepers (venture capitalists who've used internal social protocols to form an almost certainly illegal collusive phalanx, despite nominally being in competition with each other) have made it scarce. Talent finds capital inaccessible; capital, meanwhile (and somewhat insultingly), complains with equal fervor about a "talent shortage" and a struggle to find and retain talent. Everyone's wrong. Capital isn't scarce, nor is talent. Both are abundant, and something else is keeping the two from meeting. The problem is a bilateral lack of trust.

Why do companies "acq-hire" such depressingly mediocre talent at a panic price of $3-6 million per software engineer? It's because a trusted employee is worth 10-100 times a typical one. So why don't these companies, instead of shelling out billions to acq-hire mediocrity, simply trust their own people more? Well, that's a deep sociological question that I won't be able to explore fully. The short answer is that the modern corporation's labor finitism (driven by closed allocation, or the right of workers only to work on projects with pre-existing executive sanction and middle-management protection) creates a society exactly like Graham's vision of high school, which means that nasty political intrigues form and petty hatreds build in the organization, to the point that outsiders are deemed a better bet for allocation to real work than internal people, the latter having been tainted by years of inmate life. Corporate society is so dismal, so mediocre, and so removed from getting actual work done that people who participate in it (as opposed to the fresh-faced rich kids whose parental connections bought them VC and tech press and favorable terms of "talent acquisition") are perceived, whether or not it's true, as too filthy to be trusted with real assignments.

It's not a pretty picture. Corporations hire people on the false pretense of mentorship and career development. "Yes, we'll pay you a pittance now, and you'll spend your first two years on the garbage work that no one wants to do; but we'll advance your career and make you eligible for much better jobs in the future." What they do is the opposite. They don't reward people who "suck it up" and do years of shit-work with better projects in the future, because a person who spent two years not learning anything is less eligible for quality work than when he came in the door; they hire people with better work experience for that. Also, it's not that they deliberately lie to hire people. It's that they just don't have much high-quality work to allocate, and few companies are courageous enough to try Valve-style open allocation. So what actually emerges is a society in which high-quality work goes either to political strongmen (i.e. extortionist thugs who intimidate others into supporting their own campaigns for high social status) or to outsiders who are usually either acq-hired in or installed in privileged positions by investor mandate (i.e. a venture capitalist uses your company to mint executive sinecures for his underachieving friends).

Victim-shaming

How does all of this evolve into an elaborate system of victim-shaming? Well, it works like this. People are evaluated, in the world of Work, based on their previous experiences. However, because of the corruption in project allocation, a person’s “work experience” is actually a time series of his political favor, not his level of accomplishment. What that means is that people who fall out of favor are judged to be underperformers and flushed away. There is no more brazen culture of victim-shaming than the private-sector social-climbing hell we call Work.

The rules are clear: if you get robbed, it's your fault. If your boss steals from you by abusing process, giving you undesirable work ill-matched to your talents, and ruining your career, it's because you (not he) are a subhuman piece of shit. You deserve whatever he does to you, and he has the right to do it (that's the perk of being a manager). He proved he was stronger by robbing you and getting away with it (as if you had a choice), and he therefore deserves everything he stole from you. You deserve nothing other than more humiliation. That's what resumes are for: to create a distributed, global social-status hierarchy based on a person's political-favor trajectory. That's why job titles and dates matter but accomplishments don't. It's not about what you achieved; it's about whether people saw you as threatening and strong (and gave you impressive titles) or as weak (and robbed you).

I hope the causes of ethical bankruptcy in Corporate America are visible by now. A world in which thieves win (they stole it, thus they earned it) while the victims are treated like subhuman garbage is one in which almost no one can afford to be ethical (although the most successful people invest considerable energy in appearing ethical). For example, some people lie on their resumes. I don't, but that's for cosmetic reasons. While my work experience isn't at the quality level that I deserved, it's high enough that I prefer the complexity-reduction (a cosmetic concern) of sticking to the truth over the gains associated with a status-inflating lie that might fail and lower my status more than telling the truth would. I, personally, don't lie; but I fully support those who do. They are being more ethical than the system they are deceiving. They express their honesty– in the form of contempt for an evil system– through deceiving that system's officers.

The corporate culture we call “Work” is one where people are kicked when they’re down. It has no morals or ethics and there’s no point in pretending otherwise– not at this point, at least. When people lie on their resumes in harmless ways (quack doctors are criminals and should be jailed; but there’s no good reason to feel anything negative toward someone who self-assigns the title “VP” when he was a mere Director) I support them. Defective structures must be burned down, and I support the barbarians at the gate. If those who’ve been robbed for generations respond by stealing back what was taken from them, I think that it’s the best thing that can happen from here.

Why an Atlas Shrugged smart people strike would never work.

I'm not a major fan of Ayn Rand, but one of the more compelling ideas coming out of her work is from Atlas Shrugged, which depicts a world in which the "people of the mind"– business leaders, artists, philosophers– go on strike. It's an attractive idea. What would happen if those of us in the "cognitive 1 percent" decided, as a bloc, to secede from the mediocrity of Corporate America? Would we finally get our due? Would we stop having to answer to idiots? Would the dumb-dumbs come crawling to us, begging that we return?

No. That would never happen. They have as much pride as we do.

It's an appealing concept, for sure. Individually, not one of us is essential to society– that's not a personal statement; no one person is that important. Any one of us could be cast into the flames with little cost to society. Yet we tend to feel that, as a group, we are critical. We're right. I am insignificant, but societies live or die based on what proportion of the few thousand people like me per generation get their ideas into implementation, and it's only after the fact that one knows which side of the critical percentage a society is on. Atlas could shrug. Society could be brought to its knees if the most intelligent people developed a tribal identity, acted as a political bloc, were still ignored, and chose to secede. Science and the arts would stagnate, the economy would fall into decline, and society would be unable to correct for its own morale problems. The culture would crater, innovation would die, and whatever society endured such a "strike" would quickly fall to third-class status on the world stage.

That doesn't mean we, the smart people who might threaten such a strike, would get whatever we want. Imagine trying to extort a masochist. "I'll beat you up unless you give me $100." "You mean I can not give you $100 and get beaten up? For free? I'll take that option; you're so kind."

I don’t mean to call society masochistic, because it isn’t so. Societies don’t make choices. People in them do, often with minimal or no concern with the upkeep of this edifice we call “civilization”. Now, the people at the top of ours (Corporatist America) are stupid, short-sighted, uncultured, and defective human beings. All of that is true. To assess them as weak because of this is inaccurate. I’m pretty sure that crocodiles don’t crack 25 on an IQ test, but I wouldn’t want to be in a physical fight with one. These people are ruthless and competitive and they’re very good at what they do– which is to acquire and hold position, even if it requires charming people (including people like us, much smarter than they are) to get it. They’d also rather reign in hell than serve in heaven. That’s why we’ll never be able to pull an Atlas Shrugged move against them. They care far more about their relative standing in society than its specific level of health. We’d be giving them exactly what they want: less competition to hold the high social status they currently have.

Also, I think that an Atlas Shrugged phenomenon is already happening in American society, with so little fanfare as to render it comically underwhelming. Smart people all over the country are underperforming, mostly not by choice, but because they are not getting opportunities to excel. Scientists spend an increasing amount of time applying for grants and lobbying their bosses for the autonomy that they had, implicitly, a generation ago. The quality of our arts has suffered substantially. Our political climate is disastrous and right-wing because a lot of intelligent people have just given up. Has the elite looked at the slow decline of the society and said, “Man, we really need to treat those smart people better, and hand our plum positions over to those who actually deserve them?” Nope. That has not happened; it would be absurd to think of it, as the current elite has too much pride. And if we scale that up from unintentional, situational underperformance to a full-fledged strike of the cognitive elite, we will be ignored for doing so. We won’t bring society to a calamitous break and get our due. We’ll see slow decay and the only people smart enough to make the connection between our strike and that degradation will be the strikers themselves. We already have a pervasively mediocre society and things still work– not well, but we haven’t seen catastrophic society-wide failures yet. It might get to that point, but it’ll be too late for the kind of action we might want.

In sum…

  • fantasy: the cognitive elite could go on “strike” and the existing elite (corporate upper class, tied together by social connections rather than anything related to excellence) would, after society fell to pieces, beg us to rejoin on our terms, inverting the power dynamic that currently exists between us and them.
  • reality: those parasitic fuckers don’t give a shit about the broad-based health of society. We’re not exactly a real competitive threat to them because they hold most of the power, but we do have some power and we’d just be making their lives easier if we withdrew from the world and gave that power up entirely.

As intellectuals, or at least as people who aspire to be such, we look at civil decline as tragic and painful. When we learn about expansive civilizations that fall into decadence and ruin, we tend to imagine it as a personal death that’s directly experienced, rather than a gradual historic change that few people notice in contrast to the day-to-day struggles of higher personal importance. So we often delude ourselves into thinking that “society” has its own will and makes “choices” according to its own interests, as opposed to the parochial interests of whatever idiots happen to be running it at the time. Thus, we believe that if “society” refuses to listen to our ideas and place us in appropriately high positions, we can withdraw as a bloc, render it ineffective, and impel it to “come crawling back” to us with better terms. We’re dead wrong in believing that this is a possibility. Yes, we can render it ineffective through underperformance (hell, it’s already arguably at that point, just based on the pervasive conformity and mediocrity that have declawed most of us) but this reorganization that we seek will never happen. We tend to overestimate the moral character– while underestimating the competitive capability (again: think crocodiles)– of our enemies. They are all about their own egos and they will gladly have society burn just to stay on top.

One concrete example of this is in software engineering, where the culture is mostly one of anti-intellectualism and mediocrity. Why is it this way? Given that an elite programmer is 10-100 times as effective as a mediocre code monkey, why do companies tailor their environments to the hiring en masse of unskilled "commodity" developers? Bad programmers are not cheap; they're hilariously expensive. So what's going on? The answer is that most managers don't care about the good of the company. It's their own egos they want to protect. A good programmer costs only 25 percent more than a mediocre one, but is 5 times as effective. Why not hire the good one, then? Because then the manager loses his real motivation for going to work: being the smartest guy in the room, and the unambiguous alpha male. Saving the company some money is not, to most managers, worth that price.

What we fail to realize, as the cognitive 1 percent, is that while society abstractly relies on us, the people running society think we’re huge pains in the ass and would be thrilled not to have to deal with us at all.

Do I believe that it’s time for the cognitive 1 percent to mobilize, and to take back our rightful control over society’s direction? Absolutely. In fact, I think it’s a moral responsibility, because the world is facing some problems (such as climate change) too complex for the existing elite to solve. The incapacity and mediocrity of our current corporate elite is literally an existential risk to humanity. We ought to assert ourselves, as a group, and start fixing the world. But the Atlas Shrugged model is the wrong way to go about that.

Fixing employment with consulting call options.

Here's a radical thought that I've had. There are a lot of individual cases of people auctioning off percentages of their income in exchange for immediate payments, which they use to invest in education or career-improving but costly life changes like geographical moves. Someone might trade 10% of her lifetime income in exchange for $200,000 to attend college. This has a "gimmicky" feel as it's set up now, and I'd be reluctant to do anything like it for the obvious reputational reasons (it seems desperate), but there's a gem there. There's a potential for true synergy, not only gambling or risk transfer. If a cash infusion leads a person to have better opportunities and a more successful career, then both sides win. There should be a way for individual people to engage in this sort of payment-out-of-future-prosperity that companies can easily use (it's called finance). However, a percentage of income is too easy to scam. We need to index the security to the value of that person's time, and the best way to do that is to have it represent a call option on that person's time.

With the cash-for-percentage-of-income trade, the “Laffer curve” effect is a problem. There’s scam potential here. What if someone sells 10% of his lifetime work income for, say, $250,000, but actually finds ten buyers? Then he gets a $2.5 million infusion right away, which is enough money not to work. He also has zero incentive to work, so he won’t, and his counterparties get screwed because he has no work income. So this idea, on its own, isn’t going to go very far. The securities (shares in someone’s income) aren’t fungible, because the number of them that are outstanding has a major effect on their value.

Let's take a different approach altogether. This one doesn't involve a draw against someone's income. It's a call option on a person's future work time. I intend it mainly for consultants and freelancers, but as the realities of the new economy push us all toward being more individualistic and entrepreneurial, it could be extended to something that applies to everyone. It's not the gimmicky "X percent of future income" trade, which doesn't scale up to a real market (because once the trade stops being novel, we can't trust people not to sell incentive-affecting percentages of their income, and that problem naturally limits it). How does it work? Here's a template for what such an agreement would look like; a rough code sketch of the same terms follows the list.

  • Option entitles holder to T hours (typically 100; with blocks as small as 25 or as large as 2000) of seller’s time (on work that is legal) to be performed between dates S and E at a strike price of $K per hour. For a college student, typical values would be S = date of graduation and E = five years after graduation. For someone out of school, S might be set to the time of signing, and E to five years from that date. 
  • Seller must publish how many such options have been sold so buyers can properly evaluate the load (e.g. no one is allowed to sell 50,000 hours of time in the next 5 years, because that much work cannot be performed.) I would, in general, agree on a 2000-hour-per-year limit. Outstanding load is publicly available information and loads exceeding 1000 hours per year should be disclosed to future employers.
  • If the option is not exercised, then no work is performed (but the writer still retains the value earned by selling it). If it is exercised, the seller receives an additional $K per hour. The option is exercised as a block (either all T hours or none), and the buyer is responsible for travel and working costs.
  • These options are transferable on the market. This is essential. Few people can assess their specific needs for consulting work, but it's much easier to determine that a bright college student's time will be worth $100/hour to someone in five years.
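To make the template concrete, here's a minimal sketch of what one of these contracts might look like as a data structure. It's purely illustrative: the class and field names (TimeCallOption, strike_per_hour, and so on) are hypothetical, as are the dates in the example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TimeCallOption:
    """Illustrative sketch of one call option on a person's future work time."""
    seller: str             # the person writing (selling) the option
    hours: int              # T: size of the block, e.g. 25, 100, or 2000 hours
    strike_per_hour: float  # K: dollars per hour paid if the option is exercised
    window_start: date      # S: earliest date the work can be demanded
    window_end: date        # E: latest date the work can be demanded

    def exercise_cost(self) -> float:
        # Cash the holder pays, on top of the original premium, when exercising.
        return self.hours * self.strike_per_hour

# Hypothetical example: a 100-hour block at an $80/hour strike,
# exercisable between graduation and five years later.
block = TimeCallOption("zach", 100, 80.0, date(2017, 6, 1), date(2022, 6, 1))
print(block.exercise_cost())  # 8000.0
```

An exchange for these contracts would also need to track each seller's total outstanding hours, so that the disclosure rule above (no more than 2000 hours per year, with loads over 1000 hours per year disclosed) could actually be enforced.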

One thing I haven't figured out yet is the specific scheduling policy, beyond an "act in good faith" principle. If two option-holders exercise at the same time, who gets priority? How much commitment must the consultant deliver when exercise occurs (40 hours per week, making full-time employment impossible; or 10 as an upper limit, with the work then furnished over more calendar time)? Obviously, this needs to be something that the option-writer can control; buyers simply need to know what the terms are. The other issue is the ethics factor, which doesn't apply to most of technology but would be an issue for a small class of companies. Most people would have no problem working for a meat distributor, but we'd want an escape hatch that prevents a vegan's time from being sold to one, for example. There has to be some right to refuse work, but only based on a genuine ethical disagreement, not because a person has suddenly decided her time is worth 10x the strike price (which will almost always be lower than the predicted value of her time). The latter would defeat the point of the whole arrangement.

In spite of those problems, I think this idea can work. Why? Well, the truth is that this sort of call-option arrangement is already in place, although with an inefficient and unfair structure that leaves both sides unhappy. It’s employment.

How much is an employee's time actually worth to the operation? Dirty secret: no one really knows. There are so many variables in each individual, each company, and each project that it's really hard to tell. The market is opaque and extremely inefficient. For example, I'd guess that a programmer at my level (1.7-1.9) is worth about $1000/hour as a short-term (< 20 hours) "take a look and share ideas" consultant, $250/hour as a freelance programmer, and perhaps $750,000 per year in the context of best-case full-time employment (wherein the package includes not only 2000 hours of work, but also responsibility and commitment), but well under the market salary ($100-250k, depending on location and industry) in a worst-case employment context. Almost no employer can predict where on the spectrum between those best-case and worst-case levels of value delivery an employee will land.

Employers know that, for sociological reasons, a full-time employee's observed value delivery is going to be closer to the worst-case than the best-case employment potential. If you have interesting problems and a supportive environment, then a 1.5-level programmer is easily worth $300,000 per year, and a 1.8+ is worth almost a million. Most companies, though, can't guarantee those conditions. Hostile managers and co-workers, or inappropriate projects, or just plain bad fit can easily shave an order of magnitude off of someone's potential value. In fact, since guaranteeing those conditions involves interacting with people and controlling how they treat each other, it's seen as boundlessly expensive. If a manager has a long-standing reputation for "delivering" but is a hard-core asshole, is it worth it to unlock the $5 million per year released when he's forced to treat his reports better, given that there is a chance of upsetting and losing him (and the "delivery" he brings, which he's spent years making as opaque as possible)? The answer is probably yes, but the reason he's a manager is that he's convinced high-level people not to take that risk. That's how the guy got that job in the first place.

So what is employment, then? When people join a company, they're selling their own personal financial risk. That stuff is toxic; no one wants it, so typically people offload it to the first buyer (employer) that comes along, until they're comfortable enough to be selective (which, for most, doesn't happen until middle age). When it comes to personal financial risk, corporations have the magic power to dissolve dogshit. They know it, and they demand favorable terms from an expected-value perspective. The employee would rather have a reliable mediocre income than a more volatile payment structure closer (in the long run) to her actual market value. So the company offers a salary somewhere around the 10th-percentile level of that person's long-term value delivery. If the person works out well, it's mutually beneficial. She enjoys her work, and renders to the company several times her salary in value. Since she's happy, and since good work environments are goddamn rare and she's not going to roll the dice and move to another (probably bad, since most are) corporate culture, a small annual raise and a bonus are enough to keep her where she is. What if she doesn't work out? Well, she's fired. Ultimately, then, corporate employment is a call option on the employee's long-term ability to render value. The problem? The employee can opt out at any time. The option is contingent not merely on personal happiness, but on fulfillment. I'll get back to that.

Why is my call-option structure better? There are a couple of reasons. Obviously, everyone should have the fundamental right to opt out of work they find objectionable. What I do want to discourage (because it would ruin the option market) is the person who refuses to work at a $75 strike because she becomes "a rockstar" and is now worth $1000/hour. That's not fair to the option-holder; it's not ethical. However, I suspect these opt-outs would be a lot rarer than job-hopping is. Why? First, everyone knows that job-hopping is a necessity in the modern economy. Almost no one gets respect, fair pay, or interesting work without creating an implicit bidding war between employers and prospective future opportunities. Sure, some manageosaurs who mistake their companies for nursing homes still enforce the stigma against job applicants with "too many jobs", but people who weren't born before the Fillmore administration have generally agreed that job hopping for economic reasons is an ethically OK thing to do. Two thousand hours of work per year is a gigantic commitment, exclusive of other opportunities, and almost no one would call it a career-long ethical obligation. The ethical framework (no job hopping, ever!) that enforces the call-option value (to the employer) of employment is decades out of date. It never made sense, and now it's laughably obsolete. I would, however, say that a person who writes a call option on 100 hours of future work has an ethical responsibility to attend to it in good faith.

An equally important thought is that consulting is a generally superior arrangement to office-holding employment, except for its inability to deliver reliable income (which a robust options market could fix). Why? Well, people quit these monolithic 2000-hour-per-year office jobs all the time (often not by actually changing jobs, but by underperforming or even acting out until they're fired, which takes a long time) because they don't feel fulfilled. That's different from being happy. A person can be happy (in the moment) doing 100 hours of boring work if he's getting $20,000 for it. It's not the actual labor in "grunt work" that makes it intolerable for most people, but the social message. That's why true consultants (not full-time contractors called such) are less likely to underperform or silently sabotage an effort when "assigned" grunt work; employees expect their careers to be nurtured in exchange for their poor conditions, while consultants get better conditions but harbor no such expectation.

On the psychology of work: I know people who can't clean their own houses, not because the work is intolerable (it's just mundane) but because they can't stand the way they feel about themselves when doing such chores. However, a sufficient hourly rate will override that social message for almost anyone. How many people wouldn't clean someone's house, 100 hours per year, for 10 times their hourly wage? The cleaner won't be fulfilled by such work at any price, but that's different. It's not hard to find someone who will be happy to perform work that most people find unpleasant; consulting arrangements allow a price to be found. With full-time, position-holding employment, though, the all-or-nothing question of fulfillment is much harder to resolve. People will clean, if paid to do it, but no one wants to be a cleaner forever.

The nice thing about consulting is that the middle ground between fulfillment and misery exists. You can go and do work for someone without having to be that person's subordinate, which means that work that is neither miserable nor fulfilling (i.e. almost all of it) can be performed without a severe hedonic penalty (i.e. you don't hate yourself for doing it). Because of modularity and the potential for multiple clients, you can refuse an undesirable project without threatening your reputation or unrelated income streams– something that doesn't apply in regular employment, where refusing to paint that bike-shed that hideous green-brown color will have you judged as a uniform failure by your manager, even if you're stellar on every other project. A consultant is a mercenary who works for pay, and only identifies with the work if he chooses to. He sells some of his time, but not his identity. An employee, on the other hand, is forced into a monolithic, immodular 2000-hour-per-year commitment and into identification with the work, if only because the obligation is such a massive block (yes, the image of intestinal exertion is intentional) that it dominates the person's life, forcing identification either in submission (Stockholm Syndrome, corporate paternalism, and the long-term seething anger of dashed expectations in those for whom management doesn't take the promised long-term interest in their careers) or in rebellion (yours truly).

So let me tie this all together rather than continuing what threatens to become a divergent rant on employment and alienation. What an employee principally sells is a call option written to her employer. If she matches well with the employer's needs and its people, and if the employer continues to meet her requirements for fulfillment at work (which change more radically than the matter of what someone will be merely happy to do at a fair rate; the "good enough to be happy" set of work becomes broader with age, while fulfillment requirements go the other way and get steeper), and if the salary paid to her is kept within an acceptable margin (usually 20 to 40%) of her market value, she'll deliver labor worth several times the strike price (her agreed-upon salary, plus marginal annual wage increases). Since there are a lot of ifs involved, the salary at which a company can justify employing her is a small fraction of her potential to render value: a mediocre salary that forces her into long-term wage-earning employment, when the value of her work at maximum potential would justify retirement after five to six years. That's not unfair; in fact, it's extremely fair, but it's an artifact of opacity and low information quality.

Why is it like this? The truth is that the employer doesn’t participate in her long-tail upside, as it would with a genuine call option. In the worst cases, it does not exercise the option and stops employing her, but it pays the transactional costs (warning time, severance, lawsuit risk, morale issues) associated with ending an employment relationship. In the mediocre cases (the middling 80%), it collects some multiplier on her salary: the call option is exercised, and the company wins enough to generate a modest but uninspiring profit. In the very good cases, she performs so well that it’s impossible to keep this from becoming macroscopically visible and popping her market value. Since it’s not a real call option (she has no obligation at all to continue furnishing work), there is no way for the company to collect. An actual call option on some slice of her time would be superior, from the corporate perspective, because it insures the company against the risk that her overperformance leads to total departure (i.e. finding another job).

How would we value such a call option? Let’s work with five model cases. One is Zach, an 18-year-old recently admitted to Stanford intending to major in computer science, with the obvious ability to complete such a course. He needs $200,000 to go to school. Let’s say that he puts the start date of the option at his rising-sophomore summer (internship) and the end date at 5 years past graduation. What’s a fair strike price? I would say that the strike price should be, in general, somewhere around 1/1500 of the person’s expected annual salary (under normal corporate employment) at the end of the exercise window. For Zach, that might be $80 per hour. The actual productive value of his time, at that point? (We can’t use a “stock price” for a Black-Scholes model, because the value of the underlying is affected by conditions including the cash infusion attendant to the sale; that’s why it’s synergistic.) I’d guess that it’s around $120, with a (multiplicative) standard deviation of 50%, which over 9 years equates to an annualized volatility of 16.7%. Using a risk-free rate of 2%, that gives the call option a Black-Scholes value of about $56. This means Zach needs to sell about 3570 hours worth of options to finance going to college. Assuming he can commit no more than 0.3 of a (1920-hour) work year during each of his four years of college, that’s 576 hours per year– with the remainder of the block coming after graduation– and these aren’t hours of free work, but of commitment to work at a potentially below-market “strike” price of $80 per hour. I think that’s a damn good deal for Zach, especially in comparison to student debt.
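
To make the arithmetic checkable, here is a minimal sketch in Python of the Black-Scholes valuation used above. The function name (call_value) and its exact interface are my own, not part of any proposal; the 2% risk-free rate is the one assumed in Zach’s example.

from math import exp, log, sqrt
from statistics import NormalDist

def call_value(value, strike, vol, years, rate=0.02):
    # Black-Scholes value of a call on one hour of future work.
    #   value  -- expected market value of that hour at the end of the window
    #   strike -- the agreed-upon hourly rate if the option is exercised
    #   vol    -- annualized volatility of that market value
    #   years  -- length of the exercise window
    #   rate   -- risk-free rate (2% in the examples here)
    d1 = (log(value / strike) + (rate + vol ** 2 / 2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    N = NormalDist().cdf
    return value * N(d1) - strike * exp(-rate * years) * N(d2)

zach = call_value(120, 80, 0.167, 9)             # roughly $56 per option
hours_to_sell = 200_000 / zach                   # roughly 3570-3590 hours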

 

Alice is a 30-year-old programmer. She lives in Iowa City and has maxed out at a salary of $90,000 per year doing very senior-level work. The only way to move up is into management, which doesn’t appeal to her. She suspects that she could do a lot better in New York or San Francisco, but she can’t get jobs there because she doesn’t know anyone and resume-walls are broken– besides, how many VC-funded startups will hire a 30-year-old female making $90,000?– and consulting (until this options market is built) is even more word-of-mouth-driven and broken than regular employment. She knows that she’s good. She’d like to sell 7500 hours of work to the market over the next five years. Assume the option sale is enough to kick-start her career; then, her market value after five years is $250 per hour, but she sets her strike at $90. Since she’s older and her “volatility” (uncertainty in market value) is lower, let’s put her at 13% rather than Zach’s 16.7%. The fair value of her call options is $168 per hour, so she’s able to raise $1.26 million immediately: more than enough to finance her move to a new city.

Barbara is a 43-year-old stay-at-home mother whose youngest child (of five) has reached six years of age. She’s no longer needed around the house full-time, but there’s enough complexity in her life that full-time employment isn’t very tenable. However, she’s been intellectually active, designing websites for various local charities and organizations at a cut rate. She’s learned Python, taken a few courses on Coursera, and excelled. She wants to work on some hard programming problems, but no one will hire her because of her age and lack of “professional” experience. She decides to look for consulting work. She’s still green as a programmer, but could justify $100 per hour with access to the full market. She’s committing 1000 hours over one year, and she decides that $30/hour is the minimum hourly rate that would motivate her, so she offers that as the strike. With volatility at 15% (although that’s almost irrelevant, given the low strike) she raises $71 on each option, and gets $71,000 immediately, with 1000 hours of work practically “locked in” due to the low strike price (at which anyone would retain her).

Cedar City High is a top suburban public high school in eastern Massachusetts. They’d like to have an elective course on technology entrepreneurship, and student demand is sufficient to justify two periods per day. Teaching time, including grading and preparation, will be 16 hours per week, times 40 weeks per year, for 640 hours. That’s not enough to justify a full-time teaching position, and it’d preferably be taught by someone with experience in the field. Dave is coming off yet another startup, and has had some successes and failures, but right now he’s decided that he wants to do something useful. He’s sick of this VC-funded, social-media nonsense. He’s not looking to get rich, but he wants to deliver some value to the community, and get paid enough for it to survive. He sets a minimum strike at $70 per hour, and he’s looking for about that 640 hours of work. Based on their assessment, Cedar City agrees to pay $15 for the options and exercise them, meaning they pay $85 per hour (or $54,400 per year, less than the cost of a full-time teacher) for the work.

Emily’s a 27-year-old investment banker who has decided that she hates the hours demanded by the industry and wants out. Her last performance review was mediocre, because the monotony of the work and the hours are starting to drain her. With her knowledge of finance and technology, she knows that she’ll be killing it in the future– if she can get out of her current career trap. However, five years of 80-hour work weeks have left her stressed-out and without a network. She’ll need a six-month break to travel, but FiDi rent (she can’t live elsewhere, given her demanding work schedule) has bled her dry and she has no savings. She realizes that the long-term five-years-out hourly value of her work– if she can get out of where she is now– is $300 per hour at median, with an annualized volatility of about 30% (she is stressed out). Unsure about her long-term career path, she offers a mere 500 hours (100 per year) with a five-year window. She sells the options at a $200/hour strike. The Black-Scholes value of them is $146 per hour, or $73,000 for the block. That gives her more than enough to finance her six months of travel, regain her normal emotional state, and find her next job.
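
For what it’s worth, plugging the other profiles into the same hypothetical call_value function from the sketch above gives numbers close to the ones quoted (exact figures depend on rounding and on the rate assumption):

alice = call_value(250, 90, 0.13, 5)             # roughly $168/hour; 7500 hours ~ $1.26 million
barbara = call_value(100, 30, 0.15, 1)           # roughly $71/hour; 1000 hours ~ $71,000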

So this is a good idea. That’s clear. What, pray tell, are we waiting for? As a generation, we need to build this damn thing.

Constructing Computation 1: Symbols, nonsymbols, and equality.

I want to do something different from my normal writing. I want to construct programming. What do I mean? I want to start from some basic principles and build up to the familiar. I won’t be proving anything, and I’m not going to claim that what I’m building is necessarily the “right” way to do things– this is just for fun– but I want to explain how programming feels when one strips away the complexities and gets to the basics. This is a model of programming designed to work for anyone interested in learning the fundamentals. This will either be very interesting or loopy and divergent, and I will be editorially incompetent when it comes to determining which is the case.

To start, let’s create an object language (a world) where we have the following:

  • All objects (that is, all things in the world) are nonsymbols or symbols.

That’s easy enough. It’s tautological: a thing is a symbol, or it’s not one. However, the distinction is important to us. Now, some computational models have only one category of “thing” (e.g. lambda calculus, in which everything is a function) and I’m starting with two. Why? With a symbol, we can “inspect” it and immediately know what it is. Nonsymbols are more like “black boxes”; you can observe their behavior, but you can’t know “everything” about an arbitrary nonsymbol.

Let’s start with some of the things we can do with symbols.

  • There is a function succ that takes a symbol x and maps it to a different symbol x’ (shorthand for succ[x]). There are no duplicates in the infinite set {x, x’, x’’, …}. We call that set Gen[x], and call x and y incomparable iff x ∉ Gen[y] and y ∉ Gen[x]. We call a symbol x basic if there is no y such that y’ = x. Applied to a nonsymbol, succ returns a special symbol, _error (which is basic, so there is no confusion between this return and a successful call to succ).
  • There is a symbol called 0 and, for any “English language word” (nonempty string of letters and underscores) “foo” there is a corresponding symbol _foo. These are all mutually incomparable and basic. The most useful of these will be _error.
    • We call 0′, 1; 0”, 2; and so on, so we have the natural numbers as a subset of our symbol language.
  • There is a meta-level function called eq that takes two symbols x and y and returns _true if they are identical and _false if they are not. It returns _error if either argument is a nonsymbol.

What makes symbols useful is the eq function, because it means that we can have total knowledge about what a symbol is. We know that 0 is 0 and is not 1. We know that _true’’’ (call that _true3) is not _error or 5. We have a countable number of symbols, and assume that it’s one computation step to call eq on any two symbols (and get _true or _false). We are also going to allow another function to exist:

  • There is a function called comparable that takes two symbols x and y and returns _true if they are comparable, _false if they are not, and _error if either argument is a nonsymbol. This is assumed, likewise, to require only one computation step. The main purpose of this is to give us a natural? predicate, which we’ll use in code (later) to identify natural numbers. We also allow ourselves pred (which returns x when given x’, and _error when given a basic symbol, like 0) and basic (which maps each symbol to the single basic symbol in the equivalence class defined by the comparability predicate) and consider those, as well, to be one computation step. All of this machinery is sketched in code just below.
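
Here is one possible Python sketch of that symbol machinery. Representing a symbol as a pair (basic root, number of succ applications) is my own modeling choice, not part of the construction; it just makes eq, comparable, pred, and basic one-step operations, as assumed above.

# One possible model: a symbol is a pair (basic root, number of succ applications).
# Anything that isn't such a pair counts as a nonsymbol in this sketch.
ERROR = ("_error", 0)
TRUE, FALSE = ("_true", 0), ("_false", 0)

def is_symbol(x):
    return isinstance(x, tuple) and len(x) == 2 and isinstance(x[0], str) and isinstance(x[1], int)

def succ(x):
    return (x[0], x[1] + 1) if is_symbol(x) else ERROR

def pred(x):
    if not is_symbol(x) or x[1] == 0:
        return ERROR                      # basic symbols (like 0) have no predecessor
    return (x[0], x[1] - 1)

def basic(x):
    return (x[0], 0) if is_symbol(x) else ERROR

def eq(x, y):
    if not (is_symbol(x) and is_symbol(y)):
        return ERROR
    return TRUE if x == y else FALSE

def comparable(x, y):
    # Two symbols are comparable exactly when they share a basic root.
    if not (is_symbol(x) and is_symbol(y)):
        return ERROR
    return TRUE if x[0] == y[0] else FALSE

def natural_p(x):                         # the natural? predicate
    return comparable(x, ("0", 0))

ZERO, ONE, TWO = ("0", 0), ("0", 1), ("0", 2)    # 0, succ[0], succ[succ[0]]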

Symbols are “data atoms”. We have something nice here. In a typical object-oriented language, a “thing” can have all sorts of weird properties– it might have changing state or unusual access behaviors, and it might have interacting local functions called methods. The space of things that can be objects is also extensible by the user. An object can be a string, a file handle, or an abstract object used solely for perfect identity (i.e. it compares false to everything else, since it holds an address on the machine where it lives that nothing else can hold). There’s a lot less to symbols. The set of them that exists is limited (countably infinite) and pre-defined. You can’t do much with them. Strings, negative numbers, stateful entities, and other such things live at a higher level. We’ll get to those; they’re not symbols in this language.

Nonsymbols are not as well-behaved with regard to eq, the universal equality function. Given nonsymbols, it returns _error, giving us no information. Why? Because nonsymbols cannot be compared for equality in the general case. A nonsymbol can’t be observed for its “true” value directly. So let’s explain what we can do with nonsymbols:

  • There is a function called apply that takes two arguments. If its first argument is a symbol, it returns _error. However, the first argument will almost always be a nonsymbol. The second argument may be a symbol or nonsymbol, and likewise for the return. In this light, we can view nonsymbols as “like functions”; we liken a nonsymbol ns to the function mapping x to apply(nsx). The purpose of apply, then, is to give us the interpretation of nonsymbols.

We write a nonsymbol’s notation based on its behavior under apply, so one nonsymbol is {_ => _error}, which returns _error for any input. That’s not especially interesting. Another one (which I’ll call “CAT”) is {0 => 67, 1 => 65, 2 => 84, _ => _error}. What I’m writing here in the {} notation is in our natural language, not code. However, I want to make something clear. About nonsymbols that we create and “own”, we can know the full behavior. We know that {_ => 0} is constantly zero. We just can’t assume such “niceness” among arbitrary nonsymbols. I’ll also note that for every nonsymbol we can describe, there are infinitely many that we can’t describe.
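
Continuing the sketch from above (and reusing its ERROR, ZERO, ONE, and TWO), a nonsymbol can be modeled as an ordinary Python function– a black box we can only apply:

def apply(f, x):
    # Applying a symbol (or anything non-callable) is a garbage call: _error.
    return f(x) if callable(f) else ERROR

always_error = lambda _: ERROR            # the nonsymbol {_ => _error}

def cat(x):                               # "CAT": {0 => 67, 1 => 65, 2 => 84, _ => _error}
    if not is_symbol(x):
        return ERROR
    return {ZERO: ("0", 67), ONE: ("0", 65), TWO: ("0", 84)}.get(x, ERROR)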

Let’s look at a more pathological nonsymbol A:

A = {0 => A, _ => _error}

It’s self-referential. Is that allowed? There’s no reason it can’t be. Under apply with argument 0, it returns itself. This does mean that a naive “print nonsymbol” utility on A might fail, printing:

{0 => {0 => {0 => {0 => {0 => {0 => ...[infinite loop]...}}}}}}

There are also less-pathological nonsymbols with interesting behavior that can’t be written explicitly as this-to-that correspondences. For example, the succ function is realized by the nonsymbol:

{0 => 1, 1 => 2, 2 => 3, …; _error => _error‘, … ; _true => _true‘; …; $nsym => _error}

and eq by:

{0 => {0 => _true, $sym => _false, _ => _error}, 1 => {1 => _true, $sym => _false, _ => _error}, …, $nsym => {_ => _error}}

where $sym matches any symbol and $nsym matches any nonsymbol. 

However, there are nonsymbols without such easy-to-write patterns. For example, here’s one:

.{ [Symbol, or nonsymbol N such that eq[_true, apply[natural?, apply[N, _size]]] does not return _true] => _error,
.  [Nonsymbol N such that eq[_true, apply[natural?, apply[N, _size]]] returns _true, where k = apply[N, _size]] =>
.    {$sym => _error, $nsym =>
.      {_size => k, 0 => apply[$nsym, apply[N, 0]], ..., k-1 => apply[$nsym, apply[N, k-1]]}}}

That’s an extremely important nonsymbol. It’s called “VectorMap” and it works like this:

A := {_size => 3, 0 => 5, 1 => 7, 2 => 2, _ => _error}

F := {0 => 1, 1 => 2, ...} # that's succ (but VectorMap can take any nsym)

apply[apply[VectorMap, A], F] => {_size => 3, 0 => 6, 1 => 8, 2 => 3, _ => _error}
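
Here is what VectorMap looks like in the running Python sketch; vector_map and the helper names are mine, and a “vector” is just a nonsymbol answering _size and 0 … k-1:

SIZE = ("_size", 0)

def vector_map(vec):
    # If vec isn't a nonsymbol with a natural-number _size, VectorMap yields _error.
    if not callable(vec) or natural_p(apply(vec, SIZE)) != TRUE:
        return ERROR
    k = apply(vec, SIZE)[1]
    def with_fn(fn):
        if not callable(fn):
            return ERROR                          # {$sym => _error, ...}
        def mapped(key):                          # the image vector, built lazily
            if key == SIZE:
                return ("0", k)
            if is_symbol(key) and key[0] == "0" and key[1] < k:
                return apply(fn, apply(vec, key))
            return ERROR
        return mapped
    return with_fn

A = lambda key: {SIZE: ("0", 3), ZERO: ("0", 5), ONE: ("0", 7), TWO: ("0", 2)}.get(key, ERROR)
mapped = apply(apply(vector_map, A), succ)        # behaves like {_size => 3, 0 => 6, 1 => 8, 2 => 3}
apply(mapped, ONE)                                # ("0", 8)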

Given a nonsymbol understood to follow a certain set of rules (that _size => some natural number k, and that only the symbols 0, 1, …, k – 1 are supported), it produces a new vector that is the image of the old one. What if those rules don’t apply? Then there are fields we end up not copying. It is just not possible to ask questions about an arbitrary nonsymbol’s behavior over all inputs. If we created nonsymbol ns, we can verify that it’s compliant with invariants we created for it; however, for an arbitrary one, we cannot answer questions that require us to query across all inputs. In other words, we’re not allowed to ask questions like:

  • Does nonsymbol ns return y for some argument x
  • Do nonsymbols ns1 and ns2 differ in returns for some argument x? (This is why nonsymbol equality comparisons can’t be made.)
  • Is nonsymbol ns constant (that is, is ns(x) identical to ns(y) for all x and y)?
  • Is there some finite number k for which ns^k(0)– that is, ns applied to 0, k times– is a symbol?

To have an intuition for the theoretical reasons why such operations can’t be done on nonsymbols, consider (and just take this at face value, for now) that some nonsymbols N exist of the form {[n is a natural number corresponding, in Godel encoding, to a proof of an undecidable statement] => 1, _ => 0}. In that case, N’s constancy is undecidable. That should explain the first three impossibilities above; the fourth is an expression of the Halting Problem, which Turing proved undecidable.

The problem with all such questions is that we don’t have an infinite amount of time, and those require an infinite amount of search. Symbols alone are countably infinite, and nonsymbols exist in a class so large that it cannot be called a set. (Nonsymbols, unlike mathematical sets, can contain themselves.) We are somewhat lucky in that the set of nonsymbols that we can describe in any formal language is still countable, reducing how much we have to worry about in the real world; still, these questions remain unanswerable (Godel, Church, Turing) even over that (countable) set of nonsymbols we can build.

Because nonsymbols are unruly in the general case, we find ourselves wanting to define contexts (which will later be formalized better as types; contexts here are informal English-language entities we use to understand certain kinds of nonsymbols) which are principled processes of calling apply on nonsymbols (making observations) to recover all information considered relevant. For example, above we used the Vector context:

Observe _size. If it's not a natural number, then it's not part of the Vector context. If it is some natural number k, then observe 0, 1, ..., k - 1.

This gives us, additionally, an equality definition for Vector. If all observations are identical according to this process, then the Vectors are considered equal. This means that these:

{_size => 1, 0 => 18} (from now on, assume _ => _error unless otherwise specified.)

and

{_size => 1, 0 => 18, _metadata => _blow_up_world}

are equal in the Vector context, even though the latter has gnarly _metadata. In some other context, they might not be equal.

For a nonsymbol to be “not part of the Vector context” means that one can’t interpret it as a Vector. Nothing prevents us from trying to do so computationally, because we’ve created a language in which even garbage function calls, such as apply with its first argument a symbol, return something– specifically, _error. For example, using vector equality semantics on {0 => 1, 1 => _empty} and {0 => 2, 1 => _empty} leaves us with nonsensical instructions, because observing _size yields _error, and “0, …, _error – 1” doesn’t make any sense. Computationally, a vector-equality function would be expected to return _error in such a case. If the machinery responsible for looping over “0, …, k-1” weren’t guarded against erroneous values, one can imagine a naive implementation falling into an infinite loop, because succ^k(0) never reaches _error.
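
A guarded vector-equality check in the same sketch returns _error, rather than looping, when either argument fails to live in the Vector context; vector_eq and vector_size are hypothetical helper names:

def vector_size(v):
    # The natural number at _size, or None if v doesn't live in the Vector context.
    if not callable(v):
        return None
    k = apply(v, SIZE)
    return k[1] if natural_p(k) == TRUE else None

def vector_eq(v, w):
    kv, kw = vector_size(v), vector_size(w)
    if kv is None or kw is None:
        return ERROR                              # not interpretable as Vectors
    if kv != kw:
        return FALSE
    for i in range(kv):
        # eq can only settle symbol observations; anything else counts as unequal here.
        if eq(apply(v, ("0", i)), apply(w, ("0", i))) != TRUE:
            return FALSE
    return TRUE

v1 = lambda key: {SIZE: ("0", 1), ZERO: ("0", 18)}.get(key, ERROR)
v2 = lambda key: {SIZE: ("0", 1), ZERO: ("0", 18), ("_metadata", 0): ("_blow_up_world", 0)}.get(key, ERROR)
vector_eq(v1, v2)                                 # TRUE: equal in the Vector context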

Here’s another context called LL for now.

The symbol _empty is in LL, and no other symbols are. For a nonsymbol N, observe N(0). If N(1) = _empty, terminate. Otherwise, observe N(1) in the LL context.

So, here’s something in the LL context:

{0 => 67, 1 => {0 => 65, 1 => {0 => 84, 1 => _empty}}}

What we have, there, is a lazy linked list corresponding to the word “CAT” (in Ascii).
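
Reading something out of the LL context, in the running sketch, is just a walk down the list (with _error if an observation goes off the rails); cons and ll_to_list are names I’m introducing for illustration:

EMPTY = ("_empty", 0)

def cons(head, tail):
    # A list cell: 0 observes the head, 1 observes the tail, everything else is _error.
    return lambda key: head if key == ZERO else (tail if key == ONE else ERROR)

def ll_to_list(node):
    out = []
    while node != EMPTY:
        if not callable(node):
            return ERROR                          # not in the LL context
        out.append(apply(node, ZERO))             # observe the head...
        node = apply(node, ONE)                   # ...then walk to the tail
    return out

cat_ll = cons(("0", 67), cons(("0", 65), cons(("0", 84), EMPTY)))
"".join(chr(s[1]) for s in ll_to_list(cat_ll))    # 'CAT'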

In practice, we’ll also want to keep a preferred context living with our data, so we might see nonsymbols that look like this:

{_type => _vector, _size => 64, 0 => 83, …, 63 => 197}

That gives us an Object context for equality:

Two symbols x and y are equal if eq[x, y]. A nonsymbol and symbol are never equal. For two nonsymbols, observe _type for each. If they are both equal symbols, "look up" the corresponding context (e.g. Vector for _vector) and apply it. If they are differing symbols, then they are unequal. If this lookup fails (i.e. their _type is unknown) or either has a nonsymbol observation _type, then they don't live in the Object context and this comparison fails.

By “look up”, I’m assuming that we have access to a “golden source” of interpretations for each _type observation. In the context of distributed systems, that turns out to be a terrible assumption. But it will work for us for now. Even still, the above context is not always satisfactory. Sometimes, we want cross-type equality, e.g. we want

{_type => _list, 0 => 67, 1 => {_type => _list, 0 => 65, 1 => {_type => _list, 0 => 84, 1 => _empty}}}

and

{_type => _vector, _size => 3, 0 => 67, 1 => 65, 2 => 84}

to be treated as equal in our Object context, since they both represent the word “CAT” (in Ascii). Well? That gets gnarly quickly. Our Object context then admits on the order of N^2 different equality comparisons, because we need one for each pair of possible _type values. It gets even worse if we allow users to extend the definition of Object, making its list of supported types potentially infinite.

We treat nonsymbols as lazy, which means that their return values are provided on an as-needed basis. They don’t exist in a “completed” form. This is important because many of them contain an infinite amount of data. For example, we’ve already seen how basic linked lists behave:

{0 => 212, 1 => {0 => 867, 1 => {0 => 5309, 1 => _empty}}}

but then there are also lazy streams that have infinite depth, such as this one:

{0 => 2, 1 => {0 => 3, 1 => {0 => 5, 1 => {0 => 7, 1 => {0 => 11, 1 => {0 => 13, 1 => {…}}}}}}}

which contains the entire set of prime numbers.
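
That stream is usable precisely because each tail is only built when observed. Here is a lazily generated version of it in the sketch (primes_from is my own name; the primality test is deliberately naive):

from itertools import count

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def primes_from(n):
    # A lazy list cell: the tail is only constructed when key 1 is actually observed.
    p = next(k for k in count(n) if is_prime(k))
    return lambda key: ("0", p) if key == ZERO else (primes_from(p + 1) if key == ONE else ERROR)

stream = primes_from(2)
apply(stream, ZERO)                               # ("0", 2)
apply(apply(stream, ONE), ZERO)                   # ("0", 3)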

This laziness has a lot of benefits. It gives us one of the most powerful nonsymbols out there, which I’m calling Branch:

{[nonsymbol N s.t. N(0) is _true] => {T => {_ => T(0)}}, 
  [nonsymbol N s.t. N(0) is _false] => {_ => {F => F(0)}}}

A thunk is a nonsymbol whose return value is constant, e.g. Thunk[x] = {_ => x}. Since it’s irrelevant what argument is used to “pierce” it, we can use 0 by convention (or _unit if we wish to suggest, more strongly, a never-used input value). The design principle of the Branch nonsymbol is to take three thunks. The first (the condition) is observed at 0 (that is, evaluated) no matter what. If it returns _true, we evaluate the second thunk and never the third. If it’s _false, then we ignore the second and evaluate the third.

We use apply3[x, y, z, w] as shorthand for apply[apply[apply[x, y], z], w] and we note that:

apply3[Branch, Thunk[_true], Thunk[42], Thunk[_blow_up_world]]

never, in fact, blows up the world. It returns 42. This gives us conditional execution, and it will be used to realize one of the most important language primitives, which is the if statement.
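
Branch and Thunk in the running sketch look like this; because only the selected arm is ever pierced, the _blow_up_world thunk is never evaluated:

def thunk(x):
    return lambda _: x                            # Thunk[x] = {_ => x}

def branch(cond):
    # Evaluate the condition thunk once; return a curried chooser over the two arms.
    if apply(cond, ZERO) == TRUE:
        return lambda t: lambda f: apply(t, ZERO)     # pierce only the 'then' thunk
    return lambda t: lambda f: apply(f, ZERO)         # pierce only the 'else' thunk

def apply3(x, y, z, w):
    return apply(apply(apply(x, y), z), w)

apply3(branch, thunk(TRUE), thunk(("0", 42)), thunk(("_blow_up_world", 0)))   # ("0", 42)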

I’m going to talk about one other context, which will seem radically disconnected from how we usually think of the concept. I’m going to call it “Array” even though it seems nothing like an actual computational array (i.e. a block of contiguous memory):

No nonsymbols are in the Array context. Symbols not comparable to zero (i.e. not natural numbers) are not in the Array context. Otherwise, the candidate is some natural number; call it k. Observe the largest integer m such that m^2 <= k. If k - m^2 >= m, then it's not in the Array context. Otherwise, observe k - m^2.

In other words, only a subset of the natural numbers are in this context. The first observation gives us a maximal value, and the second gives us a value corresponding to the data it contains.

For example, 281474976842501 is an array. That number is equal to 2^48 + 2 * 2^16 + 3 * 2^8 + 5; so m = 2^24 and our second observation is 131845, which we interpret as the 3-byte array [2, 3, 5].
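
The arithmetic behind that example is easy to check with plain Python integers standing in for the natural-number symbols (array_observations is just an illustrative name):

from math import isqrt

def array_observations(k):
    # Returns (bound m, data) if the natural number k lives in the Array context, else None.
    m = isqrt(k)                                  # largest m with m*m <= k
    data = k - m * m
    return (m, data) if data < m else None

def to_array(m, data):
    assert 0 <= data < m
    return m * m + data

array_observations(281474976842501)               # (2**24, 131845) -- and 131845 is the bytes [2, 3, 5]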

We now have an adjective we can apply over contexts. A context is Arrayable if:

  1. there is some principled way of encoding all things that live within it into an Array,
  2. there is a fixed set of observations, after making which, for all things that live within it, we will have a finite upper bound on the integer value of the Array computed, and
  3. each semantically different value will map to a single Array, distinct from any other, but any values that are equal within that context will map to the same Array.

Vector is not Arrayable, but here are two examples of Arrayable contexts, BigInt and AsciiString:

BigInt: Observe _upper, then observe _data; both must be natural numbers. If _data is not less than _upper, the nonsymbol does not live in the BigInt context. Writing n(f) for the natural number observed at field f, this is Arrayable as n(_upper)^2 + n(_data).

AsciiString: Observe _size and, if it is a natural number, call it k. (Otherwise, the nonsymbol is not an AsciiString.) Observe 0, ..., k-1. If all observations are natural numbers from 0 to 127 inclusive, you have an AsciiString. Otherwise, the nonsymbol does not live in this context. If it does, this is Arrayable as: 128^0 * n(0) + 128^1 * n(1) + ... + 128^(k-1) * n(k - 1) + (128^k)^2.

Here is an AsciiString nonsymbol for “CAT”:

{_size => 3, 0 => 67, 1 => 65, 2 => 84}

Its corresponding Array (integer) is 2^42 + 84 * 2^14 + 65 * 2^7 + 67 = 4398047895747.
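
The AsciiString encoding can be checked the same way, again with plain integers standing in for the symbols:

def ascii_string_to_array(codes):
    # Encode a list of ASCII codes per the AsciiString rule above.
    k = len(codes)
    assert all(0 <= c <= 127 for c in codes)
    return sum(c * 128 ** i for i, c in enumerate(codes)) + (128 ** k) ** 2

ascii_string_to_array([67, 65, 84])               # 4398047895747, as computed above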

Whither code?

So far, we’ve dealt only with data. I haven’t gotten yet to code: what it is, and why we like it. Code is what we call a nonsymbol within some Arrayable context (called a language) that allows us to produce a symbol or nonsymbol, in a predictable and useful way, from it. For example, here’s a piece of code, expressed as a nonsymbol.

{_size => 15, 0 => 40, 1 => 102, 2 => 110, 3 => 32, 4 => 120, 5 => 32, 6 => 40, 7 => 115, 8 => 117, 9 => 99, 10 => 99, 11 => 32, 12 => 120, 13 => 41, 14 => 41}

It can better be expressed as an Ascii string:

"(fn x (succ x))"

Our language might give us the tools to translate this into the nonsymbol we want: {x => x’}.

We can’t trust nonsymbols very far. The space in which they live is too big. Let’s just talk about a simple (but infinitely supported) nonsymbol called Add:

{0 => {0 => 0, 1 => 1, 2 => 2, …}, 1 => {0 => 1, 1 => 2, 2 => 3, …}, 2 => {0 => 2, 1 => 3, 2 => 4, …}}

We can’t afford to realize (or “complete”) this nonsymbol in its entirety, but that doesn’t mean we can’t use it, because we’re only going to need a small number of its infinite cases in a real-world running program. We need some way of specifying what it is, without having to allocate infinite memory (which no machine has) for the table. We end up wanting to be able to write something like this (in pseudocode):

$Use name #add for (fn x -> (if (eq x 0) then (fn y -> y) else (fn y -> (succ (#add (pred x) y)))))

When we execute this, we see its operational semantics:

(#add 2 3) is ((#add 2) 3)
. => (((fn x -> (if (eq x 0) ...)) 2) 3)
. => ((if (eq 2 0) ...) 3)
. => ((if false ... (fn y -> (succ (#add (pred 2) y)))) 3)
. => ((fn y -> (succ (#add (pred 2) y))) 3)
. => (succ (#add (pred 2) 3))
. => (succ (#add 1 3))
. => ... => (succ (succ (#add 0 3)))
. => ... => (succ (succ ((fn y -> y) 3)))
. => ... => 5
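
For comparison, here is the same succ-based addition written directly in the running Python sketch (using the succ, pred, and eq defined earlier); it has exactly the same miserable performance profile:

def add(x):
    # (fn x -> (if (eq x 0) then (fn y -> y) else (fn y -> (succ (#add (pred x) y)))))
    if eq(x, ZERO) == TRUE:
        return lambda y: y
    return lambda y: succ(add(pred(x))(y))

add(TWO)(("0", 3))                                # ("0", 5)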

Of course, this version of addition really succs when it comes to performance. (It’s not even tail-recursive!) Why are we performing so much succage just to add two natural numbers? Well, this is a “least assumptions” model and it doesn’t assume, axiomatically, that one knows how to add. We derive that from succ (which we do assume). In reality, you’d almost certainly want to have a primitive called add or + that performs the addition in a more performant way. Invoking it would perform an apply on the appropriate nonsymbol, e.g.:

(+ 5 7) => apply2[{nonsymbol for addition}, 5, 7]

How does this work in the world of real computation? Well, two things are different. First of all, we actually have a lot of those primitives that we didn’t assume to exist. For example, we have addition and multiplication provided by the chip: not the entire nonsymbol, but a process that executes it over a large number of values (e.g. 64-bit integers). We don’t have to write, in general, arithmetic operations in terms of succ. Again, we could, but it’d perform extremely poorly.

On the other hand, there’s a false assumption above, which is that eq takes constant time (one “computation step”). On a finite set of symbols, it can be very fast but, in the real world, this doesn’t hold true over an infinite set. No physical object that we know of can differentiate an infinite number of states and communicate back to us, reliably, which state is observed. That’s why finite sets of symbols and Arrays become important.

What we really have in the state of a computer is a nonsymbol that has the capability to represent a finite (but very large!) set of symbols. We can think of it as a nonsymbol for which there is some natural number N such that it returns _error whenever we throw anything but an integer between 0 and N – 1 at it, and returns 0 or 1 when applied to such an integer. Since that state is Arrayable (if we know N) we can imagine it as a single integer or (more usefully) as an array of bits or bytes.

Summary

Again, my purpose here isn’t to do groundbreaking computer science, and I’m not doing any. None of these ideas are original to me; the only thing I’m trying to accomplish is to create a world that presents these concepts in an accessible but still semi-rigorous way. So here’s what we have so far.

  • We have a universe of symbols and nonsymbols. Symbols allow us to compare for equality (eq) but for a nonsymbol we can never assess its full identity; we can only apply it to various arguments to compute a “return” for each. There’s a succ function on symbols that produces a new one. 
  • We have countably infinitely many symbols and a special symbol called 0; the set {0, succ[0], succ[succ[0]], …} is the natural numbers.
  • Nonsymbols can take symbols or nonsymbols as arguments, and return symbols or nonsymbols. They can be thought of as akin to functions, but they actually operate over the whole class (nonsymbols are not a set).
  • We have a currently-informal notion of context that allows us to make observations (applications to arguments) of a nonsymbol in a principled way. Some nonsymbols don’t “live in” a given context, meaning that the observations are garbage, but many do. These contexts allow us to interpret certain classes of nonsymbols in a better light. For example, we can’t ever compare nonsymbols for equality in general, but we can compare them for equality as Vectors (i.e. according to observations made in the Vector context).
  • We have a special context called Array that represents a subset of natural numbers. In that context, we observe first an upper bound, and then a natural number data that is less than that.
  • A context is Arrayable if there is a principled way of converting any nonsymbol living in it into an Array, and back, with no context-relevant information lost. (Behavior of that nonsymbol that lives outside the context might, however, be lost.)
  • Languages are Arrayable contexts in which a “string-like” nonsymbol is converted into a symbol or nonsymbol “described” by it. In practice, we think of this as the conversion of a string (file contents) into a function (stateless program).

I don’t know where this is going, or whether it’s interesting or useful to anyone, but this is what we have now, and I hope I’ve made at least a few concepts clearer (or more attractive, or more interesting).

Why You Suck

This is a post about Why You Suck. Since this is the rhetorical “you” that refers to a least-assumptions unknown person, it’s also about me and Why I Suck. Or, perhaps I should say that it’s about why all of us tend toward Suck sometimes. What do I mean by Suck? I mean that we’re so terrified of failure and embarrassment that it pushes us to mediocrity and, at the extreme, entrenched anti-intellectualism.

Take fine arts, for one topic, because it’s one that draws out a lot of people’s insecurities. It’s actually quite hard to get a sophisticated understanding of what makes, say, opera good or bad. I don’t have it. I enjoy opera, but I don’t have the palate to have an informed opinion on the quality of an individual piece. I like it, or I don’t, but there’s nothing I have in terms of sophistication or exposure that gives me an elevated skill at critique. If I pretend to have a deep knowledge of opera, then I’ll sound like a fool. Now, you might be saying: so what? What’s wrong with having a mediocre exposure to opera? Why would anyone be insecure about that? It’s not a sign of a lack of talent; I just haven’t specialized in the appreciation of it. And it doesn’t bother me, because opera isn’t a defining part of who I am. However, there are a lot of people who realize how hard it is to become fluent in something, and therefore get discouraged prematurely. It bothers them, because they really want to get good. When it doesn’t happen quickly, a lot of people go the other way and say “it’s esoteric and not worth knowing”. I say, bullshit. You don’t know it, and I don’t know it, but that doesn’t make it “not worth knowing”.

Let’s talk about foreign languages, another place where this attitude emerges. The fear there isn’t that learning a new language is hard (without exposure, it is; with exposure, most people can do it, at any age). It’s about embarrassment. No one wants to look like an idiot by getting words wrong. People would rather use the language they know best. That’s reasonable, but some people take it a step further and decide that some topic in which they lack knowledge just isn’t important. I mean, how much does opera do for us in our daily lives? For fine arts, that’s just passive anti-intellectualism. When it comes to foreign languages and cultures, which have every bit as much validity as our own but are often rejected as “unimportant” by the insecure, it’s being an asshole.

We all end up doing this. We find something we’re not good at and the first thing we want to do is find a reason why it’s not important. That’s why intellectually insecure politicians cut funding for public universities; they hate those “ivory tower” academics who make them feel stupid. It takes a certain awareness to look at the world and say, “this place is so big that, for everything I learn, there will be a billion things worth knowing, that I never will, because there isn’t the time to get good at everything”.

So, many people go off in the opposite direction. They conclude that the things they’ll never get good at (often by choice) are just useless and retrench in their anti-intellectualism. This is especially severe in the software industry (don’t even get me started on anti-intellectualism) where, in many companies, taking an interest in doing things right as opposed to the empty-suit-friendly bastardization of “object-oriented programming” that has been in corporate practice for the past 20 years will often make you a weirdo, a pariah, one who cares too much.

By the way, d’ya want to know why so many of us software engineers have shitty jobs that make us unhappy? Well, we don’t have a strong professional identity. Doctors report to other doctors. Lawyers report to other lawyers (by law), unless they report directly to the corporate board. Engineers (actual engineers, not “software engineers”) report to engineers. We, on the other hand, report to professional managers who think what we do is “detail-oriented grunt work”. To add to the insult, they often think they could do our jobs, because they’re smarter than us (otherwise, we’d have their jobs). Why is this? Why are we, as software engineers, in such a degraded state? Perhaps it is because we, as a tribe, are anti-intellectual. If we don’t know what functional-reactive programming is, many of us are ready to conclude that it’s “weird” and impractical and “not worth knowing”. (Oh, and I’ve seen hard-core functional programmers take the same attitude toward C and low-level coding, and it’s equally ridiculous.) Don’t get me wrong; there are a large number of individual exceptions to that. I enjoy programming– and I don’t identify fully as a programmer; I’m only a 96ish-percentile programmer but I’m a fucking murderous problem-solver– and I care so much about keeping up my programming skills because I’m not anti-intellectual. And because it’s fucking cool. I once had a boss (a very smart guy, but clueless on technology) who said he refused to learn programming because he thought it’d kill his creativity. That’s that same anti-intellectualism on the opposing side. Perhaps it’s karma. Perhaps the anti-intellectualism that characterizes the average member of our tribe (defined loosely to include all professional programmers, the average of us being terrible not for a lack of talent, but for mediocre drive) makes us a perfect karmic match for that other anti-intellectual tribe: the executives and “big picture” moneymen who boss us around.

Okay, I’m going to get to the source of all this devastating mediocrity.

“A million dollars isn’t cool. You know what’s cool? Social status.”

Yeah, I know. That sounds ridiculous, no? I’ll explain it.

Some will recognize the quote from The Social Network, in which Justin Timberlake portrays Sean Parker as an ambitious uber-douche who says the above quote, but with “a billion dollars” instead of “social status”. I re-appropriated it, because I’ve wanted for a long time to understand why we as humans are so incompetent at, well, being human, and doing so required me to understand human status hierarchies. So I douche-ified the un-Parker-like quote even further. Wanting to be a billionaire is pretty douchey, but why would one want so much money? It’s social status, the driving ambition of the douchebag (and a lesser ambition, alas, for all of us).

Take unemployment. Why is it that, during a three- to six-month stretch of joblessness, the average person (with men being much more sensitive to this effect) will do less housework and perform more poorly on side projects than when that person has a full-time job? Most jobs don’t add much to a person’s life. A monolithic and inflexible obligation, usually toward ingrates, that by explicit design makes diversification of labor investment almost impossible, is hard to call a good thing for a person. Society has actually had to work at it to make the alternative (joblessness) so embarrassing that it’s worse for the vast majority of people. The social status penalty of not having a job must be so severe that people refuse to tolerate joblessness: one boss fires ’em; they look for another. However, in the long term, this exacerbates the real underlying problem, which is that they’re so job-dependent that they’ve forgotten how to serve others (in trade, and often for personal benefit) in any other context. Anyway, my point here is that the embarrassing nature of joblessness has been made so severe that it’s worse for a typical person’s well-being (and out-of-work accomplishment) than spending 8-10 hours per day in an office.

Our minds and our bodies are constantly taking signals as to our social standing, and reacting in ways we often can’t control. I’ve long believed that, at the least, mild depression emerged as an adaptive response for surviving transient low social status. Of course, the disease depression is something different: a pathology of that mechanism, which might trigger for no reason. I only mean to suggest that the machinery might be there for an evolutionary purpose. That also, to me, explains why exercise is so effective in treating mild depression. It tells the body that the person is of high social status (invited on the hunt) and causes the brain to perk up a little bit.

People often say, “I don’t give a fuck what other people think about me”. Bullshit. If that were true, you’d never say it– almost by definition, you wouldn’t, because it’s something people say to seem badass. Unfortunately, it misses the point. First, it’s dishonest. We’re biologically programmed to care what others think about us. To be ashamed of it is to be ashamed of our own humanity. Second, there’s good badass and bad badass and insane badass. Insane badasses don’t care what others think of them because they suffer frank mental illness that overrides even the most blunt social signals. Bad badasses generally care quite a bit about their own social status; they just don’t have much empathy and therefore only care about others’ opinions when those opinions interfere with them getting what they want. Good badasses, on the other hand, are empathetic, but they are also committed to virtue even in the face of unpopularity. All three types have a claim to not caring (as much as normal people do) what others think, but only one of those three is desirable.

Why do people make such a boast about not caring what others think? That’s because we abstractly admire that sort of emotional independence. In practice, it can go either way whether that’s a good trait. If you really don’t care at all about how your actions affect others, then you’re an asshole. Now, I’m generally on-board with a certain virtuous investment in actions over results, for sure. I also take a certain pride (not always to my benefit) in virtuous actions that lead to socially adverse results– because I am morally and intellectually superior to, at least, the dominant forces in our society (I can’t adequately compare myself either way to “the average person” because I don’t know him, but I am demonstrably superior to those running this world and that’s an obvious fact of my life) and I revel in it. I also still think that if you don’t care at all to pick up signals about how your actions are really affecting the world, then you’re just being a dick. You should care– just a little bit, but not zero– what other people think of you, especially as pertains to your effect on others. If you are helping people and suffering social adversity, you might be virtuous and that adversity might exist because the people who fling it at you are the epitome of vice and parasitism. On the other hand, social adversity might also be a sign that you’re doing things wrong. You should at least listen to the signal. If you understand its source and recognize that source as not worth caring about, then fine. Not listening makes you a jerk, however. 

So… I hope I’ve shot down the “I don’t care what others think” defense. I’m more badass than most people who say this and I care what other people think about me.

Now, I want to go back to “You know what’s cool?” No one can visualize a billion dollars. People with that much wealth never even see the pile of cash, except for Walter White. That billion-dollar net worth is just a linear combination of a bunch of other numbers about them strewn across the world. What they own, by entity and percentage. Who owes them money. To whom they owe money. That is a kind of social status, but a stable and legally recognized one called “ownership”. So there we are. All of economics is predicated on the idea that people want resources and money; and one of the biggest reasons, I would argue, that they want it is psychological: they want the social status. If that seems unduly negative, it shouldn’t be. Social status is the only reason I have a computer to write this post on, or a cup of coffee to drink in the morning. I’m able, because I speak certain natural and social languages and have certain skills (that I acquired by being born into the right country and family, distinguishing myself early in academics, etc.) to get people to pay me for services that others could perform more cheaply (most of those cheap competitors wouldn’t do it as well, but neither would most of the higher-status, better-paid ones). Gift economies don’t scale. We can interact with the market only if we can prove by certificate (e.g. money) that someone thinks we have some status or value (making us worthy of employment or ownership of an asset), and so all of us need some kind of status, even if it’s just a little bit. It’s horrible that the world works that way, and that a person of merit might fail due to extreme lows of social status, but it’s how things work right now.

Now, a billion dollars isn’t cool. Even the disgusting rich douchebags don’t actually sleep (to quote Don Draper) on “a bed made of money”. Money is paper that would disgust us (because of all the places it has been) were it not for a certain social value. Rather, it’s the social elevation that drives people. “Money” is not the root of all evil; social status is. That’s what most people, and especially douchebags, find “cool”. Green cotton paper, even at the 10-ton level a billion dollars would require, has little to do with it.

In fact, we can tie social status to all of the seven deadly sins:

  • wrath: people use threatening emotions, postures, and violent actions to defend social status. 
  • envy: people covet social status and delight in the destruction of higher-status individuals.
  • sloth: unconditional “passive” social status (i.e. that doesn’t require work) is always preferable over kinds that are contingent on productive activity, which one might lose the ability to perform at an acceptable exchange rate (health problems, disinterest, superior competition).
  • lust: one of the primary reasons for high-status people to seek even higher levels of status (to the detriment of social and mental health) is the desire to indulge in sexual perversion.
  • greed: this one’s obvious. Most of the assets that inspire greed confer social status. People are rarely greedy toward things that don’t.
  • pride: also obvious. People create an outsized self-image out of a desire for deserved high status, then expect the world to conform to their grandiose self-perceptions.
  • gluttony: defined literally, this is an odd-man-out in modern times because obesity lowers one’s social status, but if we extend the metaphor to material overindulgence, we see it as a form of posturing. Conspicuous consumption enables a person to prove high social status, thus maintaining it.

Of course, all of those sins are also sources of Suck– yours and mine. They blind us, make us do short-sighted and stupid things, and generally leave us bereft of moral courage, curiosity, creativity, and virtue. It turns out that social status is a driving force behind what makes humans horrible. The concern for social status seems, in many people, to be limitless and only more productive of vice and evil as they gain more of it. Satiation in most commodities sets in, and people stop being horrible. It’s rare to see two people fight over a piece of bread in an upscale restaurant, because average Americans are rich enough not to turn to vice over food. With social status, that’s not the case for many people. They don’t reach satiation and revert to virtue, but get worse as they climb and (a) satiation proves elusive, while (b) the competition for status becomes fiercer as they climb the ladder. They go beyond Suck and into outright Vice. Yeshua of Nazareth was right on: you cannot serve God and Mammon.

But back to Suck…

Vice is an interesting topic in its own right, but I’m here to talk about Suck. You and I both Suck. I don’t think I’m a vicious or bad person, and I doubt most of my readers are. However, we do things that are counterproductive. We avoid learning new technologies because “I might not get any good at it, and just embarrass myself.” We might do the right thing despite threat of social unpopularity, but it’s really hard and we spend so many clock cycles convincing ourselves that we’re doing the right thing that it takes the edge off of us. It’s almost impossible to excel at anything in this world. Why? Well, excellence is risky.

Something I read on Hacker News really impressed me. It explained a lot. I think it resonates with all the top-10% programmers out there who are constantly pushing themselves (often despite economic incentives, because there is a point where being a better programmer hurts your job security) to be better. Here it is (link):

No. Burnout is caused when you repeatedly make large amounts of sacrifice and or effort into high-risk problems that fail. It’s the result of a negative prediction error in the nucleus accumbens. You effectively condition your brain to associate work with failure.

Now, on the surface this is true. Failure is extremely demoralizing. However, as I think about it, it’s not project failure itself that brings us down. It’s annoying. It’s a learning experience that doesn’t go the way we hoped. In the discovery process, it usually means we discovered a way not to do things (which has lower information-theoretic value than discovering a way to do them). However, I don’t think failure itself is the major problem. I think people who are used to doing hard things can learn to take it in stride.

I am constantly trying hard things and attacking high-risk problems. I took difficult proof-based math exams in high school and college where very few people could solve even half of the problems in the allotted time. I’ve tried a great many things with sub-50-percent chances of success, and had some hits… and a lot of misses. Failure is difficult. It’s a struggle. It’s already hard without the world conspiring to make it harder. But it’s the social status damage that comes out of failure that really stops a person. That’s the force that pushes people toward self-protecting careerist mediocrity as they get older. Yes, it’s learned helplessness, but it’s not mere project failure that induces the neurological penalty. A more supportive, R&D-like, environment (as opposed to the mean-spirited caprice of contemporary private-sector social climbing) could mitigate that. (I worked at a think tank once where the unofficial motto was “bad ideas are good; good ideas are great” and that supportiveness motivated people to do some outright excellent work.) Failure isn’t what ruins people. It’s the dogshit heaped on a person by society after a project failure that has that effect. After a while, people get tired of the (transient, but extremely stressful) low social status that follows a failed project, and give up on high-risk excellence.

Going forward

Awareness of Suck and its causes is the first step toward overcoming it. Denying that one experiences it personally is not generally helpful, because almost everyone Sucks to some degree, and there are powerful neurological and social reasons for that. Admitting vulnerability to it is like admitting physical inferiority to polar bears; no one should be ashamed of it, it’s just how nature made us.

Why are people so mediocre, both in moral and creative terms? We now have the tools to answer that question. We know where Suck comes from. And we can work, a little bit each day, on making ourselves not Suck. 

More important, however, is finding a way not to induce Suck in other people. I’m going to pull something else from Hacker News that I like a lot, this time from the Hacker School’s social rules. I’m not going to post all of them; let me just give a flavor:

The first of these rules is no feigning surprise. This means you shouldn’t act surprised when someone says he doesn’t know something. This applies to both technical things (“What?! I can’t believe you don’t know what the stack is!”) and non-technical things (“You don’t know who RMS is?!”).

I’ll admit that I’m guilty of this, too. My eyes glaze over when another programmer mentions Visitor or Factory design patterns and doesn’t seem to be trolling me. Maybe I’m slightly better, in that Visitor usage is a positive symptom of idiocy while not knowing something is a negative symptom, and we all have an infinitude of negative idiocy-signals (because there are infinitely many things we don’t know and, arguably, should). Or maybe not. Maybe I should stop being a dick and assume (despite what Bayes would say) that the programmer who says “Visitor pattern” with a straight face is a talented person who just never learned better.

Other behaviors explicitly discouraged are cosmetic correction (over-cutting someone’s essentially correct statement with an irrelevantly more correct one) and backseat driving. This is good. Hacker School is making an admirable attempt to clear out the social processes that sometimes make intermediate-level programmers embarrassed by the gaps in their knowledge, and thus risk-averse. That’s a great thing, because after a while, people who are made to feel insecure about gaps in knowledge tend to fly the other way, and that produces the “that topic isn’t important” anti-intellectualism.

Hacker School’s getting it right. If people aren’t afraid for their own social status, they’re more inclined to take risks, grow faster, and excel. This is an ideology that gets a lot of mouth-honor, but few people follow it.

Even VC-istan claims to “embrace” failure, but “fail fast” is often an excuse for impulsive firing (without severance, typically) and “lean startup” often means “we want you to work 90 hours a week and be your own assistant instead of working 60 and hiring one”. The reality is that VC-istan’s collusive reputation economy allows it to be anything but tolerant of business failure, even the good-faith kind.

The only work culture in which project failure is tolerated is the R&D one. Most companies these days have mean-spirited, fast-firing cultures where a project failure results in someone getting fired, demoted, or punished for it. Sometimes there’s no one at fault and someone just gets randomly hit. Or, when there is someone at fault, it might not be that person who suffers (it usually isn’t, as bad managers are great at shifting blame). The result of the mean-spirited, fast-firing, performance-reviews-with-teeth structure of the modern corporate workplace is that competent people rarely invest themselves in efforts that might fail, even if successes will be enormously beneficial. Instead, they strive to put themselves on highly visible projects, but those with enough momentum that they are extremely unlikely to fail. The result of this is that project genesis has almost no ambition in it, and most of the best people aren’t coming up with ideas anyway, but looking to draft on someone else’s. Of course, by the time a project shows sure visible victory, so many people are aware of it that the competition to be “in on it” is cutthroat. (Closed-allocation companies aren’t about doing work, but about holding positions and being “on” important projects.)

If you have the open-allocation, high-autonomy R&D culture where good-faith failure is treated as a learning experience and people can move on gracefully, you get a sharing of knowledge because people are no longer pressed to hide failures. If you have anything else in a white-collar environment, however, you’re likely to end up with a blame-shifting culture. That’s where Suck really starts to assert itself, and take control.