The U.S. upper class: Soviet blatnoys in capitalist drag.

One thing quickly learned when studying tyranny (and lesser, more gradual failures of states and societies, such as those observed in the contemporary United States) is that the ideological leanings of tyrants are largely superficial. Those are stances taken to win popular support, not sincere moral positions. Beneath the veneer, tyrants are essentially the same, whether fascist, communist, religious, or centrist in nature. Supposedly “right-wing” fascists and Nazis would readily deploy “socialist” innovations such as large public works projects and social welfare programs if it kept society stable in a way they preferred, while the supposedly “communist” elites in the Soviet Union and China were self-protecting, deeply anti-populist, and brutal– not egalitarian or sincerely socialist in the least. The U.S. upper class is a different beast from these and, thus far, less malevolent than the communist or fascist elites (although if it goes unchecked, this will change). It probably has the most in common with the French aristocracy of the late 18th century: slightly right-of-center, half-hearted in its authoritarianism, but deeply negligent and self-indulgent. For a more recent comparison, I’m going to point out an obvious and increasing similarity between the “boardroom elite” (individuals who receive high positions in established corporations despite no evidence of high talent or hard work) and an unlikely companion: the elite of the Soviet Union.

Consider the Soviet Union. Did political and economic elites disappear when “business” was made illegal? No, not at all. Did the failings of large human organizations suddenly have less of a pernicious effect on human life? No; the opposite occurred. What was outlawed, effectively, was not the corporation (corporate power existed in the government) but small-scale entrepreneurship– a necessary social function. Certainly, elitism and favoritism didn’t go away. Instead, money (which was subject to tight controls) faded in importance in favor of blat, an intangible social commodity describing social connection as well as the peddling of influence and favors. With the money economy hamstrung by capitalism’s illegality, blat became a medium of exchange and a mechanism of bribery. People who were successful at accumulating and using social resources were called blatnoys. The blatnoy elite drove their society into corruption and, ultimately, failure. But… that’s irrelevant to American capitalism, right?

Well, no. Sadly, corporate capitalism is not run by “entrepreneurs” in any sense of the word. Being an entrepreneur is about putting capital at risk to achieve a profit. Someone who gets into an elite college because a Senator owes his parents a favor, spends four years in investment banking getting the best projects because of family contacts, gets into a top business school because his uncle knows disgusting secrets about the dean of admissions, and then is hired into a high position in a smooth-running corporation or private equity firm, is not an entrepreneur. Anything but. That’s a glorified private-sector bureaucrat at best and, at worst, a brazen, parasitic trader of illicit social resources.

There are almost no entrepreneurs in the American upper class. This claim may sound bizarre, but first we must define terms– namely, “upper class”. Rich people are not automatically upper class. Steve Jobs was a billionaire but never entered it; he remained middle-class (in social position, not wealth) his entire life. His children, if they want to enter its lower tier, have a shot. Bill Gates is lower-upper class at best, and has worked very hard to get there. Money alone won’t buy it, and entrepreneurship is (by the standards of the upper class) the least respectable way to acquire wealth. Upper class is about social connections, not wealth or income. It’s important to note that being in the upper class does not require a high income or net worth; it does, however, require the ability to secure a position of high income reliably, because the upper class lifestyle requires (at a minimum) $300,000 after tax, per person, per year.

The wealth of the upper class follows from social connection, and not the other way around. Americans frequently make the mistake of believing (especially when misled on issues related to taxation and social justice) that members of the upper class who earn seven- and eight-digit salaries are scaled-up versions of the $400,000-per-year, upper-middle-class neurosurgeon who has been working intensely since age 4. That’s not the case. The hard-working neurosurgeon and the well-connected parasite are diametric opposites, in fact. They have nothing in common and could not stand to be in the same room together; their values are at odds. The upper class views hard work as risky and therefore a bit undignified. It perpetuates itself because there is a huge amount of excess wealth that has congealed at the apex of society, and it’s relatively easy to exchange money and blat on an informal but immensely pernicious market.

Consider the fine art of politician bribery. The cash-for-votes scenario, as depicted in the movies, is actually very rare. The Bush family did have their “100k club” when campaign contributions were limited to $1,000 per person, but entering that set required arranging for 100 people to donate the maximum amount. Social effort was required to curry favor, not merely a suitcase full of cash. Moreover, walking into even the most corrupt politician’s office today and offering $100,000 in cash in exchange for a vote would be met with a nasty reception. Most scumbags don’t realize that they’re scumbags, and to make a bribe as overt as that is to call a politician a scumbag. Instead, politicians must be bribed in subtler ways. Want to own a politician? Throw a party every year in Aspen. Invite up-and-coming journalists just dying to get “sources”. Then invite a few private-equity partners so the politician has a million-dollar “consulting” sinecure waiting if the voters wise up and fire his pasty ass. Invite deans of admissions from elite colleges if he has school-age children. This is an effective strategy for owning (eventually) nearly all of America’s decision makers, but it’s hard to pull off if you don’t already own any of them. What I’ve described is the process of earning interest on blat and, if it’s done correctly and without scruples, the accrual can occur rapidly– for people with enough blat to play.

Why is such “blat bribery” so common? It makes sense in the context of the mediocrity of American society. Despite the image of upper management in large corporations as “entrepreneurial”, these executives are not entrepreneurs at all. They’re not the excellent, the daring, the smartest, or the driven. They’re successful social climbers; that’s all. The dismal and probably terminal mediocrity of American society is a direct result of the fact that (outside of some technological sectors) it is incapable of choosing leaders on merit, so decisions of leadership often come down to who holds the most blat. Those who thrive in corporate so-called capitalism are not entrepreneurs but the “beetle-like” men who thrived in the dystopia described in George Orwell’s 1984.

Speaking of this, what is corporate “capitalism”? It’s neither capitalism nor socialism, but a clever mechanism employed by a parasitic, socially-closed but internally-connected elite to provide the worst of both systems (the fall-flat risk and pain of capitalism, the mediocrity and procedural retardation of socialism) while providing the best (the enormous rewards of capitalism, the cushy safety of socialism) of both for themselves.

These well-fed, lily-livered, intellectually mediocre blatnoys aren’t capitalists or socialists. They’re certainly not entrepreneurs. Why, then, do they adopt the language and image of alpha-male capitalist caricatures more brazen than even Ayn Rand would write? It’s because entrepreneurship is a middle-class virtue. The middle class of the United States (not for bad reasons) still has a lot of faith in capitalism. The upper class knows that it has to seem deserving of its parasitic hyperconsumption, and to present the image of success as the populace at large perceives it. Corporate boardrooms provide the trappings required for this. If the middle class were to suddenly swing toward communism, these boardroom blatnoys would be wearing red almost immediately.

Sadly, when one views the social and economic elite of the United States, one sees blatnoys quite clearly if one knows where to look for them. Fascists, communists, and the elites of corporate capitalism may have different stated ideologies, but (just as Stephen King suggested that The Stand’s villain, Randall Flagg, could accurately represent any tyrant) they’re all basically the same guy.

Criminal Injustice: The Bully Fallacy

As a society, we get criminal justice wrong. We have an enormous number of people in U.S. prisons, often for crimes (such as nonviolent drug offenses) that don’t merit long-term imprisonment at all. Recidivism is shockingly high as well. On the face of it, it seems obvious that imprisonment shouldn’t work. Imprisonment is a very negative experience, and a felony conviction has long-term consequences for people who are already economically marginal. The punishment is rarely appropriately matched to the crime, as seen in the (racially charged) discrepancies in severity of punishment for possession of crack versus powder cocaine. What’s going on? Why are we doing this? Why are the punishments inflicted on those who fail in society often so severe?

I’ll ignore the more nefarious but low-frequency ills behind our heavy-handed justice system, such as racism and disproportionate fear. Instead, I want to focus on a more fundamental question. Why do average people, with no ill intentions, believe that negative experiences are the best medicine for criminals, despite the overwhelming amount of evidence that most people behave worst after negative experiences? I believe that there is a simple reason for this. The model that most people have for the criminal is one we’ve seen over and over: The Bully.

A topic of debate in the psychological community is whether bullies suffer from low or high self-esteem. Are they vicious because they’re miserable, or because they’re intensely arrogant to the point of psychopathy? The answer is both: there are low-self-esteem bullies and high-self-esteem bullies, and they have somewhat different profiles. Which is more common? To answer this, it’s important to make a distinction. With physical bullies, usually boys who inflict pain on people because they’ve had it done to themselves, I’d readily believe that low self-esteem is more common. Most physical bullies are exposed to physical violence either by a bigger bully or by an abusive parent. Also, physical violence is one of the most self-damaging and risky forms of bullying there is. Choosing the wrong target can put the bully in the hospital, and the consequences of being caught are severe. Most physical bullies are, on account of their coarse and risky means of expression, in the social bottom-20% of the class of bullies. On the whole, and especially when one includes adults in the set, most bullies are social bullies. Social bullies include “mean girls”, office politickers, those who commit sexual harassment, and gossips who use the threat of social exclusion to get their way. Social bullies may occasionally use threats of physical violence, usually by proxy (e.g. a threat of attack by a sibling, romantic partner, or group) but their threats generally involve the deployment of social resources to inflict humiliation or adversity on other people. In the adult world, almost all of the big-ticket bullies are social bullies.

Physical bullies are split between low- and high-self-esteem bullies. Social bullies, the only kind that most people meet in adult life, are almost always high-self-esteem bullies, and often get quite far before they are exposed and brought down. Some are earning millions of dollars per year, as successful contenders in corporate competition. Low-self-esteem bullies tend to be pitied by those who understand them, which is why most of us don’t have any desire to hunt down the low-self-esteem bullies who bothered us as children. It’s high-self-esteem bullies that gall people the most. High-self-esteem bullies never show remorse, often are excellent at concealing the damage they do, even to the point of deflecting the consequences of their actions onto the bullied instead of onto themselves, and they generally become more effective as they get older. It’s easy to detest them; it would be unusual not to.

How is the high self-esteem bully relevant to criminal justice? At risk of being harsh, I’ll assert what most people feel regarding criminals in general, because for high-self-esteem bullies it’s actually true: the best medicine for a high self-esteem bully is an intensely negative and humiliating experience, one that associates undesirable and harmful behaviors with negative outcomes. This makes high-self-esteem bullies different from the rest of humanity. They are about 3 percent of the population, and they are improved by negative, humiliating experiences. The other 97 percent are, instead, made worse (more erratic, less capable of socially desirable behavior) by negative experiences.

The most arrogant people respond only to direct punishment, because nothing else, whether reward or appeal, can matter to them when it comes from people who “don’t matter” in their minds. Rehabilitation is not an option, because such people would rather create the appearance of improvement (and become better at getting away with negative actions) than actually improve themselves. The only way to “matter” to such a person is to defeat him. If the high-self-esteem bully’s negative experiences are paralyzing, all the better.

Before going further, it’s important to say that I’m not advocating a massive release of extreme punishment on the bullies of the world. I’m not saying we should make a concerted effort to punish them all so severely as to paralyze them. There are a few problems with that. First, it’s extremely difficult to distinguish, on an individual basis, a high-self-esteem bully from a low-self-esteem one, and inflicting severe harm on the latter kind will make him worse. Humiliating a high-self-esteem bully punctures his narcissism and hamstrings him, but doing so to a low-self-esteem bully accelerates his self-destructive addiction to pain (for self and others) and leads to erratic, more dangerous behaviors. What comes to mind is the behavior of Carl in Fargo: he begins the film as a “nice guy” criminal but, after being savagely beaten by Shep Proudfoot, he becomes capable of murder. In practice, it’s important to know which kind of bully one is dealing with before deciding whether the best response is rehabilitation (for the low-self-esteem bully) or humiliation (for the high-self-esteem bully). Second, if bullying were associated with extreme punishments, the people who’d tend to be attracted to positions with the power to affix the “bully” label would be, in reality, the worst bullies (i.e. a witch hunt). That high-self-esteem bullies are (unlike most people) improved by negative experience is a fact that I believe few doubt, but “correcting” this class of people at scale is a very hard problem, and doing so severely involves risk of morally unacceptable collateral damage.

How does this involve our criminal justice policy? Ask an average adult to name the 3 people he detests most among those he personally knows, and it’s very likely that all will be high self-esteem bullies, usually (because physical violence is rare among adults) of the social variety. This creates a template to which “the criminal” is matched. We know, as humans, what should be done to high-self-esteem bullies: separation from their social resources in an extremely humiliating way. Ten years of extremely limited freedom and serious financial consequences, followed by a lifetime of difficulty securing employment and social acceptance. For the office politicker or white-collar criminal, that works and is exactly the right thing. For the small-time drug offender or petty thief? Not so much. It’s the wrong thing.

Most caught criminals are not high self-esteem bullies. They’re drug addicts, financially desperate people, sufferers of severe mental illnesses, and sometimes people who were just very unlucky. To the extent that there are bullies in prison, they’re mostly the low-self-esteem kind– the underclass of the bullying world, because they got caught, if for no other reason. Inflicting negative experiences and humiliation on such people does not improve them. It makes them more desperate, more miserable, and more likely to commit crimes in the future.

I’ve discussed, before, why Americans so readily support the interests of the extremely wealthy. Erroneously, they believe the truly rich ($20 million net worth and up) to be scaled-up versions of the most successful members of the middle class. They conflate the $400,000-per-year neurosurgeon who has been working hard since she was 5 with the parasite who earns $3 million per year “consulting” with a private equity firm on account of his membership in a socially-closed network of highly-consumptive (and socially negative) individuals. Conservatives mistake the rich for the highly productive because, within the middle class, this correlation of economic fortune and productivity makes some sense, while it doesn’t apply at all to society’s extremes. The same error is at work in the draconian approach this country takes to criminal justice. Americans project the face of The Bully onto the criminal, assuming society’s worst actors and most dangerous failures to be scaled-up versions of the worst bullies they’ve dealt with. They’re wrong. The woman who steals $350 of food from the grocery store out of desperation is not like the jerk who stole kids’ lunch money for kicks, and the man who kills someone believing God is telling him to do so (this man will probably require lifetime separation from society, for non-punitive reasons of public safety and mental-health care) is not a scaled-up version of the playground bully.

In the U.S., the current approach isn’t working, of course, unless its purpose is to “produce” more prisoners (“repeat customers”). Few people are improved by prison, and far fewer are helped by the extreme difficulty that a felony conviction creates in the post-incarceration job search. We’ve got to stop projecting the face of The Bully onto criminals– especially nonviolent drug offenders and mentally ill people. Because right now, as far as I can tell, we are The Bully. And reviewing the conservative politics of this country’s past three decades, along with its execrable foreign policy, I think there’s more truth in that claim than most people want to admit.

What made Steve Jobs rare?

Steve Jobs was one of our generation’s best innovators, if not the best. What he represented was singular and rare: a person in charge of a major company who actually had a strong and socially positive vision. Corporate executives are expected to have some quantity of vision and foresight, but so few across the Fortune 1000 actually have it that it is genuinely surprising to learn of one who does. Most corporate executives are mediocre, if not negative, in their contribution, meddling in the work of those who are actually getting things done with constant churn and (in the negative sense) disruption. Steve Jobs, on the other hand, was integral to the success of Apple. As Apple’s posthumous tribute to him said, only he could have built that company. Unlike the typical businessman, Jobs was not especially charismatic. He was an effective salesman only because the quality of what he sold was so high; he could sell because he believed in his products. What he was, was creative, courageous, disciplined, and effective. He had a sharp aesthetic sense, but also the clarity of vision to ship a real, working product.

Why do people like him appear only a few times in a generation? It’s not that there is a lack of talent. Not to denigrate Steve Jobs, because his talents are in any case uncommon, but I’d bet heavily (on statistical grounds) that there are at least a few hundred, if not a few thousand, people like him out there, hacking away at long-shot startups, cloistered in academia, or possibly toiling in corporate obscurity. The issue is that people with his level of talent almost never succeed in human organizations such as large corporations. A keen aesthetic sense is a severe liability in most corporate jobs, as the corporate style of “professionalism” requires tolerance of ugliness rather than the pursuit of aesthetic integrity at all costs. Creative talent also becomes a negative in environments that expect years of unfulfilling dues-paying before one gets creative control of anything. People like Jobs can’t stand to waste time and so, in corporate America, they rarely make it. When they are set to boring work, the unused parts of their brains scream at them, and that makes it impossible for them to climb the ladder.

That human organizations are poor at selecting leaders is well known, and startups are an often-cited antidote to this problem. The issue there is that a herd mentality still exists in venture capital and the startup press. The herd almost never likes people of high talent, and for every independent thinker in a true king-making role, there are twenty overfed bureaucrats making decisions based on “track record” and similarity to existing successes– a fact not in favor of a 22-year-old Steve Jobs in 2011. To be honest, I don’t think such a person would have a chance in hell of acquiring venture capital funding on his own, unless born into the necessary connections. Elite incubators such as Y Combinator are solving this problem, and quite well, by connecting young talent with the necessary resources and connections. Are they doing enough? Will they succeed in changing the dynamics of startup funding and traction? I don’t have the foresight to answer those questions yet; time will tell.

I think a lot of people around my age (28) have spent some time thinking: How can I be more like Steve Jobs? That’s the wrong question to be asking. The perfect storm that enables even a moderately creative person (much less an out-and-out disruptive innovator like Jobs) to overcome the enormous obstacles human organizations throw at such people is an event that occurs less often than a $100-million lottery windfall. The right question is closer to this: what can I do that makes people with immense creative talents, like Steve Jobs, more likely to succeed? This, I believe, is a more reliable path to success and an indirect sort of greatness. It’s great to have dreams and work hard to achieve them, but it’s equally noble (and more often successful) to help others with great ideas realize theirs. Right now, people with great ideas almost always linger in obscurity, with powerful people and institutions actively working to keep them there. That has been the case for nearly all of human history, but technology can change it.

How? I’m not sure. I spend a lot of time thinking about this issue. How can a world with so much creative talent in it (nearly 7 billion living people, at least a billion of whom now have the minimum resources to express creativity, and at least a few million of whom have the talent) be achieving so little? Why is there so much ugliness, inertia and failure? How do we empower people to change it? These questions are too big for me to answer, at least for now. I’m going to focus on one thing: the process of turning talent into greatness, the former being abundant and the latter being desperately uncommon. How do we get people with high talent to such a degree of skill that they can, as individual contributors, contribute to society substantially– so much so as to overcome the general mediocrity of human institutions?

This is an educational problem, but not one solved by traditional schooling. Greatness doesn’t come from completing assignments or acing tests, obviously. It requires personal initiative, a will to innovate, and courage. Then it requires recognition; people who show the desire to achieve great things should be given the resources to try. It doesn’t seem like it, but I’ve exposed an extremely difficult problem, one that I don’t know how to solve. Educational processes that encourage creativity make it extremely difficult to measure performance, and therefore fail to gain the social trust necessary to propel their pupils into positions of creative control in major organizations. On the other hand, those educational processes in which it’s easy to measure performance generally favor conformity and the ability to “game the system” over useful creative talents or skills. Moreover, there’s a variety of grade inflation that exists far beyond academia whose effect is socially pernicious.

Grade inflation seems like a “feel good” consequence of people being “too nice”, but from a zero-sum economic perspective, it actually reflects a society that stigmatizes failure heavily. If an average performance is rated at 2 (a C grade) on a 0-to-4 scale, then excellence in one course (A, 4.0) cancels out failure (F, 0.0) in another. On the other hand, if the average is 3.2 out of 4, then it takes four excellent grades to cancel out one failure. This makes failing a course substantially more costly. This reflects risk-aversion on the part of the educational system– the student who puts forth a mediocre performance in three courses is rated above one who excels in two but fails the third– and engenders risk-averse behavior on the part of students. That said, I’m not especially concerned with this problem in the educational system, which is far more forgiving of good-faith failure than most human organizations. A failed course can damage one’s average but rarely results in expulsion. I’m more worried about how this mentality plays out in real life.
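
To make the arithmetic concrete, here is a tiny Haskell sketch (the function name is mine, purely for illustration):

```haskell
-- How many A's does it take to offset one F, relative to a baseline grade on a
-- 0-to-4 scale? An A is worth (4.0 - baseline) above the baseline; an F costs
-- the full baseline. The ratio is the number of A's needed to break even.
asNeededToOffsetOneF :: Double -> Double
asNeededToOffsetOneF baseline = baseline / (4.0 - baseline)

-- asNeededToOffsetOneF 2.0 is 1: at a C average, one A cancels one F.
-- asNeededToOffsetOneF 3.2 is about 4: at an inflated average, four A's
-- are needed to cancel a single F.
```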

This style of risk-aversion is reflected in human organizations such as corporations. An employee who has four great years followed by a bad one is likely to be fired for the damaged reputation acquired in that failed fifth year. People are measured according to their worst-case performance (reliability) rather than their best-case capability (talent). This is a problem for a person like Steve Jobs, obviously capable of hitting the highest of the high notes, but liable to show negative contribution (pissing people off) at his worst. It’s also a more general problem that leaves large organizations unable to tap their best people. Why? Those best people tend overwhelmingly to be “high-variance” people– those whose job performance becomes weak if they lose motivation, and who become so passionate about the quality of work that they invariably end up in personality conflicts. Low-variance individuals– generally lacking creativity but capable of sustaining a middling performance for decades, thereby showcasing “professionalism”– tend to win out in their stead. The creatively desolate world we observe in the leadership of major human organizations is a direct result of this.

In some cases, measuring performance at a person’s bottom rather than top makes sense. As Lord Acton said, “Judge talent at its best and character at its worst.” People who behave in a way that is outright unethical have proven themselves not worthy of trust, regardless of their best-case capabilities. On the other hand, a person like Steve Jobs fails in a mainstream corporate environment not because he is unethical but because he’s merely difficult. That is the error. In general, human organizations develop a toddler-like, black-and-white view in evaluating their members and thereby lose the ability to distinguish those who are outright criminal (and should be distrusted, terminated from employment, and possibly punished regardless of their talents) from those who have difficult personalities or who suffer a good-faith failure (a risk one must be able to afford if one wants to achieve anything).

There’s a solution to that problem, but a delicate one. In technology, programmers have taken to the open-source community as a means of building an organization-independent career. This reflects what academia has had for a long time: active encouragement for its members (graduate students and post-docs especially) to build reputations outside of their institutions. This allows people to demonstrate their best-case capabilities to the world at large. Unfortunately, there is an extremely touchy political matter here. Corporations and managers within them would generally prefer that subordinates not dedicate energy to the cultivation of an external reputation, a process that (a) distracts them from their “real work” and dues-paying, and (b) makes them more expensive to retain. Many large companies forbid employees to take consulting work or publish papers for this precise reason.

Now that I’ve laid out a few problems and ideas, I’ve noticed that both time (9:08 am, and I haven’t yet left for work) and word count (1683 and rising) are encouraging me to finish up. For a closing thought, I’ll admit that I don’t have many answers to the “big picture” problems here. I don’t know what it will take to fix the problems of large human organizations that lead to their pervasive mediocrity. I don’t even know if it can be done. Where to focus? Something smaller and tractable, something grass-roots.

Brilliant people, like Steve Jobs, aren’t born fully-fledged like Venus from the sea. Jobs became what he was due to a thousand influences– his friendship with Steve Wozniak, his early successes, later failures, and yet-later successes. That much is obvious. He was always talented but he became great on account of the opportunities he had– the successes and failures that refined his aesthetic sense until (in his later adulthood) it was a finished product. I also believe that there are thousands of people much like him in existence, their talents unexpressed. I don’t think we need to work hard to “find” them. These people are loud and obnoxious enough that this is an easy task. What we need to do, as much as we can, is enable such people to overcome the hurdles imposed by large human organizations more interested in protecting entrenched mediocrity than in pursuing excellence. We need to fight that battle as much as we can. And yet, we must accept that we aren’t likely to get very far. There’s more we need to do.

We need to rethink education. I’m not just talking about schooling. Instead, I’m talking about technology and business and culture. We need to remove from ourselves the notion that education is a “product” to be “consumed” by those rendered economically useless by their youth and inexperience. Education needs to be an ongoing process. It needs to pervade everything we do. Instead of merely writing code, managing people, or running businesses, we need to focus on teaching people what we can, and on learning from them reciprocally. We need to reinvent corporate and organizational cultures outright so that talent is naturally turned into greatness, and so that excellence isn’t a weird, socially counterproductive personality trait but something that all strive toward.

Half of 56 is 28

Steve Jobs just died. His Stanford commencement speech is already legendary, but I’ll add one more link to it. Call me a “fanboy” or a crowd-follower for liking it; it really is worth listening to.

This is a poignant reminder to all of us that we may have less time than we think we do. Society, most prominently in the early rounds of the career game, expects young people to deny their mortality– to delay creative expression and fulfilling work to a distant, prosperous future that might never come. Jobs never denied his mortality. He did what he loved. This is the only reasonable approach because, in the end, mortality refuses to be denied.

Taxation without representation: why code maintenance fails– and how to fix it.

The software industry, as a whole, has fallen into a dismal state. Software development is supposed to be a creative endeavor, but often it’s not. An overwhelming portion (more than 80% on mature codebases) of software engineering time is spent on software “maintenance” instead of new invention, a morale-killer that contributes to the high burnout rate in software engineering. In spite of these sacrifices on the part of developers, code quality seems nonetheless to decline inexorably, to the point where many software projects become effectively unfixable after a certain amount of time. Not only is maintenance painful, but it doesn’t seem to be working very well. What causes this? How does it happen? There are many factors, but I think the biggest problem the software industry faces is organizational: because maintenance responsibilities are allocated improperly, the work is done poorly and the legacy/maintenance load becomes enormous.

Before reading further, I ask the reader to peruse Jeff Atwood’s excellent blog post, which serves as a preamble to what I’m about to write. In my mind, the most important point he makes is this one: We should probably have our best developers doing software maintenance, not whoever draws the shortest straw.

Code maintenance, as usually practiced, is dull and lacks creativity. It’s also usually a career tarpit, because new invention is generally superior (from a developer’s perspective) in terms of visibility, education and advancement. Maintenance work rarely delivers on any of these. Maintaining one’s own code is a crucial educational experience, in which the developer sees first-hand the consequences of her architectural decisions. Maintaining the monstrosities created by others? It’s not much of a learning experience after the first time it’s done, and it’s rarely visible either. This incentive structure is perverse, because a working, healthy system that fulfills known, existing needs (i.e. the outcome of a successful maintenance project) is of more value than a new one that may or may not prove to be useful.

Maintenance is extremely important work; it’s the upkeep of one of a software company’s most important assets. Unfortunately, certain systemic aspects of organizational cultures make it unlikely to be rewarded in accord with its importance. The consequence is that maintenance work tends to “roll downhill” until it reaches junior programmers who lack the leverage to work on anything else. The result of this is that maintenance projects are usually done by people who don’t have the organizational knowledge, level of skill, or, most importantly, the political pull to do the job properly.

Inept or even average-case software maintenance fails to improve the general health of the code. A few bugs may be fixed or reactive features added, but the overall health of the software tends to decline. Methods become longer, dependencies are added, junk parameters are added to existing functions, and the architectural intentions behind the original system are buried under thousands of hacks. Code maintenance as practiced tends to beget an even greater maintenance burden in the future.

What needs to change? There’s a simple answer. Give maintainers power. The reason the vast majority of software maintenance projects fail to improve code health at all is that the developers in a maintenance role (usually junior programmers with no clout) lack the political pull to do their jobs properly. Telling a more senior developer, who was “promoted away” from maintenance duties years ago, to drop everything and document prior work is generally not an option for the maintenance programmer. Nor is removing ill-advised functionality from interfaces (someone more senior is guaranteed to “need” that function) or declaring a project in need of a rewrite. This is the sort of clout that a maintainer needs in order to have any shot whatsoever at succeeding.

There’s a flip side of this. If code maintenance work implies power, it absolutely shouldn’t be delegated to junior developers who can’t yet be trusted with that kind of power.

Giving software maintainers power will remove most of the stigma associated with such projects, since they are hated largely because of the powerlessness they entail. It’s taxation without representation. Often, the maintainer’s job is specified in uninspiring and untenable terms like, “Make it work exactly like the old system, but faster”. Bug-for-bug replications, as well as faithful reproductions of any software without a spec, are doomed to failure. Improving code health necessitates imposing change on client developers. Rarely are such costs considered tolerable for the sake of “mere” upkeep, and maintenance projects usually, for that reason, fail to provide lasting improvement. The low rate of success observed on such projects makes them undesirable, and therefore they are avoided by the most skilled developers.

On the other hand, if maintenance responsibilities were coupled with sufficient power to restore the health of the codebase, such projects could become attractive enough that those with skill and seniority would want to do them. People who work on a daily basis with software generally develop a deep interest in the system’s health, and this alone can be motivation to commit time to its upkeep. Software maintenance has the reputation of grunt work, but it really isn’t intrinsically unpleasant for all developers; there are people who would enjoy such projects immensely if they were properly structured.

On structure, it’s critical that a company of any size split the “software engineer” job description (which currently covers creative new invention as well as maintenance) into two distinct roles. One (I’ll call it the “software architect” track) exists for the creative engineers who tend to fail at maintenance efforts because of their inability to tolerate ugliness. The other (I’ll call it the “maintenance engineer” track) exists for the “Sherlock Holmes” sort who enjoy deciphering ineptly- or even hostilely-written code. Since qualified maintenance engineers are uncommon, and since most developers find that work undesirable, maintenance engineers should be paid highly. I would argue that a 30% premium is not unreasonable. Moreover, they must be given social status and clout, since their job is to be pains in the asses of developers who are potentially more senior than they are. If maintenance projects are structured and allocated in this way, it’s very unlikely that they will dominate engineering time as they do now.

One thing I would argue against, however, is any suggestion that the maintenance engineer’s existence should absolve developers of the responsibility for maintaining their own code. It doesn’t. Maintenance engineers may be given a supervisory role in code and design review, but maintenance of one’s own work should remain each developer’s own responsibility; otherwise, “moral hazard” kicks in. I will also note that it is a terrible practice for developers to be “promoted away” from maintenance responsibilities, those being delegated to less important folk, as a reward for a successful launch. If the launch’s success came at the expense of enormous “technical debt” (the interest on which is usurious) then a maintenance time-bomb has been built.

I’ve addressed the social changes required in order to allow successful software maintenance. What will this entail in terms of project structure? In-place maintenance of defective systems– any software module that requires full-time maintenance effort fits the bill for “defective”– will become rarer, with outright replacement more common. That’s a good thing, because software either enervates or empowers, and the former kind can rarely be upgraded to the latter through incremental changes. Note here that I’m advocating replacement, not “complete rewrites”. In my opinion, total rewrites (unless specific efforts are made to avoid prior mistakes) are often as inadvisable as in-place maintenance; the latter keeps defective software on life support, while the former often generates new defects– sometimes more than the original had, on account of the time pressure that complete-rewrite projects usually face.

What is gained in doing all this? First of all, job satisfaction among software engineers improves dramatically. Instead of the industry being characterized by a gloomy, never-ending maintenance slog, the proper allocation of such work ensures that it’s done properly and quickly rather than being passed from one hapless junior developer to another. Second, engineer productivity goes through the roof, because engineers with creative and architectural talents can unlock them instead of being forced to contend with legacy misery. Third, employee retention is improved not only for individual software companies but for the industry as a whole. Instead of architecturally-inclined engineers being burned out in droves (I’ll note that people with strong architectural talents tend to be notoriously intolerant of ugliness, to a point that can be emotionally and intellectually crippling in software maintenance) by the legacy burden that is the current reality of the industry, they’re encouraged to make a career of it, the result being better software architecture across the board. The result of these improvements in architectural know-how will be better products– more comprehensible, more attractive, easier to improve and keep up, and more profitable.

Object Disoriented Programming

It is my belief that what is now called “object-oriented programming” (OOP) is going to go down in history as one of the worst programming fads of all time, one that has wrecked countless codebases and burned millions of hours of engineering time worldwide. Though a superficially appealing concept– programming in a manner that is comprehensible at the “big picture” level– it fails to deliver on this promise, it usually fails to improve engineer productivity, and it often leads to unmaintainable, ugly, and even nonsensical code.

I’m not going to claim that object-oriented programming is never useful. There would be two problems with such a claim. First, OOP means many different things to different people– there’s a parsec of daylight between Smalltalk’s approach to OOP and the abysmal horrors currently seen in Java. Second, there are many niches within programming with which I’m unfamiliar and it would be arrogant to claim that OOP is useless in all of them. Almost certainly, there are problems for which the object-oriented approach is one of the more useful. Instead, I’ll make a weaker claim but with full confidence: as the default means of abstraction, as in C++ and Java, object orientation is a disastrous choice.

What’s wrong with it?

The first problem with object-oriented programming is mutable state. Although I’m a major proponent of functional programming, I don’t intend to imply that mutable state is uniformly bad. On the contrary, it’s often good. There are a not-small number of programming scenarios where mutable state is the best available abstraction. But it needs to be handled with extreme caution, because it makes code far more difficult to reason about than purely functional code. A well-designed and practical language generally will allow mutable state, but encourage it to be confined to the places where it is necessary. A supreme example of this is Haskell, where any function with side effects reflects that fact in its type signature. By contrast, modern OOP encourages the promiscuous distribution of mutable state, to such a degree that difficult-to-reason-about programs are not the exceptional rarity but the norm. Eventually, the code becomes outright incomprehensible– to paraphrase Boromir, “one does not simply read the source code”– and even good programmers (unknowingly) damage the codebase as they modify it, adding complexity without full comprehension. These programs fall into an understood-by-no-one state of limbo and become nearly impossible to debug or analyze: the execution state of a program might live in thousands of different objects!
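
To make that concrete, here is a minimal Haskell sketch of my own (not from any real codebase) showing how the type carries the warning label:

```haskell
-- Pure: given the same list, 'total' always returns the same number, and the
-- type promises it cannot touch mutable state or the outside world.
total :: [Int] -> Int
total = sum

-- Effectful: the IO in the type is the warning label. This function may print,
-- read files, mutate state, and so on, and callers can see that at a glance.
reportTotal :: [Int] -> IO ()
reportTotal xs = putStrLn ("total = " ++ show (total xs))
```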

Object-oriented programming’s second failing is that it encourages spaghetti code. For example, let’s say that I’m implementing the card game Hearts. To represent cards in the deck, I create a Card object, with two attributes: rank and suit, both of some sort of discrete type (integer, enumeration). This is a struct in C, a record in Ocaml, or a data object in Java. So far, no foul. I’ve represented a card exactly how it should be represented. Later on, to represent each player’s hand, I have a Hand object that is essentially a wrapper around an array of cards, and a Deck object that contains the cards before they are dealt. Nothing too perverted here.
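
Rendered in Haskell (my own sketch, with names that mirror the prose; the original could just as well be a C struct or a plain Java class), the starting point looks something like this:

```haskell
data Suit = Clubs | Diamonds | Hearts | Spades
  deriving (Eq, Show)

-- A card is exactly a rank and a suit, nothing more.
data Card = Card
  { rank :: Int   -- 2..14, with 11..14 standing for J, Q, K, A
  , suit :: Suit
  } deriving (Eq, Show)

-- A hand and the deck are just wrappers around lists of cards.
newtype Hand = Hand [Card] deriving (Show)
newtype Deck = Deck [Card] deriving (Show)
```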

In Hearts, the person with the 2 of clubs leads first, so I might want to determine in whose hand that card is. Ooh! A “clever” optimization draws near! Obviously it is inefficient to check each Hand for the 2 of clubs. So I add a field, hand, to each Card that is set when the card enters or leaves a player’s Hand. This means that every time a Card moves (from one Hand to another, into or out of a Hand) I have to touch the pointer– I’ve just introduced more room for bugs. This field’s type is a Hand pointer (Hand* in C++, just Hand in Java). Since the Card might not be in a Hand, it can be null sometimes, and one has to check for nullness whenever using this field as well. So far, so bad. Notice the circular relationship I’ve now created between the Card and Hand classes.

It gets worse. Later, I add a picture attribute to the Card class, so that each Card is coupled with the name of an image file representing its on-screen appearance, and ten or twelve various methods for the number of ways I might wish to display a Card. Moreover, it becomes clear that my specification regarding a Card’s location in the game (either in a Hand or not in a Hand) was too weak. If a Card is not in a Hand, it might also be on the table (just played to a trick), in the deck, or out of the round (having been played). So I rename the hand attribute, place, and change its type to Location, from which Hand and Deck and PlaceOnTable all inherit.

This is ugly, and getting incomprehensible quickly. Consider the reaction of someone who has to maintain this code in the future. What the hell is a Location? From its name, it could be (a) a geographical location, (b) a position in a file, (c) the unique ID of a record in a database, (d) an IP address or port number or, what it actually is, (e) the Card’s location in the game. From the maintainer’s point of view, really getting to the bottom of Location requires understanding Hand, Deck, and PlaceOnTable, which may reside in different files, modules, or even directories. It’s just a mess. Worse yet, in such code the “broken window” behavior starts to set in. Now that the code is bad, those who have to modify it are tempted to do so in the easiest (but often kludgey) way. Kludges multiply and, before long, what should have been a two-field immutable record (Card) has 23 attributes and no one remembers what they all do.

To finish this example, let’s assume that the computer player for this Hearts game contains some very complicated AI, and I’m investigating a bug in the decision-making algorithms. To do this, I need to be able to generate arbitrary game states as test cases. Constructing a game state requires that I construct Cards. If Card were left as it should be– a two-field record type– this would be a very easy thing to do. Unfortunately, Card now has so many fields, and it’s not clear which can be omitted or given “mock” values, that constructing one intelligently is no longer possible. Will failing to populate the seemingly irrelevant attributes (like picture, which is presumably connected to graphics and not the internal logic of the game) compromise the validity of my test cases? Hell if I know. At this point, reading, modifying, and testing code becomes more about guesswork than anything sound or principled.
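
For contrast, continuing the earlier Haskell sketch (the fixture names here are hypothetical), test fixtures fall out of the original two-field design almost for free:

```haskell
-- Continuing the Card/Suit/Hand/Deck sketch above: nothing is hidden and
-- nothing is nullable, so test fixtures are trivial to write and to read.
twoOfClubs :: Card
twoOfClubs = Card { rank = 2, suit = Clubs }

aTestHand :: Hand
aTestHand = Hand [twoOfClubs, Card 14 Spades, Card 12 Hearts]

fullDeck :: Deck
fullDeck = Deck [Card r s | s <- [Clubs, Diamonds, Hearts, Spades], r <- [2 .. 14]]
```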

Clearly, this is a contrived example, and I can imagine the defenders of object-oriented programming responding with the counterargument, “But I would never write code that way! I’d design the program intelligently in advance.” To that I say: right, for a small project like a Hearts game; wrong, for real-world, complex software developed in the professional world. What I described  is certainly not how a single intelligent programmer would code a card game; it is indicative of how software tends to evolve in the real world, with multiple developers involved. Hearts, of course, is a closed system: a game with well-defined rules that isn’t going to change much in the next 6 months. It’s therefore possible to design a Hearts program intelligently from the start and avoid the object-oriented pitfalls I intentionally fell into in this example. But for most real-world software, requirements change and the code is often touched by a number of people with widely varying levels of competence, some of whom barely know what they’re doing, if at all. The morass I described is what object-oriented code devolves into as the number of lines of code and, more importantly, the number of hands, increases. It’s virtually inevitable.

One note about this is that object-oriented programming tends to be top-down, with types being subtypes of Object. What this means is that data is often vaguely defined, semantically speaking. Did you know that the integer 5 doubles as a DessertToppingFactoryImpl? I sure didn’t. An alternative and usually superior mode of specification is bottom-up, as seen in languages like Ocaml and Haskell. These languages offer simple base types and encourage the user to build more complex types from them. If you’re unsure what a Person is, you can read the code and discover that it has a name field, which is a string, and a birthday field, which is a Date. If you’re unsure what a Date is, you can read the code and discover that it’s a record of three integers, labelled year, month, and day. If you want to get “to the bottom” of a datatype or function when types are built from the bottom up, you can do so, and it rarely involves pinging across so many (possibly semi-irrelevant) abstractions and files as to shatter one’s “flow”. Circular dependencies are very rare in bottom-up languages. Recursion, in languages like ML, can exist both in datatypes and functions, but it’s hard to cross modules with it or create such obscene indirection as to make comprehension enormously difficult. By contrast, it’s not uncommon to find circular dependencies in object-oriented code. In the atrocious example I gave above, Hand depends on Card, Card depends on Location, and Hand inherits from Location– a closed loop.
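
Here is that bottom-up style in a few lines of Haskell, a sketch that simply follows the field names in the paragraph above:

```haskell
-- A Date is a record of three integers; a Person is a name and a Date.
-- Getting "to the bottom" of Person means reading eight short lines.
data Date = Date
  { year  :: Int
  , month :: Int
  , day   :: Int
  } deriving (Eq, Show)

data Person = Person
  { name     :: String
  , birthday :: Date
  } deriving (Eq, Show)
```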

Why does OOP devolve?

Above, I described the consequences of undisciplined object-oriented programming. In limited doses, object-oriented programming is not so terrible. Neither, for that matter, is the much-hated “goto” statement. Both of these are tolerable when used in extremely disciplined ways with reasonable and self-evident intentions. Yet when used by any but the most disciplined programmers, OOP devolves into a mess. This is hilarious in the context of OOP’s original promise to business types in the 1990s– that it would enable mediocre programmers to be productive. What it actually did was create a coding environment in which mediocre programmers (and rushed or indisposed good ones) are negatively productive. It’s true that terrible code is possible in any language or programming paradigm; what makes object orientation such a terrible default abstraction is that, as with unstructured programming, bad code is an asymptotic inevitability as an object-oriented program grows. In order to discuss why this occurs, it’s necessary to discuss object orientation from a more academic perspective, and pose a question to which thousands of answers have been given.

What’s an object? 

To a first approximation, one can think of an object as something that receives messages and performs actions, which usually include returning data to the sender of the message. Unlike a pure function, an object is allowed to vary its response to each message. In fact, it’s often required to do so. The object often contains state that is (by design) not directly accessible, but only observable by sending messages to the object. In this light, the object can be compared to a remote-procedure-call (RPC) server. Its innards are hidden, possibly inaccessible, and this is generally a good thing in the context of, for example, a web service. When I connect to a website, I don’t care in the least about the state of its thread pooling mechanisms. I don’t want to know about that stuff, and I shouldn’t be allowed access to it. Nor do I care what sorting algorithm an email client uses to sort my email, as long as I get the right results. On the other hand, in the context of code whose internals one is responsible (or might in the future be responsible) for comprehending, such incomplete comprehension is a very bad thing.

To “What is an object?” the answer I would give is that one should think of it as a miniature RPC server. It’s not actually remote, nor as complex internally as a real RPC or web server, but it can be thought of this way in terms of its (intentional) opacity. This sheds light on whether object-oriented programming is “bad”, and on the question of when to use objects. Are RPC servers invariably bad? Of course not. On the other hand, would anyone in his right mind code Hearts in such a way that each Card were its own RPC server? No. That would be insane. If people treated object topologies with the same care as network topologies, a lot of horrible things that have been done to code in the name of OOP might never have occurred.
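
One way to picture the metaphor, in a Haskell sketch of my own (and emphatically not a recommendation for the Hearts game): an “object” is a record of functions closing over state the caller cannot reach, a tiny in-process server whose only interface is its messages.

```haskell
import Data.IORef

-- The "server": callers can send two messages, increment and current.
-- The IORef behind them cannot be reached from outside.
data Counter = Counter
  { increment :: IO ()
  , current   :: IO Int
  }

newCounter :: IO Counter
newCounter = do
  ref <- newIORef (0 :: Int)
  return Counter
    { increment = modifyIORef ref (+ 1)
    , current   = readIORef ref
    }
```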

Alan Kay, the inventor of Smalltalk and the original conception of “object-oriented programming”, has argued that the failure of what passes for OOP in modern software is that objects are too small and that there are too many of them. Originally, object-oriented programming was intended to involve large objects that encapsulated state behind interfaces that were easier to understand than the potentially complicated implementations. In that context, OOP as originally defined is quite powerful and good; even non-OOP languages have adopted that virtue (also known as encapsulation) in the form of modules.
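
Modules provide that encapsulation without any objects at all. A minimal Haskell sketch (the module and its names are invented for illustration): export the type and its operations, hide the constructor, and the representation and its invariants stay private.

```haskell
module SortedList (SortedList, fromList, toList, insertSorted) where

import qualified Data.List as L

-- The constructor is not exported, so any SortedList that client code can get
-- its hands on really is sorted: the invariant lives behind the module
-- boundary rather than inside an object.
newtype SortedList a = SortedList [a]

fromList :: Ord a => [a] -> SortedList a
fromList = SortedList . L.sort

toList :: SortedList a -> [a]
toList (SortedList xs) = xs

insertSorted :: Ord a => a -> SortedList a -> SortedList a
insertSorted x (SortedList xs) = SortedList (L.insert x xs)
```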

Still, the RPC-server metaphor for “What is an Object?” is not quite right, and the philosophical notion of “object” is deeper. An object, in software engineering, should be seen as a thing of which the user is allowed to have (and often supposed to have) incomplete knowledge. Incomplete knowledge isn’t always a bad thing at all; often, it’s an outright necessity due to the complexity of the system. For example, SQL is a language in which the user specifies an ad-hoc query to be run against a database with no indication of what algorithm to use; the database system figures that out. For this particular application, incomplete knowledge is beneficial; it would be ridiculous to burden everyone who wants to use a database with the immense complexity of its internals.

Object-orientation is the programming paradigm based on incomplete knowledge. Its purpose is to enable computation with data of which the details are not fully known. In a way, this is concordant with the English use of the word “object” as a synonym for “thing”: it’s an item of which one’s knowledge is incomplete. “What is that thing on the table?” “I don’t know, some weird object.” Object-oriented programming is designed to allow people to work with things they don’t fully understand, and even modify them in spite of incomplete comprehension of them. Sometimes that’s useful or necessary, because complete knowledge of a complex program can be humanly impossible. Unfortunately, over time the over-tolerance of incomplete knowledge leads to an environment where important components are fully understood by no one responsible for creating them; the knowledge is strewn haphazardly across many minds.

Modularity

Probably the most important predictor of whether a codebase will remain comprehensible as it becomes large is whether it’s modular. Are the components individually comprehensible, or do they form an irreducibly complex tangle in which one must understand all of it (which may not even be possible) before one can understand any of it? In the latter case, progress grinds to a halt, or quality even backslides, as the size of the codebase increases. In terms of modularity, the object-oriented paradigm generally performs poorly, facilitating the haphazard growth of codebases in which answering simple questions like “How do I create and use a Foo object?” can require days-long forensic capers.

The truth about “power”

Often, people describe programming techniques and tools as “powerful”, and that’s taken to be an endorsement. A counterintuitive and dirty secret of software engineering is that “power” is not always a good thing. For a “hacker”– a person writing “one-off” code that is unlikely ever to require future reading by anyone, including the author– all powerful abstractions can be considered good, because they save time. However, in the more general software engineering context, where any code written is likely to require maintenance and future comprehension, power can be bad. For example, macros in languages like C and Lisp are immensely powerful, yet it’s obnoxiously easy to write incomprehensible code with them.

Objects are, likewise, immensely powerful (or “heavyweight”) beasts when features like inheritance, dynamic method dispatch, open recursion, et cetera are considered. If nothing else, one notes that objects can do anything that pure functions can do– and more. The notion of “object” is both a very powerful and a very vague abstraction.
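
Ocaml’s own object system makes the weight of the abstraction visible; the sketch below (class names hypothetical) shows open recursion and dynamic dispatch doing something no plain function can do: a subclass silently changes the behavior of a method it never overrode.

class greeter (name : string) = object (self)
  method name = name
  method greet = "Hello, " ^ self#name  (* calls through self: open recursion *)
  method describe = self#greet ^ "!"
end

class shouting_greeter (name : string) = object
  inherit greeter name
  method! greet = "HELLO, " ^ String.uppercase_ascii name
end

(* The functional counterpart is closed: its behavior is exactly what it says. *)
let greet name = "Hello, " ^ name

let () =
  print_endline ((new shouting_greeter "ada")#describe);  (* prints "HELLO, ADA!" *)
  print_endline (greet "ada")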

“Hackers” like power, and from that standpoint a language can be judged by the power of the abstractions it offers. But real-world software engineers spend an unpleasantly large amount of time reading and maintaining others’ code. From an engineer’s perspective, a language is good according to what it prevents other programmers from doing to us, those of us who have to maintain their code in the future. In this light, the unrestrained use of Lisp macros and object-oriented programming is bad, bad, bad, and a language like Ocaml or Haskell– of middling power but beautifully designed to encourage the right abstractions– is far better than a more “powerful” one like Ruby.

As an aside, a deep problem in programming language design is that far too many languages are designed with the interests of code writers foremost in mind. And it’s quite enjoyable, from a writer’s perspective, to use esoteric metaprogramming features and complex object patterns. Yet very few languages are designed to provide a beautiful experience for readers of code. In my experience, ML does this best, and Haskell does it well, while most of the mainstream languages fall short of being even satisfactory. In most real-world software environments, reading code is so unpleasant that it hardly gets done in any detail, if at all. Object-oriented programming, and the haphazard monstrosities its “powerful” abstractions enable, is a major culprit.

Solution?

The truth, I think, about object-oriented programming is that most of its core concepts– inheritance, easy extensibility of data, proliferation of state– should be treated with the same caution and respect that a humble and intelligent programmer gives to mutable state. These abstractions can be powerful and work beautifully in the right circumstances, but they should be used very sparingly and only by people who understand what they are actually doing. In Lisp, it is generally held to be good practice never to write a macro when a function will do. I would argue the same with regard to object-oriented programming: never write an object-oriented program for a problem where a functional or cleanly imperative approach will suffice.  Certainly, to make object orientation the default means of abstraction, as C++ and Java have, is a proven disaster.
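
As a sketch of that rule in Ocaml (the names are hypothetical, and the example is deliberately small): what might be a Validator class hierarchy in Java collapses into ordinary values of a function type.

type validator = string -> (string, string) result

let non_empty : validator = fun s ->
  if String.length s > 0 then Ok s else Error "must not be empty"

let max_len n : validator = fun s ->
  if String.length s <= n then Ok s else Error "too long"

(* "Composition" is just a fold over a list of functions. *)
let all (vs : validator list) : validator = fun s ->
  List.fold_left (fun acc v -> Result.bind acc v) (Ok s) vs

let username = all [ non_empty; max_len 20 ]

No inheritance, no factories, no mutable state; and if the problem later genuinely demands objects, nothing here prevents that upgrade.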

Abstractions, and especially powerful ones, aren’t always good. Using the right abstractions is of utmost importance. Abstractions that are too vague, for example, merely clutter code with useless nouns. As the first means of abstraction in high-level languages, higher-order functions suffice most of the time– probably over 95% of the time, in well-factored code. Objects may have come into favor for being more general than higher-order functions, but “more general” also means less specific, and for the purpose of code comprehension, this is a hindrance, not a feature. If cleaner and more comprehensible pure functions and algebraic data types can be used in well over 90 percent of the places where objects appear in OO languages, they should be used in lieu of objects, and languages should support them well– support that C++ and Java do not provide.
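
Here is a minimal sketch of what that looks like in practice (type and function names hypothetical): an algebraic data type and a couple of functions standing in for what would be a small class hierarchy in C++ or Java.

type shape =
  | Circle    of float          (* radius *)
  | Rectangle of float * float  (* width, height *)

let area = function
  | Circle r         -> Float.pi *. r *. r
  | Rectangle (w, h) -> w *. h

(* A higher-order function covers the "do something to each element" use case
   that so often motivates an interface plus a visitor. *)
let total_area shapes = List.fold_left (fun acc s -> acc +. area s) 0.0 shapes

Every case is visible at the definition site, and the compiler warns when a new case is added but not handled– exactly the kind of comprehension aid that a scattered class hierarchy withholds.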

In a better world, programmers would be required to learn how to use functions before progressing to objects, and object-oriented features would be available but deployed only when needed, in the rare cases where they make for remarkably better code. To start, this change needs to come about at the language level. Instead of Java or C++ being the first languages to which most programmers are introduced, that status should be shifted to a language like Scheme, Haskell, or my personal favorite for this purpose: Ocaml.

Why isn’t the U.S. innovating? Some answers.

This post is in direct response to this thread on Hacker News, focused on the question: why isn’t the U.S. building great new things as much as it used to? There are a number of reasons, and in the interest of keeping the discussion short, I’m going to analyze a few of the less-cited ones. The influence of the short-sighted business mentality, political corruption, and psychological risk-aversion on this country’s meager showing in innovation over the past 40 years is well understood, so I’ll focus on some of the less widely discussed problems.

1. Transport as microcosm

For a case study in national failure, consider human transportation in the United States since 1960. It’s shameful: no progress at all. We’ve become great at sending terabits of data around the globe, and we’re not bad at freight transportation, but we’re awful when it comes to moving people. Our trains are so laughable that we market as premium a level of speed (Acela, at 120 miles per hour) that Europeans just call “trains”. Worse yet, for a family of four, air and rail travel are actually more expensive per mile than the abominably inefficient automobile. As a country, we should be massively embarrassed by the state of human transportation.

Human transportation in the U.S. has an air of having given up. We haven’t progressed– in speed or service or price– since the 1960s. The most common way of getting to work is still a means (automotive) that scales horribly (traffic jams, “rush hour”) and we still use airplanes (instead of high-speed trains) for mid-distance travel, a decision that made some sense in the context of the Cold War but is wasteful and idiotic now. This isn’t just unpleasant and expensive, but also dangerous, in light of the environmental effects of greenhouse gases.

Why so stagnant? The problem is that we have, for the most part, given up on “hard” problems. By “hard”, I don’t mean “difficult” so much as “physical”. As a nation, we’ve become symbolic manipulators, often involved in deeply self- and mutually-referential work, who avoid interacting with physical reality as much as we can. Abstraction has been immensely useful, especially in computing, but it has also led us away from critically important physical “grunt” work to the point where a lot of people never do it.

I don’t mean to imply that no one does that kind of work in the United States. A fair number of people do, but the classes of people who manage large companies have, in almost all cases, never worked in a job that required physical labor rather than simply directing others in what to do. So to them, and to many of us as offices replace factories, the physical world is a deeply scary place that doesn’t play on our terms.

2. Losing the “rest of the best”.

One doesn’t have to look far to find complaints by vocal scientists, researchers, and academics that the best students are being “poached” by Wall Street and large-firm law (“biglaw”) instead of going into science and technology. One nuance that must be attached to that complaint: it’s not true. At least, not as commonly voiced.

The “best of the best” (99.5th percentile and up) still overwhelmingly prefer research and technology over finance. Although very few research jobs match the compensation available to even mediocre performers in finance, the work is a lot more rewarding. Banking is all about making enough money by age 40 never to have to work again; a job with high autonomy (as in research) makes work enjoyable. Moreover, banking and biglaw require a certain conformity that makes a 99.5th-percentile intellect a serious liability. That top investment bankers seem outright stupid from a certain vantage point does not make them easy competition; they are more difficult competition because of their intellectual limitations. So, for these reasons and many more, the best of the best are still becoming professors, technologists, and if sufficiently entrepreneurial, startup founders.

What is changing is that the “rest of the best” have been losing interest in science and research. The decline of scientific and academic job markets has been mild for the best-of-the-best, who are still able to find middle-class jobs and merely have fewer choices, but catastrophic for the rest-of-the-best. When the decision is to be made between a miserable adjunct professorship at an uninspiring university, versus a decent shot at a seven-figure income in finance, the choice becomes obvious.

America loves winner-take-all competitions, so outsized rewards for A players, to the detriment of B players, seem like something American society ought to consider just and valuable. The problem is that this doesn’t work for the sciences and technology. First, the “idea people” need a lot of support in order to bring their concepts to fruition. The A players are generally poor at selling their vision and communicating why their ideas are useful (i.e. why they should be paid for something that doesn’t look like work) and the B players have better options than becoming second-rate scientists, given how pathetic scientific and academic careers now are for non-”rock stars”. What is actually happening with regard to the talent spectrum is the emergence of a bimodal distribution. With the filtering out of the B players, academia is becoming a two-class industry split between A and C players, because the second-tier jobs are not compelling enough to attract the B players. This two-class dynamic is never good for an industry. In fact, it’s viciously counterproductive, because the C players are often so incompetent that their contributions are (factoring in morale costs) negative.

This two-class phenomenon has already happened in computer programming, with distinctly negative effects that are responsible for the generally low quality of software. What I’ve observed is that there are very few middling programmers. The great programmers take jobs in elite technology companies or found startups. The bad programmers work on uninspiring projects in the bowels of corporate nowhere– back-office work in banks, boring enterprise software, and the like. There isn’t much interaction between the two tiers– they are virtually two separate industries– and with this lack of cross-pollination, the bad programmers don’t get much encouragement to get better. As designing decent, usable software is very challenging even for the great programmers, one can imagine what’s created when bad programmers do it.

In reality, the B players are quite important for a variety of reasons. First is that this categorization is far from static, and B players often turn into A players as they mature. (This is necessary in order to replace the A players who become lazy after getting tenure in academia, or after reaching some comparable platform of comfort in other industries.) Second is that B players are likely to become A players in other domains later– as politicians and business executives– and it’s far better to have people in those positions of power who are scientifically literate. Third is that a lot of the work in science and technology isn’t glamorous and doesn’t require genius, but does demand enough insight and competence to call for at least a B player (not a C or lower). If B players aren’t adequately compensated for this work and therefore can’t be hired into it, such tasks either get passed to A players (taking up time that could be used on more challenging work) or to C players (who do such a poor job that more competent people’s time must be spent, in any case, checking and fixing their work).

Science, research, and academia are now careers that one should enter only with supreme confidence of acquiring “A player” status, because the outcomes for anyone else are abysmal. In the long term, that makes the scientific and research community less friendly to people who may not be technically superior but would benefit the sciences indirectly by enabling cross-linkages between science and the rest of society. The result is a slow decline of science and technology as time passes.

3. No one takes out the trash. 

Software companies find that, if they don’t manage their code by removing or fixing low-quality code, they become crippled later by “legacy” code and technical decisions that were reasonable at one time, but proved counterproductive later on. This isn’t only a problem with software, but with societies in general. Bad laws are hard to unwrite, and detrimental interest groups are difficult to refuse once they establish a presence.

Healthcare reform is a critical example of this. President Obama found fixing the murderously broken, private-insurance-based healthcare system to be politically infeasible due to entrenched political dysfunction. This sort of corruption can be framed as a morality debate, but from a functional perspective, it manifests not as a subjective matter of “bad people” but more objectively as a network of inappropriate relationships and perverse dependencies. In this case, I refer to the interaction between private health insurance companies (which profit immensely from a horrible system) and political officials (who are given incentives not to change it, through the campaign-finance system).

Garbage collection in American society is not going to be easy. Too many people are situated in positions that benefit from the dysfunction– like urban cockroaches, creatures that thrive in damaged environments– and the country now has an upper class defined by parasitism and corruption rather than leadership. Coercively healing society will likely lead to intense (and possibly violent) retribution from those who currently benefit from its failure and who will perceive themselves as deprived if it is ever fixed.

What does this have to do with innovation? Simply put, if society is full of garbage– inappropriate relationships that hamper good decision-making, broken and antiquated policies and regulations, institutions that don’t work, the wrong people in positions of power– then an innovator is forced to negotiate an obstacle course of idiocy in order to get anything done. There just isn’t room if the garbage is allowed to stay. Moreover, since innovations often endanger people in power, there are some who fight actively to keep the trash in place, or even to make more of it.

4. M&A has replaced R&D.

A person who wants the autonomy, risk-tolerance, and upside potential (in terms of contribution, if not remuneration) of an R&D job is unlikely to find it in the 21st century, with the practical death of blue-sky research. Few of those jobs exist, many who have them stay “emeritus” forever instead of having the decency to retire and free up positions, and getting one without a PhD from a top-5 university (if not a post-doc) is virtually unheard-of today. Gordon Gekko and the next-quarter mentality have won. The high-autonomy R&D jobs that remain exist mostly as a marketing expense– a company hiring a famous researcher for the benefit of saying that he or she works there. Where has the rest of R&D gone? Startups. Instead of funding R&D, large companies are now buying startups, letting the innovation occur at someone else’s risk.

There is some good in this. A great idea can turn into a full-fledged company instead of being mothballed because it cannibalizes something else in the client company’s product portfolio. There is also, especially from the perspective of compensation, a lot more upside in being an entrepreneur than a salaried employee at an R&D lab. All of this said, there are some considerable flaws in this arrangement. First is that a lot of research– projects that might take 5 to 10 years to produce a profitable result– will never be done under this model. Second is that getting funding, for a startup, generally has more to do with inherited social connections, charisma, and a perception of safety in investment, than with the quality of the idea. This is an intractable trait of the startup game because “the idea” is likely to be reinvented between first funding and becoming a full-fledged business. The result of this is that far too many “Me, too” startups and gimmicks get funded and too little innovation exists. Third and most severe is what happens upon failure. When an initiative in an R&D lab fails, the knowledge acquired from this remains within the company. The parts that worked can be salvaged, and what didn’t work is remembered and mistakes are less likely to be repeated. With startups, the business ceases to exist outright and its people dissipate. The individual knowledge merely scatters, but the institutional knowledge effectively ceases to exist.

For the record, I think startups are great and that anything that makes it easier to start a new company should be encouraged. I even find it hard to hate “acqui-hiring” if only because, for all the practice’s well-studied flaws, it creates a decent market for late-stage startup companies. All that said, startups are a poor replacement for most R&D; they were never meant to replace in-house innovation.

5. Solutions?

The problems the U.S. faces are well-known, but can this be fixed? Can the U.S. become an innovative powerhouse again? It’s certainly possible, but in the current economic and political environment, the outlook is very poor. Over the past 40 years, we’ve been gently declining rather than crashing and, to the good, I believe we’ll continue doing so, rather than collapsing. Given the dead weight of political conservatism, an entrenched and useless upper class, and a variety of problems with attitudes and the culture, our best realistic hope is slow, relative decline and absolute improvement– that as the world becomes more innovative, so will the U.S. The reason I consider this “improvement-but-relative-decline” both realistic and the best possibility is that a force (such as a world-changing technological innovation) that heals the world can also reverse American decline, but one less powerful than a world-healing force cannot save the U.S. from calamity. It would not be such a terrible outcome. American “decline” is a foregone conclusion– and it’s neither right nor sustainable for so few people to have so much control– but just as the average Briton is better off now than in 1900, and arguably better off than the average American now, this need not be a bad thing.

Clearing out the garbage that retards American innovation is probably not politically or economically feasible. I don’t see it being done in a proactive way; I see the garbage disappearing, if it does, through rot and disintegration rather than aggressive cleanup. But I think it’s valuable, for the future, to understand what went wrong in order to give the next center of innovation a bit more longevity. I hope I’ve done my part in that.

Pride in death

Human attitudes toward death are often negative: the transition is met with fear among many, and outright terror by some. Positive emotions, such as relief from suffering and the hope for something better afterward, are occasionally associated with it, but the general feeling humans have toward death is a negative one, as if it were an undesirable event to be put off for as long as possible. We fight it until the very end, even though death is guaranteed to win. It’s not our fault that we’re this way; we’re biologically programmed to be so. So we have a deep-seated tendency to put everything we have into keeping death away from us as much as we can. This has a negative side effect: when we cannot hold it back any longer, and death rushes in, many people around the dying person take an attitude of defeat. This attitude toward death and aging I find very harmful. That said, acknowledging the certainty of death, I’ve often wondered if the process is deserving of a different and somewhat unconventional emotional response: pride, not in the vain sense of the word, but in a self-respecting and upright sense. I don’t mean to suggest that one “should” take such a counterintuitive attitude toward death, but only to propose a thought experiment: what if one did take pride in one’s mortality? What, exactly, would that look like?

Death is a big deal: an irreversible step into something that is possibly wonderful but certainly unknown. If any process can be called uncertain and risky, death outclasses anything else that we do, by far. Moving to another country? That’s nothing compared to dying. Death is a major transition, and the only reason we do not associate it with courage is because it is completely involuntary and universal: everyone, including both the most and least courageous, must do it. But if lifespans were infinite and death were a choice, it’s not one that many people would make. Death, in this light, could be viewed as immensely courageous.

Before I go any further, it’s important to state that suicide is generally not courageous, at least not in the self-destructive form that is most common in this world. Self-destruction (whether it results in death or merely in lesser forms of ruin) is the ultimate in cowardice. That said, choosing to die for another’s benefit, or to escape life when terminally ill, are different matters, and I don’t consider those deaths “suicides”. Suicide, in the most rash and revolting form, is an overstepping act of self-destruction driven by bad impulses and fear or hatred of one’s own existence. To attempt to give up on existence or eradicate one’s self is not courageous, but that’s not what I’m talking about. When I say that death is courageous, I do not go so far as to say that forcing it to come is a courageous act, but more that offering oneself up for it, if this “offer” were not an inflexible prerequisite for physical existence, would be considered extremely courageous. To venture into what is possibly another world, and possibly nonexistence, with no hope of return? Even with the best possibilities of this journey being far superior to anything this existence can offer, few (even among the religiously devout and unwaveringly faithful) would take it. I’m not even sure if I could bring myself to do it. For a person freed from death’s inevitability, whether or not to die would be a very difficult decision, and probably one that even most religious believers, solid in their belief in an afterlife, would procrastinate for a very long time.

That said, modern society does not view death as a process that may be full of promise. Instead, our society’s attitude toward death is negative and mechanistic, insofar as it views death as the ultimate failure. We describe a car or computer as “dead” when it fails beyond repair, and (accurately, biologically speaking) describe a cell as dead when it can no longer perform necessary biological functions, such as self-repair and reproduction. That which is “dead” has failed beyond hope and is of such low utility that, on account of its mass and volume, it is now a burden. This analogy applies to the human body– its failure is the cause of biological death, and it is utterly useless after death– but to the human person? The comparison, I think, is unfair. After a life well-lived, the soul might be in a victorious or brilliant state. We really don’t know. We know that we have to deal with a corpse, and that a person is no longer around to laugh at our jokes, but we haven’t a clue what the experience is like for that person. Being mostly selfish creatures– I make this observation neutrally, and it applies to me as much as anyone else– we reflexively view death as a negative, mainly because of the incredible pain that others’ deaths bring upon us. We don’t know what it’s like to die, but we hate when those we love die.

The image of death in our society is quite negative, and possibly unfairly so, but it is natural that a society like ours would despise death. We view the suffering it causes every day, and even if it might have incredible subjective benefits for those who are dying, we never see them (and those who have seen them, if they exist, don’t blog). Our view of dying is even more disparaging. We view death as something that overtakes people after a long and horrible fight that has exhausted them. In the traditional Western view, a person dies when there is nothing left of that person. Dying isn’t treated as the graduation into another state, but as the gradual winding down into nothingness, a reversal of the emergence from oblivion that is held to exist before conception. This view of death leads us to see the dying and dead as frail, defeated, failed creatures, rather than beings that have bravely ventured into the unknown– an unknown that may even entail nonexistence.

This attitude of pride in death may seem untenable. As I alluded to earlier, can something be courageous when it’s utterly involuntary? I’ll freely admit that such an attitude may seem bizarre. But equally and differently bizarre is the idea (unspoken but implicit in the modern Western attitude toward death, despite being passively rejected by most people in the West) that death certainly leads to nothingness, or to divine judgment; or, for that matter, any claim of certainty regarding what happens after death. For this reason, it’s the incredible uncertainty of death that makes going into it, in a way, courageous. Or, at least, it must be possible to go into death with courage.

Should death be feared? I would argue “no”. At this point, I venture into a sort of benevolent hypocrisy by saying there is no point in fearing death, since I certainly have not extinguished my own fear of it. I know that my death will come, but I certainly don’t want it to come now. I’m not ready. I don’t know when I will be ready; I hope this won’t be the case, but maybe I’ll feel, at age 90, just as unready to die as I feel now at 27. I’ll certainly admit that I have no desire to hasten the process, and I share the general desire to prolong my life that almost all humans have. We naturally have a deep-seated fear of anything that reduces our reproductive fitness, and death has this effect in a most dramatic and irreversible way. We also have an intellectual dislike for the concept of nonexistence, even though nonexistence itself cannot possibly be unpleasant. Finally, what is most terrifying about death is the possibility of a negative afterlife.

In order to assess whether fear of death is warranted, we have to address these valid reasons for people to be wary of it. First, on the biological aspects: death does reduce an individual’s reproductive fitness, but dying is also something we’re programmed to do; after a certain point, we age and die. In this light alone, death in advanced age cannot be viewed as a failure; it’s just what human bodies do. On the more cerebral concept of nonexistence, there is not much to say other than that there is no reason to fear it, since it is not experienced but is the absence of experience. I would not like to find out that I am wrong and that there’s nothing after death; luckily, if there is nothing after death, I will never find out. For this reason, to fear nonexistence makes little sense.

Negative afterlife possibilities deserve a bit of discussion. History is littered with people’s attempts, many quite successful, to use the uncertainty associated with death to their own benefit, and to gain political power by claiming (under pretense of divine authority) that behaviors they find undesirable will bring extreme and terrifying post-death consequences, painting a picture of a world run by an insane, malicious, and wrathful God who almost certainly does not exist. I say that such Bronze Age monsters “almost certainly” do not exist because the world makes too much sense for such a being to have created it, and the explanation that this invisible beast was created by a power-hungry person in his own image becomes infinitely more likely. Still, most extant religions contain vestiges of these coercive and perverse behaviors– assertions of divine sadism and vengeance. As a deist who believes one can reason about divinity by observing human existence, I reject such assertions. Filtering out everything in this stuff-people-made-up-to-get-power category, we dismiss all confident claims to knowledge of the afterlife and are left with moderate-but-inconclusive evidence and deep uncertainty. But there is evidence, if certainly not proof! The subjective reports of those who have had near-death experiences suggest a profound and spiritual nature to death– not the fade-out expected of a failing brain before it winds down for good, but a powerful and quite often (but not always) positive experience– and, although in its infancy, research into the matter of reincarnation is promising. What little we know about existence after death suggests that (1) the vengeful gods invented by coercive religions are cartoon characters, not beasts we shall face after death, (2) it is more likely that consciousness persists after death than that it does not, though we do not have, and probably never will have, sufficient knowledge to rule out either, (3) post-death experiences tend to be positive and spiritual, insofar as we can assess them, and (4) these observations, combined with death’s inevitability, make it pointless to view death with hatred or fear.

All that said, I don’t think it’s appropriate or useful for me, on this topic, to expound on what I think happens after death, since I don’t really know. In this body, I haven’t done it yet and, once I do, there will be no reliable way for me to report back. For this reason, let’s take a different tack and consider the concept of pride-in-mortality from a pragmatic viewpoint. If one can view one’s impending death with pride and courage instead of fear and hatred, what does that mean while we are still living?

First, to take pride in death allows for it to be an inherently dignified process. Many illnesses and modes of death are horrifying, and I wouldn’t wish those on anyone, but the painful process of dying is probably not all there is to death, just as the pain of birth is certainly not the entirety of life. Death itself can be dignified, respected, and even admired. That we will all do it means that we are all dignified creatures. All living things desire happiness, dislike suffering, and will die. The third of these is a deep commonality that deserves respect. Many Buddhists will agree that, since all people are dying at all times, each of us is deserving of compassion. I’ll take it further. Since each of us is going to plunge headlong into deep uncertainty, for this, if nothing else– and some people make it hard to find a single other thing worthy of admiration– each living being deserves to be admired and respected. I am not the first to remark that, in mortality, we are all finally equal.

All this said, a death’s most relevant feature is that it is the end of a life. To make death dignified and to die courageously is good, but these accomplishments should be considered merely consequences of a much greater (and all-encompassing) project: to make life dignified, for everyone, and to live courageously. That is the much harder part, and it does not make sense to approach one project without tackling the other.

The naughty secret behind secret salaries

For the past few days, Hacker News has been lit up with discussions about the “salary taboo” in white-collar workplaces, including in the software industry. It can be professionally damaging, even leading to termination, to discuss compensation at many workplaces. But why? The conventional wisdom is that this allows companies to “cheat” workers into accepting lower salaries than they would accept if they had full knowledge of their companies’ compensation structures. That’s a small part of it, but it’s not the full story, and the main reason for the institution of secret salaries has only a little to do with the 5 to 10 percent increase in payroll costs that companies might face if compensation were transparent. That small cost (which would probably be deducted from performance bonuses if companies were called upon to pay it) is actually trivial in comparison to the real reason companies insist on secret salaries: managerial mystique.

Information is power, and American-style management culture is obsessed with hiding information and demonstrating power. “Don’t ask me why I do what I do; I don’t need to answer to you.” Except when companies are cash-strapped, compensation isn’t really about money or “the budget”. To a much larger extent, it’s about ego. This is doubly true of the compensation table itself, “sensitive information” that must be guarded at all costs, as if it were a sacred and phallic object whose exposure to profane eyes would render every man in the tribe infertile for 14 years. As for the money, managers aren’t especially terrified of the 5 to 10 percent increase in personnel costs that could occur if salaries were made transparent. Rather, they’re scared of the discussions that would ensue. Suddenly, subordinates would feel entitled to argue, more openly, about whether it’s fair for Bob to be paid so much while Alice is paid so little. The workplace would begin to feel like a democracy (zounds!) in which subordinates are entitled to hold opinions on “sensitive” personnel matters, rather than a father-knows-best “benevolent dictatorship” where each person is “taken care of” in isolation. That is the real reason why transparent compensation “cannot be allowed at the present time”.

I do not mean to attribute such motivations to managers as individuals. Most “bosses” are not power-obsessed tribal strongmen, but good people trying to do a difficult job and sometimes taking tips from a (very defective) culture when they’re not sure what the right decision is. Some, like Jason Fried of 37signals, even have the courage to acknowledge openly the brokenness of American management culture. So I don’t want to lead people to suddenly conclude that their otherwise decent bosses are power-hungry jerks only because they insist on secret compensation. It’s not that simple. I do think it’s important, however, to acknowledge the real reason that white-collar management culture, as a systemic whole, insists on keeping compensation secret. It has very little to do with keeping personnel costs low or offering unfair deals (although “lowball” offers are sometimes given). It’s about managerial mystique, and the power that access to “sensitive” information confers.

Stop writing about success and start writing about failure.

The internet, especially in the entrepreneurial blogosphere, seems to have an order of magnitude more people producing literature about success (i.e. “success crack”) than actually succeeding. Every day I see the peddlers of reckless optimism swarming Hacker News with dangerous naivete about entrepreneurial pursuits. Don’t get me wrong: startups and self-employment are great, but people should be aware of the risks and pitfalls, which too many people gloss over. This isn’t limited to startups; every sector of business has its own cottage industry of how-to manuals for breakout success, usually chock full of unrealistic promises (which, I guess, lends credence to their authors being successful in business, such promises being the currency of their world). Is this “fake it till you make it” syndrome on the part of the “crack” peddlers? I’m not so cynical as to suspect that. Blog posts and books about “success” are written by well-intended people who want to share insights they’ve had about life and about their own successes. The problem is that their insights are rarely very deep. You have to lose in order to get a sense of what’s really happening in the world.

I’ve designed games before, and when I play-test, I strongly prefer to lose. Why? No one likes losing, not even me (and I play so many board and card games that I ought to be used to it), and when I lose at my own game, it’s even harsher. But it’s the (very mild) embarrassment and emotional unpleasantness associated with losing that gives me insight into design flaws that I would overlook as a happy-go-lucky, full-of-himself winner. For a game designer, it’s a blessing to lose. If I feel like I deserved to lose, I know the rules that contributed to my loss are good. If I lose because of a bad rule, it’s a double-loss (I lost the game, and my rule sucks), but the “bug” in my design gets squashed before the next release. Every game is fun for the winning player, but if it’s not fun for the losing player– or if not “fun”, at least enticing enough to make her want to improve her skill and become better– it’s a mediocre game. Thus, losing is a boon for the insight it provides to a game’s designer. It’s the only way the designer can develop certain insights into the character of the game. The same is true of life.

A very small set of people, just on account of the planet’s immense size, have untarnished track records of success and never develop the desire to look deeper into the processes that led to their outcomes. With six billion people in the world, the existence of champion coin-flippers is a statistical guarantee. Many people desperately want to be like those perennial winners, which is why such people’s optimism is so appealing, even inspiring, to others. Far more people than that (in fact, almost everyone, including most of the “success crack” vendors) get a fair mix of failures and successes. The problem is that failures are embarrassing and under-reported, while successes are magnified to outlandish proportions. In 1999, at the height of the technology boom when “everyone” was getting rich, how many technology companies actually did IPOs? 5,000? 100,000? Two in every garage? Nope. A few hundred, with the precise number varying depending on how one defines “technology company” but uncontroversially under 1,000. And in many recent years, there have been fewer than 100.

Success crack is harmful, because it leads both to ill-considered efforts and to too-early discouragement. It makes success look easy, but not in the conventional sense, because aside from Tim Ferriss, few of its peddlers actually argue that their success comes without hard work. The problem with success crack is that it treats “work hard” and “work smart” as enough, as if being intelligent and putting in 10 hours a day, six days per week, sufficed to produce break-out success. That’s not true. People have to prepare for adversity, uncertainty, discouragement, and a high likelihood that, even if they do everything right, they’ll fail. These dangers are virtually guaranteed for a person undertaking anything interesting. That’s not a pleasant thing to hear, but it’s reality.

One observation I’ll make is that, when the locus of control is internal, one generally learns more from one’s successes than from one’s failures. An example is music practice: playing an instrument incorrectly is damaging to one’s long-term performance, because it reinforces bad habits. Playing it correctly, and experiencing the “click” when it sounds perfect, is when learning occurs. I noticed the same pattern in high school with contest math (e.g. AMC, USAMO): I would learn more when I solved a problem, even if I couldn’t solve it within the allotted time or made a mistake and got the wrong answer, than when I failed to solve it and had to read the solution. In these arenas, success teaches more than failure, and consequently, the best thing one can do to become better at the craft is to find the most successful people and learn from them.

When the locus of control is largely external, such as in most workplaces where one’s success or failure is largely a function of how one’s work and ability are perceived, not what they actually are, the opposite becomes true. More is learned through failure than through success, as those who succeed rarely peer into a system’s dysfunctions and discover the pitfalls. This is why, whenever I take a new job, I always make sure to quietly befriend the person at the bottom of the social hierarchy. This is only mildly motivated by an altruistic sense of wanting to help the omega pup, and there’s a purely selfish reason for my doing this: in terms of office politics, he actually knows what’s going on. And if you befriend him, he’ll tell you. His report may be biased and bitter, if not unduly negative, but it’s also the most insightful and, if not always accurate, the most precise. Apply appropriate filters, but listen to what he has to say. In any organization, the least popular person is the most knowledgeable about its character. Learn from him.

Most of the notions of achievement we develop in childhood come from a time in which success is largely objective and derived from an internal locus of control: music practice, athletics, contest math. Even in schools, with their often-decried (and greatly exaggerated, since even high schools are utopian compared to the average corporate workplace) emphasis on obedience at the expense of creativity, a student’s success is primarily a function of intellectual talent and his or her work ethic. Deadlines are clearly defined, people are working on similar projects, and people in authority (e.g. teachers) are required to grade fairly and can lose their jobs if they don’t. There are, of course, some students who get bad grades because they run afoul of professors’ idiosyncratic prejudices but, by far, the most common cause of bad grades (I say this having earned a few, and having deserved almost all of them) is mediocre work. Therefore, in childhood and adolescence, learning from the most successful is an excellent strategy for becoming better.

However, the “real world” is far more interdependent and capricious, and it’s nearly impossible to succeed without convincing others that one deserves the resources necessary to try– and competence and the ability to sell oneself to others rarely occur in the same person. This is what no one wants to tell bright-eyed college students: that they’re about to enter a world where their success is likely to depend, in a serious way, on being given resources and opportunity by others, and that working hard and being smart are only marginally important. In fact, at the overkill levels seen in the best students at elite schools, intelligence and a strong work ethic can easily become social liabilities. Because the locus of control is so often external in the “real world”, people’s failures have more to teach us than their successes.

The fundamental problem with success crack is that, while it makes for engaging light reading, it’s written either by those who know the least about the world, having never been down in the muck, or (more commonly) by those who have suffered but still wish to mimic the wide-eyed optimism of those who haven’t– it’s somewhat of a status symbol to believe, as an extremely fortunate person might, that the world is better than it actually is– and who therefore censor out the unpleasant but important details they know. Thus, a slew of terrible advice floats about, such as “Do what you love, and the money will follow”, accepted because it is appealing despite being utterly untrue. Like some charismatic religions, it sounds pretty, and half of that is because the author is trying to convince him- or herself that he or she actually believes it: that desire to believe the incredible and wrong is a powerful force, capable of motivating some of the world’s most beautiful, but also utterly untrue, prose.

In this light, I hope to see more attention given to pitfalls and patterns to avoid, so that true learning may occur, and far less in the way of vague, feel-good directives about “success”.