Why isn’t the U.S. innovating? Some answers.

This post is in direct response to this thread on Hacker News, focused on the question: why isn’t the U.S. building great new things as much as it used to? There are a number of reasons and, in the interest of keeping the discussion short, I’m going to analyze a few of the less-cited ones. The influence of short-sighted business culture, political corruption, and psychological risk-aversion on this country’s meager showing in innovation over the past 40 years is well understood, so I’m going to focus on some of the less-discussed problems.

1. Transport as microcosm.

For a case study in national failure, consider human transportation in the United States since 1960. It’s shameful: no progress at all. We’ve become great at sending terabits of data around the globe, and we’re not bad at freight transportation, but we’re awful when it comes to moving people. Our trains are so laughable that we market as premium a level of speed (Acela, at 120 miles per hour) that Europeans just call “trains”. Worse yet, for a family of four, air and rail travel are actually more expensive per mile than the abominably inefficient automobile, as the back-of-envelope sketch below illustrates. As a country, we should be massively embarrassed by the state of human transportation.
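To see why per-mile cost breaks against families, here’s a back-of-envelope sketch in Python. Every number in it is an illustrative assumption, not a fare quote or official cost figure; the point is structural: a car’s cost is roughly fixed per vehicle, while air and rail charge per seat.

    # Back-of-envelope: cost per mile for a family of four, one mid-distance trip.
    # All numbers are illustrative assumptions, not real fares or cost data.
    trip_miles = 400            # assumed one-way distance
    family_size = 4

    car_cost_per_mile = 0.55    # assumed all-in driving cost (fuel, wear, insurance)
    fare_per_person = 120.00    # assumed one-way air or rail ticket

    car_total = car_cost_per_mile * trip_miles        # one vehicle, any headcount
    transit_total = fare_per_person * family_size     # per-seat pricing

    print(f"car:      ${car_total:.2f} total, ${car_total / trip_miles:.2f}/mile")
    print(f"air/rail: ${transit_total:.2f} total, ${transit_total / trip_miles:.2f}/mile")
    # car:      $220.00 total, $0.55/mile
    # air/rail: $480.00 total, $1.20/mile

Under these assumptions the tickets cost more than twice what driving does, and the gap widens with every additional seat.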

Human transportation in the U.S. has an air of having given up. We haven’t progressed– in speed, service, or price– since the 1960s. The most common way of getting to work is still a means (the automobile) that scales horribly (traffic jams, “rush hour”), and we still use airplanes (instead of high-speed trains) for mid-distance travel, a decision that made some sense in the context of the Cold War but is wasteful and idiotic now. This isn’t just unpleasant and expensive; it’s also dangerous, in light of the environmental effects of greenhouse-gas emissions.

Why so stagnant? The problem is that we have, for the most part, given up on “hard” problems. By “hard”, I don’t mean “difficult” so much as “physical”. As a nation, we’ve become symbolic manipulators, often involved in deeply self- and mutually-referential work, who avoid interacting with physical reality as much as we can. Abstraction has been immensely useful, especially in computing, but it has also led us away from critically important physical “grunt” work to the point where a lot of people never do it.

I don’t mean to imply that no one does that kind of work in the United States. A fair number of people do, but the people who manage large companies have, in almost all cases, never held a job that required physical labor rather than simply directing others in what to do. So to them, and to many of us as offices replace factories, the physical world is a deeply scary place that doesn’t play on our terms.

2. Losing the “rest of the best”.

One doesn’t have to look far to find complaints by vocal scientists, researchers, and academics that the best students are being “poached” by Wall Street and large-firm law (“biglaw”) instead of going into science and technology. One nuance that must be attached to that complaint: it’s not true. At least, not as commonly voiced.

The “best of the best” (99.5th percentile and up) still overwhelmingly prefer research and technology over finance. Although very few research jobs match the compensation available to even mediocre performers in finance, the work is a lot more rewarding. Banking is all about making enough money by age 40 never to have to work again; a high-autonomy job, as in research, makes the work itself enjoyable. Moreover, banking and biglaw require a certain conformity that makes a 99.5th-percentile intellect a serious liability. That top investment bankers can seem outright stupid from a certain vantage point does not make them easy competition; in a culture that demands conformity, their intellectual limitations make them more difficult competition. So, for these reasons and many more, the best of the best are still becoming professors, technologists, and, if sufficiently entrepreneurial, startup founders.

What is changing is that the “rest of the best” have been losing interest in science and research. The decline of the scientific and academic job markets has been mild for the best-of-the-best, who can still find middle-class jobs and merely have fewer choices, but catastrophic for the rest-of-the-best. When the decision is between a miserable adjunct professorship at an uninspiring university and a decent shot at a seven-figure income in finance, the choice becomes obvious.

America loves winner-take-all competitions, so outsized rewards for A players, to the detriment of B players, might seem like something American society ought to consider just and valuable. The problem is that this doesn’t work for science and technology. First, the “idea people” need a lot of support in order to bring their concepts to fruition. The A players are generally poor at selling their vision and communicating why their ideas are useful (i.e. why they should be paid for something that doesn’t look like work), and the B players have better options than becoming second-rate scientists, given how pathetic scientific and academic careers now are for non-“rock stars”. What is actually happening along the talent spectrum is the emergence of a bimodal distribution. With the B players filtered out, academia is becoming a two-class industry split between A and C players, because the second-tier jobs are not compelling enough to attract the B players. This two-class dynamic is never good for an industry. In fact, it’s viciously counterproductive, because the C players are often so incompetent that their contributions are (factoring in morale costs) negative.

This two-class phenomenon has already happened in computer programming, with distinctly negative effects that are responsible for the generally low quality of software. What I’ve observed is that there are very few middling programmers. The great programmers take jobs at elite technology companies or found startups. The bad programmers work on uninspiring projects in the bowels of corporate nowhere: back-office work in banks, boring enterprise software, and the like. There isn’t much interaction between the two tiers– they are virtually two separate industries– and, with this lack of cross-pollination, the bad programmers don’t get much encouragement to get better. Designing decent, usable software is very challenging even for the great programmers; one can imagine what’s created when bad programmers do it.

In reality, the B players are quite important, for a variety of reasons. First, this categorization is far from static, and B players often turn into A players as they mature. (This is necessary in order to replace the A players who become lazy after getting tenure in academia, or after reaching some comparable plateau of comfort in other industries.) Second, B players are likely to become A players in other domains later– as politicians and business executives– and it’s far better to have scientifically literate people in those positions of power. Third, a lot of the work in science and technology isn’t glamorous and doesn’t require genius, but it does require enough insight and competence to demand at least a B player, not a C player or lower. If B players aren’t adequately compensated for this work and therefore can’t be hired into it, such tasks get passed either to A players (taking up time that could be spent on more challenging work) or to C players (who do such a poor job that more competent people’s time must be spent, in any case, checking and fixing their work).

Science, research, and academia are now careers that one should enter only with supreme confidence of attaining “A player” status, because the outcomes for anyone else are abysmal. In the long term, that makes the scientific and research community less friendly to people who may not be technically superior but who benefit the sciences indirectly by enabling cross-linkages between science and the rest of society. The result is a slow decline in the standing of science and technology as time passes.

3. No one takes out the trash. 

Software companies find that, if they don’t manage their codebases by removing or fixing low-quality code, they become crippled by “legacy” code: technical decisions that were reasonable at one time but proved counterproductive later on. This isn’t only a problem with software, but with societies in general. Bad laws are hard to unwrite, and detrimental interest groups are difficult to refuse once they establish a presence.
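To ground the software half of that analogy, here’s a minimal hypothetical sketch in Python; the client name, format quirk, and helper function are all invented for illustration. The branch below was a reasonable decision when it was written, but every later reader must understand, test, and preserve it, and nobody is rewarded for deleting it.

    import json

    # Hypothetical 'legacy' branch: a workaround for a partner integration
    # that was retired years ago, still sitting in the critical path.
    LEGACY_CLIENTS = {"old-partner-feed"}

    def fix_legacy_quotes(raw: str) -> str:
        # Once necessary: the old partner sent single-quoted pseudo-JSON.
        return raw.replace("'", '"')

    def parse_order(raw: str, client: str) -> dict:
        # Reasonable at the time; pure carrying cost now. It can't be removed
        # until someone proves nothing depends on it, so no one ever removes it.
        if client in LEGACY_CLIENTS:
            raw = fix_legacy_quotes(raw)
        return json.loads(raw)

    print(parse_order("{'sku': 1}", "old-partner-feed"))  # {'sku': 1}
    print(parse_order('{"sku": 2}', "web"))               # {'sku': 2}

Societies accumulate the same kind of branch, except that bad laws and entrenched interest groups defend themselves far more actively than dead code does.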

Healthcare reform is a critical example of this. President Obama found fixing the murderously broken, private-insurance-based healthcare system to be politically infeasible due to entrenched political dysfunction. This sort of corruption can be framed as a morality debate, but from a functional perspective it manifests not as a subjective matter of “bad people” but, more objectively, as a network of inappropriate relationships and perverse dependencies. In this case, I refer to the interaction between private health insurance companies (which profit immensely from a horrible system) and political officials (who are given incentives, through the campaign-finance system, not to change it).

Garbage collection in American society is not going to be easy. Too many people are situated in positions that benefit from the dysfunction– like urban cockroaches, creatures that thrive in damaged environments– and the country now has an upper class defined by parasitism and corruption rather than leadership. Forcibly healing society will likely provoke intense (and possibly violent) retribution from those who currently benefit from its failure and who will perceive themselves as deprived if it is ever fixed.

What does this have to do with innovation? Simply put, if society is full of garbage– inappropriate relationships that hamper good decision-making, broken and antiquated policies and regulations, institutions that don’t work, the wrong people in positions of power– then an innovator is forced to negotiate an obstacle course of idiocy in order to get anything done. There just isn’t room for innovation if the garbage is allowed to stay. Moreover, since innovations often endanger people in power, some fight actively to keep the trash in place, or even to make more of it.

4. M&A has replaced R&D.

A person who wants the autonomy, risk-tolerance, and upside potential (in terms of contribution, if not remuneration) of an R&D job is unlikely to find it in the 21st century, given the practical death of blue-sky research. Few of those jobs exist, many who have them stay “emeritus” forever instead of having the decency to retire and free up positions, and getting one without a PhD from a top-5 university (if not a post-doc) is virtually unheard of today. Gordon Gekko and the next-quarter mentality have won. The high-autonomy R&D jobs that remain exist mainly as a marketing expense: a company hires a famous researcher for the benefit of saying that he or she works there. Where has the rest of R&D gone? Startups. Instead of funding R&D, large companies are now buying startups, letting the innovation occur at someone else’s risk.

There is some good in this. A great idea can turn into a full-fledged company instead of being mothballed because it cannibalizes something else in the parent company’s product portfolio. There is also, especially from the perspective of compensation, a lot more upside in being an entrepreneur than in being a salaried employee at an R&D lab. All of this said, there are considerable flaws in this arrangement. First, a lot of research– projects that might take 5 to 10 years to produce a profitable result– will never be done under this model. Second, getting funding for a startup generally has more to do with inherited social connections, charisma, and a perception of safety in investment than with the quality of the idea. This is an intractable trait of the startup game, because “the idea” is likely to be reinvented between first funding and becoming a full-fledged business. The result is that far too many “me, too” startups and gimmicks get funded, and too little genuine innovation does. Third, and most severe, is what happens upon failure. When an initiative in an R&D lab fails, the knowledge acquired remains within the company. The parts that worked can be salvaged, what didn’t work is remembered, and mistakes are less likely to be repeated. When a startup fails, the business ceases to exist outright and its people disperse. The individual knowledge merely scatters, but the institutional knowledge effectively ceases to exist.

For the record, I think startups are great, and anything that makes it easier to start a new company should be encouraged. I even find it hard to hate “acqui-hiring”, if only because, for all the practice’s well-documented flaws, it creates a decent market for late-stage startups. All that said, startups are not a replacement for most R&D. They were never meant to replace in-house innovation.

5. Solutions?

The problems the U.S. faces are well known, but can they be fixed? Can the U.S. become an innovative powerhouse again? It’s certainly possible, but in the current economic and political environment, the outlook is very poor. Over the past 40 years we’ve been gently declining rather than crashing and, to the good, I believe we’ll continue to decline gently rather than collapse. Given the dead weight of political conservatism, an entrenched and useless upper class, and a variety of problems with attitudes and culture, our best realistic hope is slow relative decline coupled with absolute improvement: as the world becomes more innovative, so will the U.S. I consider this “improvement-but-relative-decline” both realistic and the best possibility because a force powerful enough to heal the world (such as a world-changing technological innovation) could also reverse American decline, while anything weaker could not. And it would not be such a terrible outcome. American “decline” is a foregone conclusion– it’s neither right nor sustainable for so few people to have so much control– but just as the average Briton is better off now than in 1900, and arguably better off than the average American today, this need not be a bad thing.

Clearing out the garbage that retards American innovation is probably not politically or economically feasible. I don’t see it being done in a proactive way; I see the garbage disappearing, if it does, through rot and disintegration rather than aggressive cleanup. But I think it’s valuable, for the future, to understand what went wrong in order to give the next center of innovation a bit more longevity. I hope I’ve done my part in that.