Object Disoriented Programming

It is my belief that what is now called “object-oriented programming” (OOP) is going to go down in history as one of the worst programming fads of all time, one that has wrecked countless codebases and burned millions of hours of engineering time worldwide. Though a superficially appealing concept– programming in a manner that is comprehensible at the “big picture” level– it fails to deliver on this promise, it usually fails to improve engineer productivity, and it often leads to unmaintainable, ugly, and even nonsensical code.

I’m not going to claim that object-oriented programming is never useful. There would be two problems with such a claim. First, OOP means many different things to different people– there’s a parsec of daylight between Smalltalk’s approach to OOP and the abysmal horrors currently seen in Java. Second, there are many niches within programming with which I’m unfamiliar and it would be arrogant to claim that OOP is useless in all of them. Almost certainly, there are problems for which the object-oriented approach is one of the more useful. Instead, I’ll make a weaker claim but with full confidence: as the default means of abstraction, as in C++ and Java, object orientation is a disastrous choice.

What’s wrong with it?

The first problem with object-oriented programming is mutable state. Although I’m a major proponent of functional programming, I don’t intend to imply that mutable state is uniformly bad. On the contrary, it’s often good. There are a not-small number of programming scenarios where mutable state is the best available abstraction. But it needs to be handled with extreme caution, because it makes code far more difficult to reason about than purely functional code. A well-designed and practical language will generally allow mutable state, but encourage it to be segregated into the places where it is necessary. A supreme example of this is Haskell, where any function with side effects reflects that fact in its type signature. By contrast, modern OOP encourages the promiscuous distribution of mutable state, to such a degree that difficult-to-reason-about programs are not the exceptional rarity but the norm. Eventually, the code becomes outright incomprehensible– to paraphrase Boromir, “one does not simply read the source code”– and even good programmers (unknowingly) damage the codebase as they modify it, adding complexity without full comprehension. These programs fall into an understood-by-no-one state of limbo and become nearly impossible to debug or analyze: the execution state of a program might live in thousands of different objects!
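
To make the contrast concrete, here is a minimal Haskell sketch (the function names are mine, invented purely for illustration): a pure function whose type forbids side effects, next to an effectful one whose IO type advertises its mutation to every caller.

```haskell
import Data.IORef

-- Pure: the type guarantees that no state is read or written.
double :: Int -> Int
double x = 2 * x

-- Effectful: the IO in the signature advertises the mutation to every caller.
bumpCounter :: IORef Int -> IO Int
bumpCounter ref = do
  modifyIORef ref (+ 1)
  readIORef ref

main :: IO ()
main = do
  ref <- newIORef 0
  n   <- bumpCounter ref
  print (double n)   -- prints 2
```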

Object-oriented programming’s second failing is that it encourages spaghetti code. For example, let’s say that I’m implementing the card game Hearts. To represent cards in the deck, I create a Card object with two attributes: rank and suit, both of some discrete type (integer, enumeration). This is a struct in C, a record in Ocaml, or a data object in Java. So far, no foul. I’ve represented a card exactly how it should be represented. Later on, to represent each player’s hand, I have a Hand object that is essentially a wrapper around an array of Cards, and a Deck object that contains the cards before they are dealt. Nothing too perverted here.
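
For concreteness, this is roughly what that clean representation looks like as a Haskell sketch (the same shape is available as a record in Ocaml); the names and field types are mine, chosen to match the description above.

```haskell
-- A minimal sketch of the "so far, no foul" representation: Card is nothing
-- but a rank and a suit, and Hand and Deck are just collections of Cards.
data Suit = Clubs | Diamonds | Hearts | Spades
  deriving (Eq, Ord, Show)

data Card = Card { rank :: Int   -- 2..14, with 11..14 for Jack..Ace
                 , suit :: Suit
                 } deriving (Eq, Ord, Show)

newtype Hand = Hand [Card] deriving (Show)
newtype Deck = Deck [Card] deriving (Show)
```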

In Hearts, the person with the 2 of clubs leads first, so I might want to determine in whose hand that card is. Ooh! A “clever” optimization draws near! Obviously it is inefficient to check each Hand for the 2 of clubs. So I add a field, hand, to each Card that is set when the card enters or leaves a player’s Hand. This means that every time a Card moves (from one Hand to another, into or out of a Hand) I have to touch the pointer– I’ve just introduced more room for bugs. This field’s type is a Hand pointer (Hand* in C++, just Hand in Java). Since the Card might not be in a Hand, it can be null sometimes, and one has to check for nullness whenever using this field as well. So far, so bad. Notice the circular relationship I’ve now created between the Card and Hand classes.
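
To make the circular relationship visible, here is a hedged Haskell sketch of the “clever” version, with an IORef standing in for the mutable Hand pointer; it is a translation for illustration, not anyone’s real design.

```haskell
import Data.IORef

data Suit = Clubs | Diamonds | Hearts | Spades deriving (Eq, Show)

-- The kludge: Card now carries a nullable back-pointer to whichever Hand
-- holds it, and Hand still contains Cards, so neither type can be
-- understood (or tested) without the other.
data Card = Card { rank   :: Int
                 , suit   :: Suit
                 , holder :: IORef (Maybe Hand)  -- the "Hand*" field
                 }

newtype Hand = Hand { cards :: IORef [Card] }

-- Every move now requires three mutations (two card lists plus the
-- back-pointer), each of which is a new opportunity for a bug.
moveCard :: Card -> Hand -> Hand -> IO ()
moveCard card from to = do
  modifyIORef (cards from)
              (filter (\c -> (rank c, suit c) /= (rank card, suit card)))
  modifyIORef (cards to) (card :)
  writeIORef (holder card) (Just to)
```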

It gets worse. Later, I add a picture attribute to the Card class, so that each Card is coupled with the name of an image file representing its on-screen appearance, along with ten or twelve methods for the various ways I might wish to display a Card. Moreover, it becomes clear that my specification regarding a Card’s location in the game (either in a Hand or not in a Hand) was too weak. If a Card is not in a Hand, it might also be on the table (just played to a trick), in the deck, or out of the round (having been played). So I rename the hand attribute to place, and change its type to Location, from which Hand, Deck, and PlaceOnTable all inherit.

This is ugly, and getting incomprehensible quickly. Consider the reaction of someone who has to maintain this code in the future. What the hell is a Location? From its name, it could be (a) a geographical location, (b) a position in a file, (c) the unique ID of a record in a database, (d) an IP address or port number or, what it actually is, (e) the Card’s location in the game. From the maintainer’s point of view, really getting to the bottom of Location requires understanding Hand, Deck, and PlaceOnTable, which may reside in different files, modules, or even directories. It’s just a mess. Worse yet, in such code the “broken window” behavior starts to set in. Now that the code is bad, those who have to modify it are tempted to do so in the easiest (but often kludgey) way. Kludges multiply and, before long, what should have been a two-field immutable record (Card) has 23 attributes and no one remembers what they all do.

To finish this example, let’s assume that the computer player for this Hearts game contains some very complicated AI, and I’m investigating a bug in the decision-making algorithms. To do this, I need to be able to generate game states as I desire as test cases. Constructing a game state requires that I construct Cards. If Card were left as it should be– a two-field record type– this would be a very easy thing to do. Unfortunately, Card now has so many fields, and it’s not clear which can be omitted or given “mock” values, that constructing one intelligently is no longer possible. Will failing to populate the seemingly irrelevant attributes (like picture, which is presumably connected to graphics and not the internal logic of the game) compromise the validity of my test cases? Hell if I know. At this point, reading, modifying, and testing code become more about guesswork than about anything sound or principled.

Clearly, this is a contrived example, and I can imagine the defenders of object-oriented programming responding with the counterargument, “But I would never write code that way! I’d design the program intelligently in advance.” To that I say: right, for a small project like a Hearts game; wrong, for real-world, complex software developed in the professional world. What I described is certainly not how a single intelligent programmer would code a card game; it is indicative of how software tends to evolve in the real world, with multiple developers involved. Hearts, of course, is a closed system: a game with well-defined rules that isn’t going to change much in the next 6 months. It’s therefore possible to design a Hearts program intelligently from the start and avoid the object-oriented pitfalls I intentionally fell into in this example. But for most real-world software, requirements change and the code is often touched by a number of people with widely varying levels of competence, some of whom barely (if at all) know what they’re doing. The morass I described is what object-oriented code devolves into as the number of lines of code and, more importantly, the number of hands increase. It’s virtually inevitable.

A related point is that object-oriented programming tends to be top-down, with types being subtypes of Object. What this means is that data is often vaguely defined, semantically speaking. Did you know that the integer 5 doubles as a DessertToppingFactoryImpl? I sure didn’t. An alternative and usually superior mode of specification is bottom-up, as seen in languages like Ocaml and Haskell. These languages offer simple base types and encourage the user to build more complex types from them. If you’re unsure what a Person is, you can read the code and discover that it has a name field, which is a string, and a birthday field, which is a Date. If you’re unsure what a Date is, you can read the code and discover that it’s a record of three integers, labelled year, month, and day. If you want to get “to the bottom” of a datatype or function when types are built from the bottom up, you can do so, and it rarely involves pinging across so many (possibly semi-irrelevant) abstractions and files as to shatter one’s “flow”. Circular dependencies are very rare in bottom-up languages. Recursion, in languages like ML, can exist both in datatypes and functions, but it’s hard to cross modules with it or create such obscene indirection as to make comprehension enormously difficult. By contrast, it’s not uncommon to find circular dependencies in object-oriented code. In the atrocious example I gave above, Hand depends on Card, Card depends on Location, and Hand inherits from Location.
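
In Haskell, the Person and Date described above are two short definitions, readable from the bottom up:

```haskell
-- Bottom-up definitions: each type is built from simpler ones, and "getting
-- to the bottom" of Person takes two short definitions, not a class hierarchy.
data Date = Date { year  :: Int
                 , month :: Int
                 , day   :: Int
                 } deriving (Eq, Show)

data Person = Person { name     :: String
                     , birthday :: Date
                     } deriving (Eq, Show)
```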

Why does OOP devolve?

Above, I described the consequences of undisciplined object-oriented programming. In limited doses, object-oriented programming is not so terrible. Neither, for that matter, is the much-hated “goto” statement. Both are tolerable when used in extremely disciplined ways with reasonable and self-evident intentions. Yet when used by any but the most disciplined programmers, OOP devolves into a mess. This is hilarious in the context of OOP’s original promise to business types in the 1990s– that it would enable mediocre programmers to be productive. What it actually did is create a coding environment in which mediocre programmers (and rushed or indisposed good ones) are negatively productive. It’s true that terrible code is possible in any language or programming paradigm; what makes object orientation such a terrible default abstraction is that, as with unstructured programming, bad code is an asymptotic inevitability as an object-oriented program grows. To discuss why this occurs, it’s necessary to examine object orientation from a more academic perspective, and to pose a question to which thousands of answers have been given.

What’s an object? 

To a first approximation, one can think of an object as something that receives messages and performs actions, which usually include returning data to the sender of the message. Unlike a pure function, the object is allowed to vary its response to each message. In fact, it’s often required to do so. The object often contains state that is (by design) not directly accessible, but only observable by sending messages to the object. In this light, the object can be compared to a remote procedure call (RPC) server. Its innards are hidden, possibly inaccessible, and this is generally a good thing in the context of, for example, a web service. When I connect to a website, I don’t care in the least about the state of its thread-pooling mechanisms. I don’t want to know about that stuff, and I shouldn’t be allowed access to it. Nor do I care what sorting algorithm an email client uses to sort my email, as long as I get the right results. On the other hand, in the context of code whose internals one is (or, at least, might in the future be) responsible for comprehending, such incomplete comprehension is a very bad thing.

To “What is an object?” the answer I would give is that one should think of it as a miniature RPC server. It’s not actually remote, nor as complex internally as a real RPC or web server, but it can be thought of this way in terms of its (intentional) opacity. This sheds light on the question of whether object-oriented programming is “bad”, and on when to use objects. Are RPC servers invariably bad? Of course not. On the other hand, would anyone in his right mind code Hearts in such a way that each Card were its own RPC server? No. That would be insane. If people treated object topologies with the same care as network topologies, a lot of horrible things that have been done to code in the name of OOP might never have occurred.

Alan Kay, the inventor of Smalltalk and the original conception of “object-oriented programming”, has argued that the failure of what passes for OOP in modern software is that objects are too small and that there are too many of them. Originally, object-oriented programming was intended to involve large objects that encapsulated state behind interfaces that were easier to understand than the potentially complicated implementations. In that context, OOP as originally defined is quite powerful and good; even non-OOP languages have adopted that virtue (also known as encapsulation) in the form of modules.
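
For illustration, here is a minimal, hypothetical Haskell module in that spirit: encapsulation lives at the module boundary, where the export list hides the representation of Counter while exposing its operations.

```haskell
-- A sketch of encapsulation without objects: the module exports an abstract
-- Counter type and three operations, while the representation stays hidden.
module Counter (Counter, new, increment, value) where

newtype Counter = Counter Int

new :: Counter
new = Counter 0

increment :: Counter -> Counter
increment (Counter n) = Counter (n + 1)

value :: Counter -> Int
value (Counter n) = n
```

Client code can use a Counter without ever seeing, or depending on, the fact that it is an Int underneath.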

Still, the RPC-server metaphor for “What is an object?” is not quite right, and the philosophical notion of “object” is deeper. An object, in software engineering, should be seen as a thing of which the user is allowed (and often supposed) to have incomplete knowledge. Incomplete knowledge isn’t always a bad thing; often, it’s an outright necessity due to the complexity of the system. For example, SQL is a language in which the user specifies an ad-hoc query to be run against a database with no indication of what algorithm to use; the database system figures that out. For this particular application, incomplete knowledge is beneficial; it would be ridiculous to burden everyone who wants to use a database with the immense complexity of its internals.

Object orientation is the programming paradigm based on incomplete knowledge. Its purpose is to enable computation with data whose details are not fully known. In a way, this is concordant with the English use of the word “object” as a synonym for “thing”: it’s an item of which one’s knowledge is incomplete. “What is that thing on the table?” “I don’t know, some weird object.” Object-oriented programming is designed to allow people to work with things they don’t fully understand, and even to modify them in spite of that incomplete comprehension. Sometimes that’s useful or necessary, because complete knowledge of a complex program can be humanly impossible. Unfortunately, over time, this over-tolerance of incomplete knowledge leads to an environment in which important components can elude the knowledge of every individual responsible for creating them; the knowledge is strewn haphazardly across many minds.

Modularity

Probably the most important predictor of whether a codebase will remain comprehensible as it becomes large is whether it’s modular. Are the components individually comprehensible, or do they form an irreducibly complex tangle that must be understood in its entirety (which may not even be possible) before any part of it can be understood? In the latter case, improvement in software quality grinds to a halt, or even reverses, as the size of the codebase increases. In terms of modularity, the object-oriented paradigm generally performs poorly, facilitating the haphazard growth of codebases in which answering simple questions like “How do I create and use a Foo object?” can require days-long forensic capers.

The truth about “power”

Often, people describe programming techniques and tools as “powerful”, and that’s taken to be an endorsement. A counterintuitive and dirty secret about software engineering is that “power” is not always a good thing. For a “hacker”– a person writing “one-off” code that is unlikely to ever require future reading by anyone, including the author– all powerful abstractions, because they save time, can be considered good. However, in the more general software engineering context, where any code written is likely to require maintenance and future comprehension, power can be bad. For example, macros in languages like C and Lisp are immensely powerful. Yet it’s obnoxiously easy to write incomprehensible code using these features.

Objects are, likewise, immensely powerful (or “heavyweight”) beasts when features like inheritance, dynamic method dispatch, open recursion, et cetera are considered. If nothing else, one notes that objects can do anything that pure functions can do– and more. The notion of “object” is both a very powerful and a very vague abstraction.

“Hackers” like power, and from that perspective a language can be judged by the power of the abstractions it offers. But real-world software engineers spend an unpleasantly large amount of time reading and maintaining others’ code. From an engineer’s perspective, a language is good insofar as it constrains what other programmers can do to those of us who have to maintain their code in the future. In this light, the unrestrained use of Lisp macros and object-oriented programming is bad, bad, bad. From this perspective, a language like Ocaml or Haskell– of middling power but beautifully designed to encourage deployment of the right abstractions– is far better than a more “powerful” one like Ruby.

As an aside, a deep problem in programming language design is that far too many languages are designed with the interests of code writers foremost in mind. And it’s quite enjoyable, from a writer’s perspective, to use esoteric metaprogramming features and complex object patterns. Yet very few languages are designed to provide a beautiful experience for readers of code. In my experience, ML does this best, and Haskell does it well, while most of the mainstream languages fall short of being even satisfactory. In most real-world software environments, reading code is so unpleasant that it hardly gets done in any detail at all. Object-oriented programming, and the haphazard monstrosities its “powerful” abstractions enable, is a major culprit.

Solution?

The truth, I think, about object-oriented programming is that most of its core concepts– inheritance, easy extensibility of data, proliferation of state– should be treated with the same caution and respect that a humble and intelligent programmer gives to mutable state. These abstractions can be powerful and work beautifully in the right circumstances, but they should be used very sparingly and only by people who understand what they are actually doing. In Lisp, it is generally held to be good practice never to write a macro when a function will do. I would argue the same with regard to object-oriented programming: never write an object-oriented program for a problem where a functional or cleanly imperative approach will suffice. Certainly, to make object orientation the default means of abstraction, as C++ and Java have, is a proven disaster.

Abstractions, and especially powerful ones, aren’t always good. Using the right abstractions is of utmost importance. Abstractions that are too vague, for example, merely clutter code with useless nouns. As the first means of abstraction in high-level languages, higher-order functions suffice most of the time– probably over 95% of the time, in well-factored code. Objects may find favor for being more general than higher-order functions, but “more general” also means less specific, and for the purpose of code comprehension, this is a hindrance, not a feature. If cleaner and more comprehensible pure functions and algebraic data types can be used in well over 90 percent of the places where objects appear in OO languages, they should be used in lieu of objects, and languages should support them well– something C++ and Java fail to do.
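
As a concrete (and hedged) sketch of that claim, here is how the Hearts example reads with algebraic data types and a higher-order function in place of the Location hierarchy; all of the names are mine.

```haskell
import Data.List (find)

data Suit = Clubs | Diamonds | Hearts | Spades deriving (Eq, Show)
data Card = Card { rank :: Int, suit :: Suit } deriving (Eq, Show)

-- The Location/Hand/Deck/PlaceOnTable hierarchy from the Hearts example,
-- collapsed into one algebraic data type that fits on a few lines.
data Place
  = InHand Int   -- player number
  | OnTable
  | InDeck
  | OutOfRound
  deriving (Eq, Show)

-- Card stays a two-field record; where each card sits is tracked alongside
-- it, not inside it.
type GameState = [(Card, Place)]

-- A higher-order function (find, with a predicate) does the work the
-- back-pointer was invented for: locate the 2 of clubs.
whoLeads :: GameState -> Maybe Place
whoLeads state = snd <$> find (\(c, _) -> c == Card 2 Clubs) state
```

Card stays the two-field record it always should have been, and “where is the 2 of clubs?” becomes one predicate passed to a library function.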

In a better world, programmers would be required to learn how to use functions before progressing to objects, and object-oriented features would hold the status of being available but deployed only when needed, or in the rare cases where such features make for remarkably better code. To start, this change needs to come about at the language level. Instead of Java or C++ being the first languages to which most programmers are introduced, that status should be shifted to a language like Scheme, Haskell, or my personal favorite for this purpose: Ocaml.

Why isn’t the U.S. innovating? Some answers.

This post is in direct response to this thread on Hacker News, focused on the question: why isn’t the U.S. building great new things as much as it used to? There are a number of reasons and, in the interest of keeping the discussion short, I’m going to analyze a few of the less-cited ones. The influence of the short-sighted business mentality, political corruption, and psychological risk-aversion on this country’s meager showing in innovation over the past 40 years is well understood, so I’m going to focus on some of the less well-publicized problems.

1. Transport as microcosm

For a case study in national failure, consider human transportation in the United States since 1960. It’s shameful: no progress at all. We’ve become great at sending terabits of data around the globe, and we’re not bad at freight transportation, but we’re awful when it comes to moving people. Our trains are so laughable that we market as “premium” a level of speed (Acela, at 120 miles per hour) that Europeans just call “trains”. Worse yet, for a family of four, air and rail travel are actually more expensive per mile than the abominably inefficient automobile. As a country, we should be massively embarrassed by the state of human transportation.

Human transportation in the U.S. has an air of having given up. We haven’t progressed– in speed or service or price– since the 1960s. The most common way of getting to work is still a means (automotive) that scales horribly (traffic jams, “rush hour”) and we still use airplanes (instead of high-speed trains) for mid-distance travel, a decision that made some sense in the context of the Cold War but is wasteful and idiotic now. This isn’t just unpleasant and expensive, but also dangerous, in light of the environmental effects of greenhouse gases.

Why so stagnant? The problem is that we have, for the most part, given up on “hard” problems. By “hard”, I don’t mean “difficult” so much as “physical”. As a nation, we’ve become symbolic manipulators, often involved in deeply self- and mutually-referential work, who avoid interacting with physical reality as much as we can. Abstraction has been immensely useful, especially in computing, but it has also led us away from critically important physical “grunt” work to the point where a lot of people never do it.

I don’t mean to imply that no one does that kind of work in the United States. A fair number of people do, but the classes of people who manage large companies have, in almost all cases, never worked in a job that required physical labor rather than simply directing others in what to do. So to them, and to many of us as offices replace factories, the physical world is a deeply scary place that doesn’t play on our terms.

2. Losing the “rest of the best”.

One doesn’t have to look far to find complaints by vocal scientists, researchers, and academics that the best students are being “poached” by Wall Street and large-firm law (“biglaw”) instead of going into science and technology. One nuance that must be attached to that complaint: it’s not true. At least, not as commonly voiced.

The “best of the best” (99.5th percentile and up) still overwhelmingly prefer research and technology over finance. Although very few research jobs match the compensation available to even mediocre performers in finance, the work is a lot more rewarding. Banking is all about making enough money by age 40 never to have to work again; a job with high autonomy (as in research) makes work enjoyable in its own right. Moreover, banking and biglaw require a certain conformity that makes a 99.5th-percentile intellect a serious liability. That top investment bankers seem outright stupid from a certain vantage point does not make them easy competition; they are more difficult competition because of their intellectual limitations. So, for these reasons and many more, the best of the best are still becoming professors, technologists, and, if sufficiently entrepreneurial, startup founders.

What is changing is that the “rest of the best” have been losing interest in science and research. The decline of scientific and academic job markets has been mild for the best-of-the-best, who are still able to find middle-class jobs and merely have fewer choices, but catastrophic for the rest-of-the-best. When the choice is between a miserable adjunct professorship at an uninspiring university and a decent shot at a seven-figure income in finance, the decision becomes obvious.

America loves winner-take-all competitions, so outsized rewards for A players, to the detriment of B players, seem like something American society ought to consider just and valuable. The problem is that this doesn’t work for the sciences and technology. First, the “idea people” need a lot of support in order to bring their concepts to fruition. The A players are generally poor at selling their vision and communicating why their ideas are useful (i.e. why they should be paid for something that doesn’t look like work), and the B players have better options than becoming second-rate scientists, given how pathetic scientific and academic careers now are for non-“rock stars”. What is actually happening with regard to the talent spectrum is the emergence of a bimodal distribution. With the filtering out of the B players, academia is becoming a two-class industry split between A and C players, because the second-tier jobs are not compelling enough to attract the B players. This two-class dynamic is never good for an industry. In fact, it’s viciously counterproductive, because the C players are often so incompetent that their contributions are (factoring in morale costs) negative.

This two-class phenomenon has already happened in computer programming, to distinctly negative effects that are responsible for the generally low quality of software. What I’ve observed is that there are very few middling programmers. The great programmers take jobs in elite technology companies or found startups. The bad programmers work on uninspiring projects in the bowels of corporate nowhere– back-office work in banks, boring enterprise software, and the like. There isn’t much interaction between the two tiers– virtually two separate industries– and with this lack of cross-pollination, the bad programmers don’t get much encouragement to get better. As designing decent, usable software is very challenging even for the great programmers, one can imagine what’s created when bad programmers do it.

In reality, the B players are quite important for a variety of reasons. First is that this categorization is far from static and B players often turn into A players as they mature. (This is necessary in order to replace the A players who become lazy after getting tenure in academia, or after reaching some comparable platform of comfort in other industries.) Second is that B players are likely to become A players in other domains later– as politicians and business executives– and it’s far better to have people in those positions of power who are scientifically literate. Third is that a lot of the work in science and technology isn’t glamorous and doesn’t require genius, but it does demand enough insight and competence to call for at least a B (not C or lower) player. If B players aren’t adequately compensated for this work and therefore can’t be hired into it, such tasks either get passed to A players (taking up time that could be used on more challenging work) or to C players (who do such a poor job that more competent people’s time must be employed, in any case, to check and fix their work).

Science, research, and academia are now careers that one should enter only with supreme confidence of acquiring “A player” status, because the outcomes for anyone else are abysmal. In the long term, that makes the scientific and research community less friendly to people who may not be technically superior but would benefit the sciences indirectly by enabling cross-linkages between science and the rest of society. The result is a slow decline in the status of science and technology as time passes.

3. No one takes out the trash. 

Software companies find that, if they don’t tend their codebases by removing or fixing low-quality code, they become crippled later by “legacy” code and by technical decisions that were reasonable at one time but proved counterproductive later on. This isn’t only a problem with software, but with societies in general. Bad laws are hard to unwrite, and detrimental interest groups are difficult to refuse once they establish a presence.

Healthcare reform is a critical example of this. President Obama found fixing the murderously broken, private-insurance-based healthcare system to be politically infeasible due to entrenched political dysfunction. This sort of corruption can be framed as a morality debate, but from a functional perspective it manifests not as a subjective matter of “bad people” but more objectively as a network of inappropriate relationships and perverse dependencies. In this case, I refer to the interaction between private health insurance companies (which profit immensely from a horrible system) and political officials (who are given incentives not to change it, through the campaign-finance system).

Garbage collection in American society is not going to be easy. Too many people are situated in positions that benefit from the dysfunction– like urban cockroaches, creatures that thrive in damaged environments– and the country now has an upper class defined by parasitism and corruption rather than leadership. Coercively healing society will likely lead to intense (and possibly violent) retribution from those who currently benefit from its failure and who will perceive themselves as deprived if it is ever fixed.

What does this have to do with innovation? Simply put, if society is full of garbage– inappropriate relationships that hamper good decision-making, broken and antiquated policies and regulations, institutions that don’t work, the wrong people in positions of power– then an innovator is forced to negotiate an obstacle course of idiocy in order to get anything done. There just isn’t room if the garbage is allowed to stay. Moreover, since innovations often endanger people in power, there are some who fight actively to keep the trash in place, or even to make more of it.

4. M&A has replaced R&D.

A person who wants the autonomy, risk-tolerance, and upside potential (in terms of contribution, if not remuneration) of an R&D job is unlikely to find it in the 21st century, with the practical death of blue-sky research. Few of those jobs exist, many who have them stay “emeritus” forever instead of having the decency to retire and free up positions, and getting one without a PhD from a top-5 university (if not a post-doc) is virtually unheard-of today. Gordon Gekko and the next-quarter mentality have won. The high-autonomy R&D jobs that remain exist only as a marketing expense– a company hiring a famous researcher for the benefit of saying that he or she works there. Where has the rest of R&D gone? Startups. Instead of funding R&D, large companies are now buying startups, letting the innovation occur at someone else’s risk.

There is some good in this. A great idea can turn into a full-fledged company instead of being mothballed because it cannibalizes something else in the parent company’s product portfolio. There is also, especially from the perspective of compensation, a lot more upside in being an entrepreneur than a salaried employee at an R&D lab. All of this said, there are some considerable flaws in this arrangement. First is that a lot of research– projects that might take 5 to 10 years to produce a profitable result– will never be done under this model. Second is that getting funding for a startup generally has more to do with inherited social connections, charisma, and a perception of safety in investment than with the quality of the idea. This is an intractable trait of the startup game, because “the idea” is likely to be reinvented between first funding and becoming a full-fledged business. The result is that far too many “Me, too” startups and gimmicks get funded and too little genuine innovation occurs. Third and most severe is what happens upon failure. When an initiative in an R&D lab fails, the knowledge acquired from it remains within the company. The parts that worked can be salvaged, and what didn’t work is remembered, so mistakes are less likely to be repeated. With startups, the business ceases to exist outright and its people disperse. The individual knowledge merely scatters, but the institutional knowledge effectively ceases to exist.

For the record, I think startups are great and that anything that makes it easier to start a new company should be encouraged. I even find it hard to hate “acqui-hiring”, if only because, for all the practice’s well-studied flaws, it creates a decent market for late-stage startup companies. All that said, startups have in practice become the replacement for most R&D, but they were never meant to replace in-house innovation.

5. Solutions?

The problems the U.S. faces are well known, but can they be fixed? Can the U.S. become an innovative powerhouse again? It’s certainly possible, but in the current economic and political environment, the outlook is very poor. Over the past 40 years, we’ve been gently declining rather than crashing and, to the good, I believe we’ll continue to decline gently rather than collapse. Given the dead weight of political conservatism, an entrenched and useless upper class, and a variety of problems with attitudes and the culture, our best realistic hope is slow relative decline coupled with absolute improvement– that as the world becomes more innovative, so will the U.S. The reason I consider this “improvement-but-relative-decline” scenario both realistic and the best possibility is that a force (such as a world-changing technological innovation) powerful enough to heal the world could also reverse American decline, but anything less powerful cannot save the U.S. from calamity. This would not be such a terrible outcome. American “decline” is a foregone conclusion– and it’s neither right nor sustainable for so few people to have so much control– but just as the average Briton is better off now than in 1900, and arguably better off than the average American now, this need not be a bad thing.

Clearing out the garbage that retards American innovation is probably not politically or economically feasible. I don’t see it being done in a proactive way; I see the garbage disappearing, if it does, through rot and disintegration rather than aggressive cleanup. But I think it’s valuable, for the future, to understand what went wrong in order to give the next center of innovation a bit more longevity. I hope I’ve done my part in that.