Wall Street to Socialism: a generation’s progress

Conventional wisdom holds that a social democratic overthrow of corporate capitalism is unlikely to happen in the United States: people are too happy, complacent, and conservative to carry it out. This “wisdom” is wrong, and I will establish as much by pulling evidence from the most unlikely source: a generation and nation that, thirty years ago, fell in love with Wall Street and the corporate rich with enough energy to bring about massive cultural changes and, eventually, the nation’s own ruin. I choose the financial industry simply because it represents the ambitions (base as they may have been) of a generation’s most energetic individuals, at a time when the United States had a different economy and presented individuals with radically different options and incentives. From this analysis, I hope to establish that the next crop of young people will throw equal energy into a completely antipodal cause: the overthrow of the corporate menace.

The Wall Street mania of the past three decades was driven by the alluring but remote possibility, for the rising generation, of becoming a titan like Michael Milken, Carl Icahn, or the fictional Gordon Gekko. Even films like Wall Street and American Psycho, which portrayed the industry quite negatively from any mature perspective, inadvertently made it attractive to young people with empty, materialistic ambitions. What was Wall Street selling? What did it offer that inspired thirty years’ worth of bright college students to toil for 80, 100, and sometimes 120 hours per week on boring work in distressing, hierarchical environments? Money is the common explanation, but that’s far from the best answer. What it offered, or at least pretended to offer, was economic security: the option, in time, to own one’s life rather than rent it from a boss who might one day close the factory or ship one’s job overseas. The Bud Foxes were the first to realize that “middle-class America” still relied on the ruling class for survival and might not exist in a few decades; history seems to be proving them right.

Only raving egotists want to be enormously rich, buying new Ferraris every week and owning $10-million apartments. But everyone wants freedom, and the collapse of the professions in the United States made it evident that economic freedom was becoming a rare commodity. (The professions exist to give a class of people the ability, if not enormous wealth, to reliably sell labor at a decent wage; a privilege that a frenetic economy such as the modern global one can afford nobody.) Wall Street offered a path to economic freedom: a person could get very rich in banking, retire at 40, and work no more. No illusions were ever given about the work being rewarding, socially beneficial, or even especially interesting. At the entry level, banking jobs were, with neither apology nor any attempt to conceal the fact, notoriously unpleasant. Even in the comparatively cushy “Bateman days” of the 1980s, when expectations of mid-level bankers were still soft and reminiscent of Mad Men’s culture, entry-level bankers were expected to work 90-hour weeks. But the promise associated with an investment banking “analyst” stint was that it would open doors, for ambitious and smart young people from the middle class, into the well-paid corporate and financial sinecures previously available only to the upper class.

The upper classes did not want to open any doors, but in the case of banking, they had to. What had been a lazy and somewhat boring upper-class “safety career”, available mainly as a means by which ruined WASP aristocrats could replenish lost fortunes, changed abruptly with the escalating demands of clients, as well as innovations in derivative securities, which required a good deal of mathematical knowledge to comprehend. Suddenly, banking needed better people (smarter and harder-working) than it could find if it restricted entry to the upper class. The industry lost its genteel indolence and had to hire “mercenaries” from lower social classes: people who could work extremely long hours, often exceeding 100 per week, on unpleasant tasks without complaint; and, for different purposes, people with sufficient mathematical talent to actually understand the new derivative securities.

The combination of these two traits is extremely rare, and the few who have both mathematical talent and herculean endurance usually have better options than investment banking, so the banks had to recruit on two fronts. To hire mathematical talent, they developed quantitative analysis (“quant”) divisions to foster research and development. To get workhorses, capable of bearing punishing hours and low autonomy for months at a clip, they created analyst programs that were available, with no training, immediately out of college. Analysts don’t actually “analyze” anything; they take on the worst of the grunt work, and the analyst program’s real purpose has always been to separate those who can remain functional under severe duress (you don’t need to be smart or socially skilled to be an investment banker, but you must have the rare talent of being not-terrible at both after a double-all-nighter) from the majority who cannot. (Half of an analyst class burns out in the first 12 months, and only about 1 analyst in 10 becomes an associate.)

Quants were happy with the opportunity to make middling six-figure salaries in applied research, being paid three to five times what was available to them in academia, as long as their autonomy remained high (by corporate standards) and their work was interesting. Since they were mostly ex-academics with PhDs, they’d never intended to be rich, and the face-value offer given to them was enough. Analysts, on the other hand, were paid worse on an hourly basis than retail managers (and were arguably overpaid at that, since they were useless until “proven” via the analyst crucible), so they needed an additional incentive to endure their miserable, 90-hour-per-week jobs. Banks provided it: the (rarely delivered-upon) promise of future economic freedom, either in the potential for enormous compensation (for those who stayed in banking) or in the availability of corporate sinecures (“exit opportunities” for those who left). Most analysts never achieved either goal, instead quietly failing and retreating from view, but the lottery had enough winners to create a sense of promise, especially in a culture that made the successes highly visible but kept the failures hidden (mostly because they, much like failed academics, grew too self-loathing to speak up about their blasted careers).

Academia and large-firm law (“biglaw”) pulled the same trick, realizing that people who had been used to winning for their entire lives (getting top grades, being admitted to top undergraduate and graduate schools, landing selective entry-level jobs) were utterly terrible at estimating their odds of future success, thanks to the 20 years of “beginner’s luck” they had experienced. (Beginner’s luck actually does exist, because those with poor initial luck in gambling don’t “begin”; they quit.) Those who had always beaten the odds and bent the rules would fail to see that the rules now applied to them, and would therefore misapprehend a 10% chance (of tenure, biglaw partnership, MD status in banking) as a certainty; after all, they’d survived so many 1-in-10 cuts in the past.

This gets to the heart of many things, one of which is executive overcompensation in corporate America. Most people attribute the absurd and obviously undeserved overpayment of corporate CEOs to back-scratching and simple greed, and that’s a major part of it, but it’s also a mechanism for printing money in the form of “dreambucks”. If young people overestimate their chances of ascending to the parasitic ranks (each believing they have a 1% chance of becoming a CEO instead of 0.005%, and a 25% chance of attaining a middling executive rank rather than 0.2%), then this is an arbitrage-like opportunity to delude them about their future income should they continue on a certain career path. It’s a very effective mechanism for promising future wealth that won’t be delivered, because it doesn’t exist, but that also can’t be legally demanded, because it was never formally promised.
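The size of that arbitrage can be made concrete with a toy expected-value calculation. The probabilities are the (already hypothetical) figures above; the dollar payouts are invented round numbers, purely for illustration:

```python
# Toy "dreambucks" arithmetic. The probabilities come from the text above;
# the payout figures are invented placeholders, not data.
ceo_pay, exec_pay = 5_000_000, 1_000_000

perceived = 0.01 * ceo_pay + 0.25 * exec_pay      # believed odds: 1% and 25%
actual = 0.00005 * ceo_pay + 0.002 * exec_pay     # real odds: 0.005% and 0.2%

print(round(perceived), round(actual))            # 300000 2250
```

The roughly hundredfold gap between the two expected values is the quantity of “dreambucks” being printed.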

The overconfidence of youth made sense in the context of the 1980s, with its focus on individual prosperity and enormity, but one thing that’s clear in 2011 is that the easy, sure paths to economic freedom (which were never as easy or sure as their hopemongers wanted to let on, but were much easier and surer even 10 years ago than they are now) are gone for the foreseeable future. The world of 2011 is not some hopeless post-apocalyptic hellhole in which success is impossible, but it is one lacking in “obvious” paths to excessive individual prosperity. A few, as in any generation, will become inordinately wealthy on Wall Street or through VC-funded technology startups, but most won’t, and the odds are good nowhere. The middle class is learning, on the whole, that entry into the ruling class, even for the 140-IQers once sought as quants and the 110-hour workhorses once hired as analysts, is so remote a possibility that it shouldn’t even be considered, and that reviving America’s middle-class prosperity is its only hope.

The goal of the ambitious Wall Streeter was to reach a level of wealth at which he’d no longer need to work. But work is a fact of life, and the accumulation of enormous wealth is, for most, an extremely inefficient means of achieving a decent life (to say nothing of its parasitism and high rate of failure). Why not, instead, improve work? People will always have to work, on account of psychological if not economic needs, but there’s no good reason for work to be the miserable endeavor that it is for most people.

Corporate capitalism makes work unpleasant, and it does so quite intentionally. The low autonomy, punishing sacrifices, insulting conditions, artificial scarcity, miserable compensation, and menial tasks that average workers face exist for a reason: work, for those not of rank, has to be painful in order to make people fight to “move up”. The system motivates people with excessive and often unrealistic promises: one who keeps working hard will be able to buy himself out of the “common curse” (of needing to work) entirely. Yet it’s obvious that, for the vast majority, the million-dollar salaries, corporate sinecures, and tenured professorships never come. Those who do escape the need to work often find themselves so bored and isolated by their supposedly “free” lives that they readily rejoin the working world, albeit on far better terms than are allowed to common people. The psychological need to work (not necessarily the need to hold a paid job, but the desire to contribute in a meaningful way to society) is so deep-seated that the vast majority of people will work no matter what sort of society we have. (I count unpaid stay-at-home parents as working people; there is no good reason not to.) All of this firmly establishes that it’s better to improve work for everyone than to devolve into vicious battles fought by social-climbing, ambitious individuals seeking to escape it.

A society that implements libertarian socialism will need energetic, enthusiastic workers, but it will be forced to use positive incentives rather than the negative ones that currently characterize economic life in the United States. (This is why total equality of results, which is not even possible, would be undesirable.) In a world where a basic income, universal healthcare, and appropriate education are guaranteed, there will still be janitors (people will pay others to do the work), but they will be treated with far more respect than they are now. Rather than being spat upon, they will be honored for the sacrifice of their time. When libertarian socialism finally overthrows corporate capitalism, the world ushered in will be so superior in function, beauty, and social justice to the one we have now that people will wonder how we possibly allowed such an archaic, brutal system to exist for so long. Perhaps surprisingly, I can answer that.

The term “crab mentality” is used in the Philippines to describe poor communities that work to prevent their most talented individuals from escaping or succeeding. It refers to a proverb about captured crabs in a bucket: an individual crab could easily escape, but in a group, the other crabs hold back any that might get out. (I have no idea whether this is actually true of real crustaceans.) I might take the concept a little further and say that what’s described here is a pulling crab mentality: the ambitious person is held back by his impoverished and envious neighbors. In the United States’ conservative/individualist mindset, we see a pushing crab mentality instead: the escaping person is so desperate to get out of his milieu (more likely, in the U.S., to be middle-class mediocrity than true poverty) that he beats the others down in order to rise. This reckless and socially harmful individualism explains why upwardly mobile Americans accepted corporate rule, rather than destroying it, for as long as they did: they’d rather ascend to rank in an evil system, something an individual can occasionally accomplish without assistance, than tear it down, something a single person cannot do in the absence of a like-minded and supportive movement, but that the group can do easily if it is not divided against itself (as the upper class prefers, hence its willingness to use racism and religious bigotry to polarize this nation). The good news is that, between a systemic and long-term collapse of the old-style corporate system and an increasingly global consciousness, this pushing crab mentality is on its way out. Given time, it will fade.

Wall Street filled its ranks using a promise of individual prosperity, delivered on it about 10 percent of the time, if that often, and got a lot of energetic, talented people while doing so. Contrary to the laments of some Ivy League professors, investment banking never actually drew “the best and the brightest”, but it drew a fair portion of the “rest of the brightest”, and that’s often what counts. History is driven more by cultural forces than by eminent individuals, and the most important sector of humanity from this perspective (as a mover of culture, history, or society) has never been the most brilliant 0.1%, nor the average-to-below 85%, but the 14.9% in between: the people likely, during the past three decades, to be drawn to Wall Street. If you want to predict the future of a large group of people, focus neither on the idiosyncratic luminaries (who are the same freethinking, brilliant, and mostly ignored people whether in the U.S., China, Finland, or sub-Saharan Africa) nor on the average people, but on the bulk of smart-to-very-smart people between those two segments.

To assert that the people who, in one generation, suited up and learned how to proofread pitchbooks will, in another, become socialist luminaries would obviously be absurd; time is passing, so it will not be the same people. What remains true is that people, speaking in aggregates, do not have fixed leanings but respond to incentives and environments. The “yuppie” movement occurred not because a generation was intrinsically uncultured and materialistic, but because certain opportunities (ascent, thanks to a corporate upper class in need of technical talent it did not have among its own, for ambitious middle-class people to rise very high) existed in the 1970s and ’80s that have since disappeared. If the rising generation faced the incentives and options available to the “yuppies”, it would likely grow up to resemble them; but it does not. From this point, what does it take to create a generation of energetic, progressive, and altruistic socialist revolutionaries? First, the corporate system must be in such a state of decline and closure that an enormous number of people cannot be distracted by decent jobs and middle-class comfort. Note: the Petty Depression of 2007-2015(?) is having this exact effect. Second, it requires an intellectual environment in which information flows freely and the best ideas can win and percolate to the surface. Note: the Internet-borne Second Enlightenment of rational liberalism is evidence of this happening. Rational liberalism has taken such a strong hold in geek culture that conservatives decry it as a “hive mind” on websites such as Reddit. Third, it requires the sense that positive change can be accomplished, and that the process will be peaceful (or at least peaceful enough that the costs are outweighed enormously by the benefits). Note: Tunisia, Egypt, and (one might even include) Wisconsin are all encouraging examples.

We have the “perfect storm” necessary to bring forth a socialist generation in the United States. Consider the following: almost all of the work done in the middle and higher echelons of corporate America was purchased with offers entailing small chances at individual prosperity, a 10 or 20 percent chance that hard work would eventually be rewarded. A monstrous amount of effort was expended to purchase these professional lottery tickets. Much of this work was toward negative ends (as seen in the financial crash of 2008), but the amount of energy expended was, nonetheless, incredible. If it were harnessed toward the downfall, instead of the support, of corporate capitalism, society’s enemies would fall from power within weeks.

It goes without saying that most rational people would rather improve the lives of everyone than improve only their own. There are some pathological individuals who value relative superiority over absolute well-being, those who would rather reign in hell than serve in heaven, but they’ve never been a majority. (They are, sadly, the majority of those holding power in a corporate system so favorable to narcissists and psychopaths as to have turned its upper ranks into nests of such people, but that matter opens another discussion.) Treating such perverted individuals as marginal (and they would actually be marginal, were it not for power’s twin tendencies to attract the corrupt and to further corrupt those who have it), one can make the following assertion: people would much rather have an 80 percent chance of a free and prosperous life for all than a 10 percent chance of a prosperous life for themselves alone.

This, of course, is why I expect to see a great deal of energy, talent, and creativity thrown into the effort to overthrow corporate capitalism in favor of rationalistic, libertarian socialism. If people will expend enormous energy and creativity on, for example, the creation of abstruse derivative securities in order to nab a 1-in-10 shot at a better life for themselves, just imagine what they’ll do for an 8-in-10 shot at a better life for everyone. Of course, the “80 percent” estimate is a number for which I have little firm basis (probabilities can only be stated for uncertainties of known structure, and this is one of unknown structure), but I don’t consider it far off base. Asymptotically, the probability of corporate capitalism’s eventual fall is near 100 percent. I consider the 80-percent estimate appropriate for its overthrow, and its subsequent replacement by a society that is better (likely, but not guaranteed, especially in the undesirable case wherein the revolution becomes violent) rather than worse, within the next 50 years. If that is achieved, the potential to solve otherwise intractable social problems becomes immense, since most of the nation’s problems emerge, at root, from poverty and classism. When the U.S. disinherits its abusive, parasitic upper class (by removing the privileges associated with wealth, as well as wealth’s self-perpetuating enormity) and eradicates poverty within a generation, the improvements to the general well-being of all will be massive.

With all this in mind, the overthrow of corporate capitalism seems not unlikely or speculative, but inevitable. It may not happen as soon as one may hope, but as an eventual outcome, I consider it certain.

A Nonconventional Philosophical Argument for Survival

In this essay, I present one case for the existence of an afterlife. It is not a scientific argument; it’s certainly not a proof, and it offers nothing in the way of scientific evidence. There is, as of now, nothing nearing proof of any sort of afterlife. Although reincarnation research is promising, it’s in such a primitive state that nothing it offers can survive the hard rigor science requires; and if its findings were true, they would only raise a host of new questions. Hard evidence is scant on all sides of the debate and, scientifically speaking, absolutely no one knows what happens after death.

In fact, I’d argue that the physical sciences, as they are, intrinsically cannot answer many questions of consciousness including, most likely, the matter of the afterlife. Physical science relies utterly on a certain pragmatic reduction: two physically identical objects– that is, two objects that respond identically to all objective methods of observation– are considered equivalent. Scientists need this reduction to get anything done; if carbon atoms had individual “personalities” that scientists were required to understand, one simply couldn’t comprehend organic chemistry. So scientists generally assume that a carbon atom in a specific state is perfectly identical to another carbon atom in the same physical state, and this assumption proves valid and useful throughout all of the physical sciences– biology, chemistry, and physics.

Where this reduction fails, and probably the only place where it fails, is on the question of consciousness. Let’s assume the technology exists, at some point in the future, to create an exact physical copy of a person. (It’s unlikely that this will ever be possible with sufficient precision, due to the impossibility of reproducing an arbitrary quantum state, but let’s assume otherwise for the sake of argument.) Assuming that a “spark of life” can be injected into the replica, this person is likely to be indistinguishable from the original to all observers except the copied individual and the copy themselves, who might retain separate consciousnesses. Will this newly created person, an operational biological machine at the least, have a consciousness or not? I’m agnostic on that one, and there is no scientific way of knowing, but let’s assume that the answer is “yes”, as most materialists (who believe consciousness is a byproduct of purely physical processes) would. Will he or she have the same consciousness as the original? Everything in my experience leads me to believe that the answer is no, and the vast majority of people (including most monists) would agree with me. The original and the copy would, from the point of creation, harbor separate consciousnesses (they are not physically linked) that would begin diverging immediately.

This, to me, is the fundamental strike against mind uploading and physical immortality. It may be physically possible to copy all of a body’s information, but commanding its consciousness (after destruction of the original) to bind to the copy is impossible. It’s extremely likely that humans will defeat aging by 2175, if not long before then, meaning that the first 1000-year-old will be born before the end of my natural life (ca. 2065). But I expect no success whatsoever in the endeavor of mind uploading; destruction of the whole brain will always spell the same fate that it does now: the irreversible end of one’s earthly existence. (Fifth-millennium humans are likely to confront this problem by storing their brains in extremely safe repositories, while interacting electronically and remotely with robotic “bodies” in the physical world, as well as virtual bodies in simulated worlds.) With this in mind, as well as the probable impossibility of physical immortality given the likely eventual fate of the universe, it should be obvious even to the most optimistic transhumanists what nearly all humans who have ever lived have taken, and most humans even now take, for granted: we will all die. This, of course, terrifies and depresses a lot of people, as it involves a credible threat of nonexistence and even the (very remote, in my opinion) possibility of fates far worse.

I’m going to put forward an argument that suggests that, if there is a reasonable universe, our consciousness survives death. This is an argument that, although it proves nothing and relies on an assumption (a reasonable universe) that many reject, I have not heard before. At the least, I find it viscerally convincing and interesting. Here is that argument: virtually every phenomenon humans have ever investigated turns out, in truth, to be far more fascinating than any hypothesis offered before the truth was known.

1. Math

I’m going to start this discussion in mathematics with a familiar constant: pi, or the ratio between a circle’s circumference and its diameter. Believed in Biblical times to be 3, it was later estimated with ratios like 22/7, 201/64, 3.14, or the phenomenally accurate Chinese estimate, 355/113. Archimedes, applying contemporary knowledge of geometry to the regular 96-sided polygon, painstakingly proved that this ratio was between 3 10/71 and 3 1/7, but could not determine its exact value. One can imagine this fact to be distressing. All of these estimates were known to be only approximations of this constant, but for two millennia after the constant’s definition, it was still not known whether an exact fractional representation of the number existed.
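Archimedes’ method can be replayed in a few lines. This is a sketch of the computation rather than his actual geometric procedure: starting from a hexagon and doubling the number of sides four times, the semi-perimeters of the circumscribed and inscribed polygons (related, at each doubling, by a harmonic and a geometric mean) pin pi between two bounds:

```python
from math import sqrt

# Bound pi between the semi-perimeters of circumscribed and inscribed
# regular polygons around a unit circle, doubling sides from 6 to 96.
# Each doubling: new circumscribed bound = harmonic mean of the old pair;
# new inscribed bound = geometric mean of the new and old values.
n, upper, lower = 6, 2 * sqrt(3), 3.0
while n < 96:
    upper = 2 * upper * lower / (upper + lower)
    lower = sqrt(upper * lower)
    n *= 2

print(f"{lower:.5f} < pi < {upper:.5f}")   # 3.14103 < pi < 3.14271
```

Archimedes, who had to round his rational approximations in the safe direction at every step, obtained the slightly looser 3 10/71 ≈ 3.14085 and 3 1/7 ≈ 3.14286.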

A similar quandary surrounded the square root of 2, the ratio between a square’s diagonal and the length of one of its sides, although this number’s irrationality was far easier to prove, as the Pythagoreans did sometime around the 5th century BCE. Before the Pythagorean proof of the irrationality of the square root of 2, and Cantor’s (much later) proof that an overwhelming majority of real numbers must be irrational, it was quite reasonable to expect pi to be a rational number. Before the Pythagorean discovery, the only numbers humans had ever known either were integers (whole numbers) or were, or could be, the ratio of two integers. No one knew which ratios of integers the square root of 2 or pi might be, but it must have seemed likely that they were rational, those being the only numbers humans had the language to precisely describe. Of course, it turned out that they were not, though pi’s irrationality was not proven until the 18th century, more than two millennia after the discovery of irrational numbers.
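The Pythagorean argument itself is short enough to restate; in modern notation, the standard parity proof runs:

```latex
\text{Suppose } \sqrt{2} = \frac{p}{q} \text{ with } p, q \text{ integers in lowest terms.}
\quad \Rightarrow \quad p^2 = 2q^2
\quad \Rightarrow \quad p \text{ is even, say } p = 2k
\quad \Rightarrow \quad 4k^2 = 2q^2, \text{ i.e. } q^2 = 2k^2
\quad \Rightarrow \quad q \text{ is even as well.}
```

Both p and q even contradicts the assumption of lowest terms, so no such fraction exists.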

At least some people did not want this. They desperately wished to find the ratio that was pi, and only found, in the end, that none existed. Even the advent of algebra in the first millennium CE did not make pi accessible, since the number is (again, as proved much later) transcendental: unlike the square root of 2, which can be algebraically extracted from the rationals (as the solution to x^2 – 2 = 0), pi cannot. This made life and mathematics a fair bit more difficult, and many may have met the discovery of irrational numbers with displeasure, but it certainly made mathematics far more interesting.

Pi emerges, sometimes unexpectedly, in all of mathematics. For one particularly elegant example, the infinite sum of reciprocal squares (i.e. 1/1 + 1/4 + 1/9 + 1/16 + 1/25 + …) is pi squared, divided by 6. Although no more than 100 digits of pi are needed to provide sufficient accuracy for any physical purpose, we have algorithms today that generate pi’s digits (into the billions) extremely quickly. The number may be inaccessible through ratios and algebraic expressions, but we can easily compute it to as much precision as we wish, which is more than can be said for the truly inaccessible noncomputable numbers. Still, we can’t answer some basic questions about it. Whether the digits of pi are normal (that is, whether they behave as if generated by a uniform, random source) is an open question. Strange statistical patterns (such as a paucity, or even eventual absence, possibly after 10^200 digits, of the digit 3) may exist in pi’s digits in some base, but it is utterly unknown whether any do.
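The reciprocal-squares sum is easy to check numerically; a quick sketch (the million-term cutoff is arbitrary, and the partial sum approaches pi²/6 from below at a rate of roughly 1/N):

```python
from math import pi

# Partial sum of the series 1/1 + 1/4 + 1/9 + ... compared to pi^2 / 6.
partial = sum(1.0 / k**2 for k in range(1, 1_000_001))
print(partial)        # ~1.6449331
print(pi**2 / 6)      # ~1.6449341
```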

The beauty and “interestingness” of mathematics are difficult to put into words, but I would argue that they stem from apprehension of the infinite. As soon as the concept of prime numbers existed, people must have desired a list of all of them. People are natural collectors, and the acquisitive drive must have led many to wish to “have” all the prime numbers. Using a beautiful argument that a modern high schooler could understand, Euclid proved this impossible: there are infinitely many of them. This marvelous result established that, although we cannot “reach” the infinite, we can reason about it in non-trivial ways. In my opinion, Euclid’s theorem is the birth of modern mathematics, which (even in its finite and discrete manifestations, where asymptotic behaviors and general patterns within the finite are the true objects of study, thanks to humanity’s insatiable curiosity about what comes next) is the art of reasoning about infinity.
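Euclid’s construction can even be run: given any finite list of primes, multiply them and add one; the smallest prime factor of the result cannot be on the list, since it would have to divide both the product and the product plus one. A minimal sketch:

```python
from math import prod

def smallest_prime_factor(n):
    """Return the smallest prime factor of n, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]
n = prod(primes) + 1               # 30031 = 59 * 509
p = smallest_prime_factor(n)
print(p, p in primes)              # 59 False
```

So any alleged complete list of primes generates a prime it failed to contain.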

From such findings, a magnificent number of beautiful, surprising, and awe-inspiring results followed. Cantor proved that not all infinities are equal, and that for each infinity we can define a far larger one. Later, since formal mathematics is an infinite collection of statements generated by a finitely describable set of axioms and rules, Gödel was able to show that mathematics can reason about itself, in fact proving the incompleteness of any consistent formal system capable of arithmetic: no such system can decisively prove or disprove all statements. (A byproduct of his doing so was Gödel’s embedding of a list-processing language into number theory, arguably inventing an ancestor of Lisp.) Alonzo Church and Alan Turing established analogous results regarding computation, and a consequence of their work was no less than the laying of the foundations for modern computer science.

Despite the obvious epistemological problems with “counterfactual mathematics”– pi simply could not be any other number– I’ll note that if pi had been rational, had the list of primes been finite, or had formal mathematics been complete, the world would have been a far more boring place, and much less would have been done with mathematics.

2. The physical sciences

In mathematics, people generally agree on what they know and what they don’t. If the question of the completeness of formal systems could be put in a way that made sense to a 5th-century BCE mathematician, he would admit that he had no idea whether the proposition was true or false. In the natural world, this is true among scientists, but not among people as a whole: when people don’t know something, they make up stories about it, and large groups will believe the most credible explanation put to them. For as long as humans have existed, they’ve invented explanations for natural phenomena. Those explanations have mostly been wrong and, moreover, quite frankly a bit boring.

Ancient Greeks, at least the less educated among them, seemed to believe that lightning was a bolt of fire thrown by an ill-tempered man named Zeus. Boring, and wrong. In fact, it’s an energy transfer produced by the motion of charged subatomic particles, governed by attractions and repulsions of truly immense force; an energy source that, properly harnessed, enables a host of extremely powerful technologies that are only in their infancy to this day. Interesting, and right. Before Newton, earthly objects were believed to fall because they possessed an intrinsic material property called gravity, while the heavens possessed levity and could remain aloft. Boring, and wrong. We now know that these behaviors can all be explained by a single force (gravity) that not only allows us to reason about cosmic machinery, but also admits such bizarre beasts as black holes and general relativity. Interesting, and right. Likewise, the stars were once held to be tiny holes in a giant black dome behind which a brilliant fire burned. Boring, and wrong. In fact, they’re massive nuclear-powered furnaces, born of gas clouds and gravity, that glow for billions of years, occasionally eject hot, ionized material, and sometimes die violently in a style of explosion (the supernova) of such atom-smashing force as to create chemical elements that otherwise would not exist; we require many of those elements for our own existence. Interesting, and right. Finally, it was once believed (and by some, it still is) that our bodies and those of all animals were designed from scratch and immutably fixed by a deity with specific tastes but a tendency toward slightly sadistic design. Boring, and (almost certainly) wrong. We now know that all of these species emerged from four billion years of natural selection, that an enormous number of powerful and strange animals once existed, and that accelerations of this evolutionary process began happening (in the context of an immensely complicated and frenetic terrestrial ecosystem) half a billion years ago and continue to this day. Interesting, and right.

The general pattern is this: humans invent an explanation that is the best they can come up with at the time. It turns out to be primitive, wrong, and boring. The accumulation of knowledge and the invention of superior tools allow people to discover the truth which, although it greatly offends many, ends up being far more interesting, and inevitably opens a number of fruitful questions and applications. The right answer is always more fascinating than the explanations invented in its absence, and it always opens up more questions.

3. Afterlife

I’m a deist. I do not believe in an anthropomorphic, interventionist deity, and certainly not in the villainous, supernatural characters of Bronze Age works of literature that, if taken literally, are almost certainly heavily fictional. However, I have faith in a reasonable universe. I admit that this is an extraordinary claim. For example, it’s not impossible that, in the next 5 seconds, my consciousness will, with no cause, abruptly leave my body and enter that of the man shoveling snow outside my window, or of a 7th-century Chinese peasant, or of an extraterrestrial being in another galaxy. I simply believe it will not happen. I likewise admit that it is possible that I will be suddenly eradicated by the dematerialization of every atom in my body, but I regard even asteroid strikes and random heart attacks as far more credible threats to my existence.

If we live in an unreasonable universe, we can’t reason about or know anything. In an unreasonable universe, I might exist only this second, and my memories of previous existence may be false. There is truly nothing we can know, reason about, or credibly believe in an unreasonable universe. There is just absurdity. An unreasonable universe doesn’t mean that we can’t use reason and do science– on pragmatic grounds, we can, so long as they work. An unreasonable universe merely admits the possibility that they might stop working– that, two seconds from now, the gravitational constant may begin doubling every nanosecond, collapsing our world instantly into a black hole.

Most religions posit a fundamentally unreasonable universe governed by capricious gods, but materialistic monism also establishes an unreasonable universe. Although the abiogenic origin of life and the evolution of complex organisms can be explained (and in evolution’s case, already has been adequately explained) by natural processes, the emergence of qualia, or consciousness, out of an elaborate pinball machine posits an unreasonable universe in which conscious beings pop into existence merely because a sufficiently complex physical system exists. It is deus ex machina, except with somewhat less impressive beings than gods emerging from it. Such a universe is an unreasonable one. This does not mean that materialists are wrong! It is reasonable and defensible to believe in an unreasonable universe, and moreover it is unreasonable to outright reject the possibility of an unreasonable universe, when absolutely no proof of a reasonable universe has been made available. It is faith alone that leads me to believe in a reasonable universe.

As a note on that, I acknowledge that the “reasonability” of a universe does not correlate with its plausibility, as I see it. The materialists, for all I know, could be utterly right. In fact, I find the unreasonable universe of the materialist atheist far more credible than the perverse and semi-reasonable universe of Biblical literalism from any Abrahamic religion, which has the obvious markings of people making shit up to terrify others and acquire power. If I were forced to believe in one or the other, I would take the former without hesitation.

I believe in a reasonable universe, and I find myself asking, “What happens after death?” I must admit that I don’t know. I don’t think anyone knows. The best I can do is look for patterns based on the (possibly ridiculous and wrong) assumption of a reasonable universe. However, the pattern that I’ve seen, as I’ve discussed above, is that virtually every question about the world turns out to have an answer that is far more interesting than any explanation humans have invented. It is reasonable, although not certain, that the same pattern applies to death. When the truth is revealed (as it is to all of us, unless there is no afterlife) I expect it to be far more interesting than any scenario humans have invented.

Afterlife scenarios invented by humans are always either insufferably boring, or (in the case of hells) non-boring only on account of being so terrible (but if eternal hells exist, we do not live in a reasonable universe). Materialists believe that consciousness is purely a result of physical processes and therefore annihilated upon death. Boring. “Pop” Christianity posits a harp-laden, sky-like heaven in which the dead “smile down” upon the living– a modern form of ancestor veneration wherein heaven is like a retirement community. Boring. Biblical literalists believe in a final battle between “good” (represented by a murderous, racist, ill-tempered and misogynistic deity) and “evil” in which the final outcome is already determined. Boring. Ancient Greeks believed the dead lingered in a dank cave overseen by a guy named Hades. Boring. All of these have the markings of the somewhat creative but utterly boring and unfulfilling explanations that humans invent when they don’t understand something.

I haven’t yet discussed reincarnation, some form of which I believe is the most likely afterlife scenario. It’s not boring, but “reincarnation” is not so much an answer as a proposition that raises more questions. “Reincarnation” is not a specific afterlife so much as an afterlife schema admitting a plethora of possibilities. The questions it raises include the following. What, if anything, happens between lives? Is our reincarnation progressive, as indicated by what seems like an intensifying trend of increasingly complex consciousness (and incremental improvements, over the course of history, in human existence) in this world, or is it stagnant, chaotic, cyclical, or even regressive? Does a deity assist us by lining us up with appropriate rebirths, or do we decide on our rebirths, or is the process essentially mindless? Can humans reincarnate as animals, or as beings on other planets? How atomic is the “soul”– does it carry a personal identity, as Hindus assert, or is it as much affected by the forms it takes as it affects them, as many Buddhists assert? What is the role of the impersonal, almost mechanical force known as karma? Do any deities intervene with it and, if so, how and why?

I have my beliefs, not perfectly formed, on all these matters, and I admit that they are artifacts of faith. They emerge from my (possibly ridiculous) faith in a reasonable universe and my estimate of what a reasonable universe, after death and from a vantage point where these questions might finally be answerable, might look like. I am, of course, just one human trying to figure things out. That is the best I believe any of us can do, since such animals as “prophets” and the gods they have invented almost certainly do not exist.

I’m deeply agnostic on many matters, but if asked what happens after death, or what is the meaning of life, I’d answer as follows. No one knows for sure, obviously, but I’m overwhelmingly convinced that the answer is far, far more interesting than any explanation put forth by humans thus far (including any that I could come up with).

With that, I yield the floor.

United States 4.0, or: why you should welcome American socialism

Since the American Declaration of Independence, three distinct national identities have existed. Historians have characterized them in a number of ways depending on what aspect of the nation they wish to analyze, and I will do my best to do so in a way that is economically interesting, and that can inform us about our future. These delineations have been gradual rather than sharp– it’s important to note this, because all three aspects of our history remain in our culture today– but for the sake of simplicity and analysis, I must define (somewhat arbitrary) boundaries. From 1776 to approximately 1870, we were a nation of citizens. From about 1870 to approximately 1950, we were a nation of producers. From about 1950 until now, we’ve been a nation of consumers. Perhaps for wholly coincidental reasons, each of these transitions has coincided with a difficult and violent period of history. The Civil War in the mid-19th century was one of the nation’s bloodiest episodes, and the World Wars of the 20th century were utterly catastrophic for Europe. Likewise, the American transition to its next phase will coincide with a World Revolution that, although I wish for it to be entirely peaceful– and noting the example offered by Northern European nations that have already peacefully adopted rationalistic, libertarian socialism, I believe it can be nonviolent– probably will not be so, at least not in all corners of the world where it will take place. Before discussing the American nation’s next incarnation, it’s worth discussing the advances and the ultimately fatal flaws of the three that have existed.

1. Citizen America: rational but elitist, enlightened but hypocritical.

The United States, despite the tarnished reputation it has earned on account of its hypocritical, underaccomplished and already-dying Empire, deserves one hell of a lot of credit. For all its flaws, it’s a great country, and it was one of the modern world’s first attempts, if not the first, at rational government on such a vast geographic scale. Stepping away from the unreliable leadership offered by hereditary kings and religious clerics, the nation’s architects designed a political framework with the intention of building an enlightened republic. They did not, for the most part, intend direct democracy; that concept seemed radical even to most of them. What they wanted was a nation governed by what would today be considered an aristocracy, but for the benefit of all people, in which the most educated and genteel 1 to 20 percent would be citizens, or peers, with the right to vote and the same legal status as a legislator or president.

Often it’s claimed that America’s founders would be appalled by the state of the nation today, either because its integrity has been compromised by plutocracy (as the left alleges) or because the federal government has become so expansive (as the right alleges). I disagree. As educated and rational people, they knew that even the best governmental structures can only mitigate the innate instability of popular governments. I think they would be pleasantly surprised, if not shocked, that the government they built (a) actually tried democracy, with mixed results but certainly more success than even an optimist in their time would have predicted, and (b) remained intact for over 200 years, even in the midst of true revolutions (some peaceful, such as FDR’s New Deal; others not). Nations live a long time, but governments very rarely survive even one human lifetime, much less three.

Thomas Jefferson envisioned an agrarian utopia in which farmers would plow the fields in the summer and study the classics in winter. Federalists like Alexander Hamilton wanted to use the new nation’s abundant land and natural resources to build an industrial powerhouse. Rationalistic freethinkers like Thomas Paine and Ben Franklin wanted to establish a fully secular government and a nation in which any religion, so long as it did not impose its will on others, could be honored. With far more success than a cynic would have imagined, these visions were realized and, for quite some time, worked.

The Jeffersonian notion of an agrarian utopia and the Federalists’ championing of industry deserve special mention in light of how contrary they were to the more cynical and pessimistic view of life common in much of Europe at the time. Malthus, the ultimate pessimist, argued at the 18th century’s end that the world population would reach such a state of congestion as to wreak apocalyptic conditions upon the human species. His mathematical model (which held economic growth to be linear, rather than exponential, leading to its inevitable failure to match the pace of population growth) was completely wrong, but his conclusion agreed with much of popular thought, and it would have been correct had the Industrial Revolution (already in its early stages) not accelerated. The Malthusian worldview resembled that of mercantilism, which held a zero-sum view of wealth. By contrast, self-reliant farmers and creative industrialists embody the opposite of zero-sum behavior; they add more wealth to the world than they take from it. (Although industrialists could be cruel and self-serving, their efforts were undoubtedly positive-sum, at least until externalized environmental costs became the evident and severe problem they are now.)

This “Citizen Nation” had its share of quite obvious problems. Though it championed positive-sum progressivism, it was founded on land that was stolen in a campaign of execrable violence against indigenous people. Moreover, by modern standards it was elitist to the point of repugnant hypocrisy. Black slaves were treated abysmally, and an underclass of immigrant and freed-slave workers emerged in the 19th century, especially in Northern cities. For every abolitionist, feminist or liberal wishing to expand the definition of “citizen”, there was a status-quo conservative trying to hold this pressure back. This tension, as Americans all know, resulted in the Secession Crisis and Civil War in the 19th century.

Each of these iterations of the United States has been a world-leading liberal model at its inception, and has fallen prey to reactionary conservatism toward its precarious end. Thomas Jefferson, an enlightened statesman in his day, would be a reactionary and an egregious racist by modern standards. Those who stood with him, ideologically speaking, in 1785, and had not moved by 1855, found themselves on the wrong side of history, especially on the matter of slavery. They had become like Preston Brooks– much more the father of the “Tea Party” conservative movement than any of America’s 18th-century “founding fathers”, who would have despised that movement’s religious radicalism and anti-intellectualism.

By Reconstruction, the definition of “citizen” had expanded greatly. Although not there yet, the nation was well on its way to universal suffrage. Though a positive change on balance, this also diluted the meaning of the word “citizen”. Suddenly, penniless people who were working 16 hours per day, and who were therefore, through no fault of their own, utterly ignorant of complex matters, were trusted with the vote. Although necessary from a humanitarian perspective, since the “enlightened aristocracy” could not be trusted to rule in the people’s best interests, the extension of suffrage to poorly-educated workers led, in part, to the corrupt machine politics of the Gilded Age.

2. Producer America: scientific but bellicose, industrious and cruel.

The American nation of the Gilded Age was, by historical standards as well as comparative standards of its time, very wealthy. Although this period is sometimes regarded as a national nadir, it was the point at which the average American’s standard of living began to rise at a perceptible pace. The average person’s standard of living had actually begun improving a couple of centuries before that, but so gradually that it could not be perceived from year to year or even from decade to decade; by the 19th century, this changed and progress was evident. Of course, the distribution of these gains was not merely unjust, but laughably lopsided. At the same time, ethnic tensions were growing rapidly. Political corruption was high, and toward the end of this era, the government had slouched toward plutocracy.

Thanks to the Industrial Revolution, immense gains were made in this much-maligned era. Electricity became available, and food became abundant. Petroleum was discovered, preventing an otherwise disastrous scarcity of whale oil (much like that which awaits us in a couple of decades if we do not move away from our dependence on petroleum) from upsetting economic progress. The progressive movement brought suffrage to women and made the first (disastrously failed) efforts at ending international conflict. Moreover, a nation that had once been deeply racist had been forced, by Producer America’s end circa 1950, not merely to tolerate but to embrace ethnic diversity.

An important cultural change emerged around this time. The Puritan work ethic had been replaced by a more enlightened descendant: the American work ethic. Although it may seem bizarre to associate an “American work ethic” with virtue, with the nation wracked then and now by overwork, it was in many ways a positive force. With what seemed to be the final destruction of aristocracy (before its re-emergence as corporate capitalism, with its entitled and crypto-hereditary caste of “executives” who only ape the trappings of those who work) in the United States and Europe, the goal of many people became not to establish a parasitic lifestyle in which they did not work yet had others work for them, but a productive lifestyle in which, due to the creative use of technology and social advancements, they could work and have a life of quality. Technological leverage made it possible to produce at a high level without sacrifice and toil. In essence, that is the goal of the American worker: to live a life that is of value to others, but to do so without such severe sacrifice (as most working people, historically, had to make) as to render that life miserable for the person living it.

Because its vices and crimes were like those of our much milder present Gilded Age, the faults of Producer America hardly need explanation. The poor were treated abysmally, working conditions were calamitous, and democracy gave way to plutocracy. Although the era was rich in comparison to what came before it, and idyllically peaceful at its inception compared to the horrific wars that came at its end, the Gilded Age lingers in the American imagination as a warning of what not to become (and also what we, for the past 30 years or so, have been rapidly becoming): a corrupt plutocracy. The danger, one notes, of a limited, enlightened, and libertarian government is that unelected and often unchecked corporate power may step in and become just as onerous as the monarchies, empires, and theocracies that reigned before the Age of Reason.

Producer America became a victim of its own success. Abundance of crops, which any previous historical era would have considered a wonderful problem to have, caused a severe drop in food prices, leading to rural poverty. As poverty is a cancer that, without remediation, grows until it devours a whole society, this led to the Great Depression. Following that came a peaceful and entirely legal revolution known as the New Deal, against the backdrop of some horrific and violent ones (often in the name of extreme leftism) in much of the world, and the second inning of the Great War, which destroyed the intellectual respectability of racism (although one must note that racism never should have been intellectually respectable) and exposed colonialism as a calamitous failure.

Keynesian economics was also born during this time and, with it, the recognition that it was useless to produce goods if no one could afford to buy them. Economists discovered that, contrary to the claims of conservative moralists, poverty did not “build character” or discourage laziness, but was a systemic malignancy capable of destroying an entire society. This led, logically, to the need for government to fight poverty aggressively. Also, the New Deal and the demand generated by American participation in the Second World War created, for the first time in history, a true middle class that encompassed the majority of a nation’s population. New products such as refrigerators, air conditioning, and air travel became not merely available, but available to average people shortly after their inception. Consumer America had begun.

3. Consumer America: brilliant but hollow, powerful but fragile.

Consumer America began with an era that is often considered to be the American “Golden Age”, spanning from 1945 to 1973. Although deeply flawed, especially in the treatment of women and racial minorities, this was a time of widespread economic and technological optimism within the United States. The forty-hour work week– a major victory– had been achieved by the labor movement, and children born in the 1950s believed that a 20-hour workweek would be established once they were in middle age. (Whoops!)

AMC, perhaps by an odd coincidence, is running two television shows focused on the exact high and low points (temporally speaking) of Consumer America: Mad Men and Breaking Bad, erotic and thanatoptic portrayals of an era’s bookends. The first of these shows, set in the advertising industry in the 1960s, illustrates the giddy optimism (coupled with the necessary antithesis: the sardonic cynicism of the era’s most intelligent, embodied in Don Draper) of a nation flush with new products, and in which the merely middle-class strivers of Madison Avenue can look forward to assured sunny (but perhaps a bit boring) futures. The latter drama, situated among the present day’s former middle class, revolves around the misery and severe, self-serving misanthropy of a failed genius and high-school chemistry teacher, forced by the 21st-century American pogrom (eradicated by the now more civilized Europeans) that we call “medical billing”, into methamphetamine production. In these dramas, we see not only the ascendancy (for Don Draper) and collapse (for Walter White) of brilliant, cynical, ambitious and deeply dishonest men; but those of a society itself, and that’s Consumer America.

Consumer America’s moral emptiness became apparent early on, but proved the severity of its malignancy in the 1980s with the “Reagan Revolution”. In the 1950s, it may have been boring and empty, but it was inclusive. The most selfish and uncivil elements of society wanted to come to affluence far faster than others, but they wanted everyone to get there. If for no other reason, the betterment of all would build a more stable society, and even the most selfish rich people want that. With the belligerent post hoc elitism of the yuppie era, that changed, replaced by a mean-spirited and exclusive mentality rooted in the assumption that a thing was not good unless other people could not have it. The reactionary politics that became stylish, even among the educated upper-middle classes, in the 1980s gave encouragement to the religious radicalism, the so-called “neoconservatism”, and the militant right-wing insanity that ravaged the nation in the 1990s and 2000s.

A deep moral failure of Consumer America is the worship of consumption, even at the expense of production. The claim underlying the enormous rewards given to capitalism’s winners is that they’re the most productive people, but often that’s not the case, and it’s such a thin argument that only an idiot would buy into it. Far more often, they are hyperconsumers but not high-power producers. For example, celebrity culture encourages the worship of those who already consume a great deal of others’ attention. Here is the naked absurdity of post hoc elitism: because they are able to consume enormously, these “VIPs” are held to be of higher value, and on no other basis. Then there are the “executives”, the social climbers within private-sector bureaucracies, who have managed to acquire the status of being capitalism’s high priests merely because they’ve demonstrated the acumen to become hyperconsumers. Unlike actual entrepreneurs, who actually were high-power producers, they are merely adept consumers and social climbers, no different in form or activity from their counterparts in the aristocracy of 18th-century Versailles or corrupt clergies. Meanwhile, while the hyperconsumptive are getting welfare hand-outs in the form of elitist and unnecessary tax breaks, those who actually produce things are getting hosed, in the form of outsourcing, layoffs, wages stagnant for over a decade, and a government that seems to have little regard for their interests. Those who actually work in this country are held in low regard, but the useless elite clerics of these empty gods (immortal and without character; unimprisonable and thus fearless) we call corporations make millions and run the country.

Consumer America is headed toward its fiery end. It had two high points– one in the 1950s and ’60s, the second in the late 1990s– but its inexorable decline began in 2001, when it became clear that this morally empty regime could not head off otherwise surmountable calamities. An educated and virtuous citizenry would never have allowed Bush to win re-election after he lied the nation into an illegal, unnecessary, enormously expensive, idiotic and evil war. Able and effective civil authorities, responsible for infrastructural integrity, would not have allowed a hurricane of only moderate severity (Katrina was severe at its peak, but only Category 2-3 when it hit New Orleans) to demolish a major city. Now, in the form of the Tea Party, the specter of right-wing violence has re-emerged as a frightening possibility. In 2011, it seems evident that Consumer America is about to end, abruptly and possibly violently.

As I discussed, the transition from Citizen to Producer America took place in the context of the Civil War. That from Producer to Consumer America took place during a peaceful and truly glorious revolution in the United States (the New Deal, the rise of progressive capitalism in the U.S. and, later, social democracy in Europe) but while one of history’s most horrendous wars raged abroad. It is possible that the transition out of Consumer America, which will kill corporate capitalism and humble our upper class– benign compared to historical counterparts, but exceedingly arrogant– will be peaceful. Certainly I hope for this. It could also be enormously violent, in the context of a Paris-1793-style uprising.

4. World Revolution, and American socialism

A World Revolution, launched by the Internet as well as Europe’s experiments with federalism, libertarian socialism, and Second Enlightenment humanism, is taking place. Likely to continue for 100 years, it will radically reshape the globe. For one thing, the concept of an impoverished “Third World” country will seem utterly bizarre to the children of 2125 when they learn of this notion in their history classes. The severe relationship between geography and economic well-being that exists now will (rightly) strike them as barbaric. At the World Revolution’s end, we will have a “wired” world characterized by rationalism and libertarian socialism. We may have achieved the indefinite lifespan by that point, and we are likely to be “post-scarcity” to a substantial degree. (The World Revolution is a colorful name for the frenetic stage of humanity one might call trans-scarcity.) By 2125, barring an ecological or exogenous catastrophe, both the intrinsic scarcity of primitive times and the artificial scarcity of corporate capitalism will be abolished, and being “poor” will mean having to get on a waiting list to visit the Moon. Utopia will not be accomplished, but what is achieved by that time will be closer to it than most people alive today even consider possible. Yet the question for those of us alive now, who do not expect to live so long, is: what will it be like to get there?

I’d love to believe that each chapter of the World Revolution shall be peaceful, but we’re already seeing that this is unlikely. Odds of a peaceful transition are good-to-excellent in the social democracies of Northern Europe, poor-to-fair in the United States, due to the likelihood of an authoritarian crackdown by our corporate elite, and very poor in corners of the world where violent tendencies still reign, and in which corrupt elites will wish to prevent their subjects from having access to the increasing bounty made available by technology (a tendency we already see in countries that disallow their people to use the Internet).

The first stage of trans-scarcity humanity, which began in the 1990s and could either end soon or persist for decades, has proven to be quite harsh for the United States. As I discussed, the spiral of poverty that created the Great Depression began with rural poverty, a result of agricultural plenty. The same is happening to all human labor in the United States. Human labor is gradually becoming obsolete. The ability for a well-positioned and bright person to earn a decent living in technology will exist for a few decades at least, but most people will never again be able to reliably earn a living selling labor to the market. Don’t look for that to come back; it never will. It is already at the point where one year of unpaid education or training is necessary to secure two years of paid work, and as the workplace becomes increasingly specialized, this ratio will deteriorate. The civil unrest this will create, in a society with a weak and denuded social safety net, will be immense.

The United States will eventually, as all countries must when confronted with this “problem of plenty”, wise up and choose libertarian socialism, eventually instituting a guaranteed minimum income and freely available training for what (very highly skilled) work remains necessary, but this isn’t likely to happen without a fight. Our upper classes profit enormously from the artificial scarcity imposed by corporate capitalism, and have no problem with using that system’s meltdown to increase the market price of their “protection”, establishing themselves in a similar manner to feudal Europe’s nobility. This is what they want: scarcity and fear instead of reason and plenty. In truth, they don’t care if the economy “collapses” from a middle-class perspective. The current American elite would love to see a lot of middle-class office jobs disappear outright because they really want to be able to hire white, college-educated maids at low wages.

It’s an open secret that the American corporate elite is preparing for violent unrest. The use of private mercenaries such as Blackwater/Xe Security in overseas conflicts is part of this, and the threat that these forces will cross the Rubicon is serious. Also, there is the Tea Party. This movement itself is unlikely to become a major menace. Still, this not-really-grassroots movement deserves attention for the machinations (cf. the Koch brothers) that its existence proves. The Tea Party’s main purpose was to win the 2010 election for the Republican Party, and its secondary purpose is to discourage the left-leaning and disaffected from even considering violent revolution– it reminds us (sadly, correctly) that, if there is a violent revolution, it almost certainly won’t be of a kind we would want. But there is also much experimentalism being done. Fox News may seem like a sick joke, but it’s an experiment designed to assess how much bullshit the American people will take. Data is being collected on that, and if America’s reactionary movement ever needs a Goebbels, this data will be made available to him or her.

To call these efforts of the upper classes “conspiracies” is not a stretch. There almost certainly is not “one Conspiracy to rule them all”, and organizations like the Bilderberg Group and Skull and Bones almost certainly have less power than their detractors think they do, but lower-case-c conspiracies certainly exist, and aren’t even well hidden. They don’t need to be. With “friends” (legal, above-ground, but secretive and elitist institutions) like the corporations, who needs the shadowy enemies dreamt up by conspiracy theorists? The upper classes are self-serving, greedy, inbred and socially exclusive. This makes them innately conspiratorial with absolutely no need for cloak-and-dagger secret societies or the laughably simplistic intrigues dreamt up by conspiracy theorists.

On the other hand, the American elite, devoid of any vision or purpose beyond pure greed, might allow a peaceful transition, accepting a decline in their relative status just as European nobilities did in the wake of the French Revolution. One can only hope so. There is no reason whatsoever that the transition to libertarian socialism requires violence; it is simply a sad and miserable fact that the American upper classes are likely to instigate it in order to preserve their power and social status. If they do use such violence, we have every right to respond. With 45,000 people killed every year by health insurance companies, we have been merciful to remain nonviolent as long as we have.

Despite the dark possibilities of the short-term future, once the World Revolution has resolved itself and we are into late trans-scarcity or early post-scarcity times, life will be quite excellent. The ability of an average, or even well-above-average, person to reliably “earn a living” by selling labor on the market will be gone forever, but it won’t be needed in a plentiful world with a basic income that grows increasingly generous as the decades pass.

The comfort of a post-scarcity world is self-evident, but its secondary effects will be immense, and few have predicted them. When technology removes most of the drudgery involved in large-scale efforts, and when people are relieved of the crushing, all-consuming need to do paid work (which is often servile and of little general value), the masses will be liberated to concentrate on real work. The arts, science, spirituality, community service, experimental small businesses (startups), and education will flourish as never before, and humanity will shine to a degree that seems impossible today. Levels of intellectual brilliance and creative contribution once associated with “genius” will become commonplace.

To answer the obvious question, “Who will clean the toilets?”: under libertarian socialism, people will still do such jobs, because other people will still be able to pay them to do so (manually in the early phases; technologically, later on). Libertarian socialism does not eliminate the free market; rather, it ensures a basic social safety net, then gets out of the way and allows a truly free market economy to work. Free the people, then free the markets.

Corporate capitalism in its current semi-totalitarian form, enabled by the constant need for paid (read: corporate-approved) work, places the average American in the most fluid, affluent, and benign form of slavery known to history. It is still wage slavery, though, and it will be relegated to history’s dust heap. Business corporations will continue to exist, just as religious institutions, governments, and even hereditary monarchies still do in the developed world (often in reduced and benign, or even beneficial, forms). Their existence is necessary: rational libertarian socialism must recognize the right of the individual to start a business and become prosperous, and success in business can produce a large corporation. Some people and companies will get very rich and become quite influential, far above their peers in this regard. However, the oppressive and deleterious power currently held by the American elite will no longer exist once people no longer depend on them to earn a living wage.

When one examines social and technological trends, the conclusions become obvious. Consumer America will end, perhaps with much fire and gnashing of teeth, and a socialist United States will replace it. Culturally, what will this America look like? My guess is that it will contain the best aspects of Citizen, Producer, and Consumer Americas.

In a wealthy, fair, and rational world, people can be educated, and true democracy becomes viable and stable, something that has been utterly untrue throughout most of humanity’s history because of the ignorance that economic scarcity breeds. Citizenship can come back, but in a form available to the masses rather than to an elite. In short, the education available to a truly free people will allow democracy to actually work. (There will always be some who are willfully ignorant, as Palinism establishes; our job in a post-scarcity world will be to encourage such people to treat life as a vacation, and to live well but harmlessly.)

Likewise, the virtues, but not the vices, of Producer America are likely to return in a world where the average person has economic freedom. Liberated from the need to find corporate-approved work, people will be able to pursue work at which they are actually productive, rather than enduring the pointless servility imposed on the lower classes or the insipid social climbing of the white-collar elite. Attitudes toward work will fundamentally change. Instead of being a place where the average person takes orders and is subjected to the mean-spirited infliction of stress, work will be a place where he or she contributes. In a world of material plenty, people will hunger for opportunities to produce rather than consume, consumption being so freely available and easy as to hold minimal interest, just as food is not an all-consuming focus for healthy, well-nourished people. The right to consume will not be something people fight bitterly to secure; it will be guaranteed, and people will direct their energies toward production, working out of a genuine desire to make a better world.

Finally, it should go without saying that socialist America will preserve the affluence, racial and sexual tolerance, optimism, and creativity that Consumer America had at its best. When the root evil of scarcity is eradicated, branches like racial hatred don’t stand a chance. There will always be some people with evil intentions, but their numbers are few, and only scarcity can supply them with their armies of desperate, poor, ignorant, angry, and confused people. Scarcity, in other words, lends power to the evil. They will be unable to “rabble-rouse” when there is no rabble to be roused.

Moreover, once we arrive at a post-scarcity world, if not before that, it will no longer make sense to speak of the American theater as a separate entity. The World Revolution, the victory of second-enlightenment ideals and libertarian socialism, and the resounding defeat of scarcity and corporate capitalism at the hands of technology, all will be worldwide phenomena. It is likely that, by 2200 if not 2125, the bitter, pointless, and utterly unjust poverty inflicted by accidents of birth and geography will be eradicated.

We do not need a violent fight or heroic efforts in order to arrive at a fair, post-scarcity world. Technological, historical, and economic trends are, in the long term, on the side of good. Blood does not need to run in the streets, nor do we need some stroke of enormous luck. We don’t need to be lucky, brutal, or unimaginably brilliant to overcome the (admittedly quite serious) problems facing us; we merely need to step back, take a rational approach to the problems in front of us, and proceed with some intelligence.