The Disruption Algorithm

I’ve observed that business processes tend to follow three phases.

  • innovation: an opportunity is found or a problem is solved.
  • refinement: improvements are made. The process becomes cheaper, more reliable, and generally better.
  • externalization: there are few identifiable improvements to be made. Value capture can possibly be increased, and opportunities to externalize costs are exploited.

In the refinement stage, there are still plenty of opportunities to reduce costs and improve value capture (or yield) without anyone getting hurt. A process that took six months, when performed the first time, might be reduced to two. Genuine waste is getting cut. It’s not zero-sum. However, returns diminish in the refinement stage as the process improves, and the available gains are made.

At this point, talented and creative people look for new opportunities to innovate. Average or risk-averse people look for other processes to refine, and that’s fine: we need both kinds. Vicious and malignant people, however, start making false refinements that externalize costs and risks. Pollution increases, companies become more brittle, and ethics decline. The world becomes sicker, because in the cops-and-robbers game that exists between cost externalizers and regulators, there are just far more of the former. That’s how one gets corporate processes like employee stack ranking and, in software, the bizarre cult that has grown up around two-week “sprints” (as an excuse that allows management to demand rapid but low-quality work). These appear to yield short-term gains, but externalize costs within the company, diminishing the quality of the work.

Most of the sleazy activities for which business corporations are (justifiably) despised are performed in the externalization phase, when the company starts running out of viable refinements and declines to innovate. It may not be (and probably is not) the case that true improvements cease to be possible, at this point. What seems to happen is that there is a shift in political power between those who seek genuine improvements and those who mercilessly externalize costs. Once this happens, the latter quickly and often intentionally drive out the former. They are often able to do so because genuine improvements to processes usually take time to prove themselves, while cost externalizations can be devised that show profits immediately.

It is, at this point, no longer novel to point out that venture-funded startups have ceased to back genuine technical innovation and have, instead, become a continuation of the process (starting in the 1980s) by which the staid corporations and bilateral loyalty of yesteryear have been replaced by quick gambits, degraded working conditions, and a lack of concern by these new companies for their effects on society. This is what “disruption” often looks like.

The realization that I’ve come to is that Silicon Valley, by which I mean the aggressive and immediate financialization of what purports to be technological innovation, is now deep into the third phase. It still “changes the world”, but rarely in a desirable way, and most often by externalizing costs into society in ways that regulators haven’t yet figured out how to handle.

While Silicon Valley is marketed, especially to “the talent”, as an opportunity for people to break free of old rules and take on established interests, it’s actually better thought of as a massive and opaque, but precisely tuned, genetic algorithm. The population is the space of business processes, up to the scale of whole companies. The mutation element occurs organically, due to inexperience and deflected responsibility– when rules are broken or scandals occur, a 23-year-old “founder” has plausible deniability through ignorance that a 45-year-old financier or executive wouldn’t have. The crossover component is far more destructive: corporate mergers and acquisitions. In this way, new business processes are generated in a way that is deliberately stochastic and, when seemingly cheaper processes are discovered, they’re typically snapped into existing businesses, with losses of jobs and position to occur later.

In the three-phase analysis above, Silicon Valley had its innovation phase in the 1970s and ’80s, when new ways of funding businesses, and looser attitudes toward risk, began to take hold. Business failure in good faith was destigmatized. No doubt, that was a good thing. The refinement phase began in the late 1980s, brought with it the golden age of the technology industry in the 1990s, and ended around 2005. Silicon Valley is now in the externalization phase, exemplified most strongly by Y Combinator, an incubator that openly champions the monetization of reputation, self-indulgent navel-gazing, disregard for experience (a polite way of saying “ageism”), and a loose attitude toward executive ethics. The genetic algorithm of Silicon Valley still operates but, in the Y Combinator era, it no longer produces more efficient business processes but, rather, those that are cheaper and sloppier.

The innovation and refinement phases of Silicon Valley are long gone. Cost externalization– the replacement of trusted corporations with good benefits by a “gig economy” culture of itinerancy, the pushy testing of regulations by founding billion-dollar companies that flout them, and the regression into a 21st-century Gilded Age– has replaced the old Silicon Valley and driven it out. This makes it the responsibility of the next generation’s innovators to come up with something entirely new. It is clear that Paul Graham and Y Combinator, as well as those who’ve subscribed to their culture of flash over substance, will have no place in it.

Card counters

In the world of casino gambling, card counters are legendary. Most “systems” promising to make it possible to beat casinos are ludicrous. However, blackjack, if played very well, can be winnable by the player. For every successful counter, there are probably hundreds who fail at it, because one mistake per hour can annihilate even an optimal player’s edge. Casinos love the legend of the card counter, because it encourages so many people to do it ineptly, and because there’s a lot of money to be made in selling books on the subject. They don’t love actual card counters, though. Those get “burned out”. Casinos share lists of known counters, so it’s pretty typical that a too-skillful player will be banned from all casinos at approximately the same time, and therefore lose this source of income.

There’s danger involved, as is documented in Ben Mezrich’s Bringing Down the House. In Vegas, they’ll just ban you for counting. Shadier outfits in other jurisdictions will do a lot worse. It’s important to note that card counters aren’t, in any sense of the word, cheating. They’re skilled players who’ve mastered the rules of the game and disciplined their minds well enough to keep track of what is happening; nothing less, and nothing more. Even still, they face physical intimidation.

Lousy players are money-makers, and good players are seen as costs. How damaging are good players to a casino’s economic interests? Not very, I would imagine. Card counting is legitimately hard. Don’t believe me? Try it, in a noisy environment where cards are dealt and discarded rapidly. Of course, most people know they will lose money, because gambling is a form of entertainment for them. Casinos will always make money, but it’s not enough to have 98 percent of the players be lousy ones. It’s better to select lousy players exclusively and toss the skilled ones out. “You’re too good for us. Don’t come back.”

In other words, the possibility of earning an edge through skillful play is used as a lure. Most people will never acquire such skill, and casinos can hardly be faulted for that.

Play too well, however, and you won’t have a spot at the table. Lousy players only. Sure, you can say that you beat the system. It might make for interesting discussion at a party, but your playing days are over. You’ve won, now go away.

SetBang 3: Fast(er) math in SetBang

SetBang, even when limited to the finite world, is inherently “inefficient”. We might like to live in a world where straightforward set operations can be performed quickly. Unfortunately, that’s not the case. Short programs with no obvious infinite loops can still take a long time.

Let’s restrict ourselves to hereditarily definite sets. A definite set in SetBang is one that is derived from other finite sets, and hereditarily definite means that it’s a definite set whose elements are all (hereditarily) definite sets.

There are many finite and probably-finite sets that aren’t definite. For example, if we admit this macro:

:macro magic #003<[003<[~#1?(_\_\_2>'2>,__2>_12>0)]_2>(4<~5<2>/4>'4>_,4<'4>_)]_;~22>?(__0,_2003<[003<[~#1?(_\_\_2>'2>,__2>_12>0)]_2>(4<~5<2>/4>'4>_,4<'4>_)]_;[~~03>[\3<~3<[\_2>{'"}2>]_4<[~3<~4<2>4<.3>&{'"}]_3>2>]__3<~4>~3<~4<2>4<=(;;,_2>~3<~4<2>4<-3>2>-(~{}3<~4>{~3>2>~4>?(_",__0)}\2>[\3<&2>]_;(~"4<2>-3>2>-1[~3<~4<2>4<.3>&{'"}]_[~3<~4<2>4<.3>&{'"}]_,___0),_)(_1))(_~3<~4<2>4<(03>~{}3<~{}3<-#'[\_#3<~3<~4>[\_2>{'"}2>]_~5<~6>~3<~4<2>4<=(;;,_2>~3<~4<2>4<-3>2>-(~{}3<~4>{~3>2>~4>?(_",__0)}\2>[\3<&2>]_;(~"4<2>-3>2>-1[~3<~4<2>4<.3>&{'"}]_[~3<~4<2>4<.3>&{'"}]_,___0),_)(_1))(_3<~4>6<2>/5>4<2>~3<~4<2>4<-3>2>-(~{}3<~4>{~3>2>~4>?(_",__0)}\2>[\3<&2>]_;(~"4<2>-3>2>-1[~3<~4<2>4<.3>&{'"}]_[~3<~4<2>4<.3>&{'"}]_,___0),_)3<,__3>)]_;,2>)(__1[~3<~4<2>4<.3>&{'"}]_,____00),___10)]_)

we can create these sets:

:macro mersenne ${^\_#~:magic:(_",__0)}

:macro fermat ${^^'~:magic:(_",__0)}

:macro maybebot :fermat:`5_`1; 

What does the magic macro do? Its behavior is:

... N -> ... (1 if N is prime, 0 if N is composite).

As a result, mersenne produces the set of Mersenne primes, which is possibly infinite although that remains unknown, and fermat produces that of Fermat primes, of which there are probably exactly 5 (although there may be infinitely many). The set maybebot, derived from fermat, never terminates (“bottom”) if there are only 5 Fermat primes, and is a one-element set if there are 6 or more. The last of these is a set that is trivially, provably, finite; but its derivation (ultimately from $) makes it a risk of non-termination. It’s finite, but not definite.

If we remove $ from SetBang, we can make every set not only definite but hereditarily definite (meaning that the elements are definite sets as well). We still have the potential for non-terminating computations due to [] loops, but otherwise, even if we use {}, everything will terminate.

Is SetBang still “inefficient”? Yes. So let’s talk about what an efficient SetBang would look like, and assume that we’re only working with hereditarily definite sets (i.e. there is no $). An efficient SetBang would:

  • canonicalize sets so that set equality checks (=) can be performed in bounded time as a function of the size of the set.
  • perform membership checks (?) in bounded time.
  • represent structured large sets (e.g. 8^#^, the set of all subsets of {0, …, 255}) in an efficient way that doesn’t require storing the entire set.
    • The example above has 2^256 elements. We can’t store it in its entirety but, because it’s a highly structured large set, we can easily check for membership in it.
  • perform primitive operations (namely, \ and /) in constant or logarithmic time.
  • “execute” {} comprehensions in a way (strict or lazy) that doesn’t take too much time or space upfront, but that allows the resulting sets to be used efficiently, assuming that the code inside the comprehensions runs efficiently.

Is it possible, on some physical device possibly unlike an existing computer, to make an efficient SetBang, even excluding non-definite sets by removing $? It’s extremely unlikely, as we’ll see below.

Before we do that, we note that ordinals are an extremely inefficient way to represent numbers, so let’s use binary encoding to represent them as sets of ordinals instead. (An implementation, of course, can handle small finite ordinals as efficiently as if they were constants, urelements, or near equivalently, machine integers.) So we represent the number 37 as {0, 2, 5} (where the numerals on the right-hand side are still ordinals) as opposed to {0, …, 36}.

We could go further than binary and use von Neumann indices (or “hereditary binary” representations) defined like so:

I({}) = 0

I({a, b, …}) = 2^I(a) + 2^I(b) + …

This creates a bijection between the natural numbers and the hereditarily finite sets. Letting J be I’s inverse, we’d represent 37 as:

J(37) = {J(0), J(2), J(5)} = {{}, {J(1)}, {J(0), J(2)}} = {{}, {{J(0)}}, {{}, {J(1)}}} = {{}, {{{}}}, {{},{{{}}}}}
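
As a sanity check, here’s the bijection sketched in Python, with frozensets standing in for hereditarily finite sets. This is my own illustration (the names I and J follow the definitions above), not part of the SetBang implementation:

```python
# Sketch of the index bijection between the naturals and the
# hereditarily finite sets, modeled with Python frozensets.

def I(s):
    """I({a, b, ...}) = 2^I(a) + 2^I(b) + ..."""
    return sum(2 ** I(e) for e in s)

def J(n):
    """Inverse of I: the set {J(i) : bit i of n is set}."""
    return frozenset(J(i) for i in range(n.bit_length()) if (n >> i) & 1)

print(I(J(37)))  # 37, since J(37) = {J(0), J(2), J(5)}
```

Note the multiple recursion in both directions: each call fans out over every element (or bit), which is exactly the complication discussed below.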

Why don’t we do this? In practice, I don’t consider it worth it. The multiple recursion of the index encoding and decoding complicates code considerably. Besides, an implementation can hard-code the first 256 ordinal constants and get the same efficiency (for numbers below 2^256) that we would have if we treated them as urelements. While we gain efficient storage of certain “sparse” numbers (e.g. 2^(2^(2^(2^7 + 1))) + 23) by using hereditary binary, we complicate our algorithms (by, among other things, replacing single recursion with multiple recursion, requiring unpredictable amounts of stack space) and we still can’t do a lot with them. It does not enable us to make tractable operations out of those that would otherwise be intractable (e.g. precise arithmetic on such massive numbers).

With the binary representation, we have a compact addition function:

S∈tBang> :macro addbinary [~3<~4<2>4<.3>&{'"}]_
Stack: {0, 1, 6, 3, 5} {7, 4, 5, 8}
S∈tBang> :addbinary:
Stack: {0, 1, 4, 3, 9}

We see it correctly adding 107 + 432 = 539. The way it works is by repeatedly executing the behavior:

... X Y -> ... (X xor Y) (X and Y) -> ... (X xor Y) {z + 1 : z ∈ X and Y}

until the latter set (TOS) is zero/empty. This propagates the carries in the addition, and it terminates when there are no more carries. It may not be the most efficient method, but it does the job in polynomial time (in the sizes of the sets, i.e. the bit-sizes of the numbers).
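
In integer terms, this is the familiar carry-free addition loop: xor for the digit sums, shifted and for the carries. Here’s a Python sketch of the same loop on sets of bit positions (the function name mirrors the macro; the code is my own):

```python
def addbinary(x, y):
    """Add numbers represented as sets of bit positions, mirroring
    the xor/and/shift loop of the addbinary macro."""
    x, y = set(x), set(y)
    while y:  # loop until no carries remain
        # X xor Y is the carry-free sum; X and Y, shifted up one
        # place, is the set of carries.
        x, y = x ^ y, {z + 1 for z in x & y}
    return x

# 107 = {0,1,3,5,6} plus 432 = {4,5,7,8} gives 539 = {0,1,3,4,9}
print(sorted(addbinary({0, 1, 6, 3, 5}, {7, 4, 5, 8})))  # [0, 1, 3, 4, 9]
```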

With this, we can now illustrate concretely that our dream of efficient (as defined above) SetBang is probably not possible. Consider the following code.


with sumbinary (using a loop, but guaranteed to terminate in polynomial time) defined like so:

:macro sumbinary 02>[\3<:addbinary:2>]_

and the former’s behavior is

S X -> 1 if some subset Y of X has Sum(Y) = S, 0 if else.

This is the subset-sum problem, known to be NP-complete. Ergo, if P is not NP, an efficient SetBang implementation is not possible. Even by eschewing $ and restricting ourselves to the most trivial measurement of a set (“Is it empty?”), it will always be easy to write terminating but intractable programs.
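
For concreteness, here is the decision problem being encoded, as a brute-force Python sketch (plain integers rather than bitsets; exponential in the size of X, as one expects for subset-sum):

```python
from itertools import combinations

def subset_sum(s, xs):
    """Return 1 if some subset of xs sums to s, else 0 (brute force)."""
    return int(any(sum(c) == s
                   for r in range(len(xs) + 1)
                   for c in combinations(xs, r)))

print(subset_sum(9, [3, 34, 4, 12, 5, 2]))  # 1, since 4 + 5 = 9
```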

In short, one could use laziness to make the {} operator “efficient” and simply not do gnarly computations until needed, and there’s room for cleverness around equality and membership checks (which often don’t require computation of the entire set) but even an empty-check or a \ creates a risk, even in the definite world, of exponential-or-worse time cost.

These results shouldn’t be surprising. Power sets are structured and one can therefore check quickly that, for example, {{{{5}}}} ∈ P(P(P(P(N)))), but they’re still very big. It’s possible, in principle, that SetBang could be implemented in such a way that a large number of complex queries happen efficiently. It’s just not possible, even with $ and [] taken out, to make SetBang fast in all cases. The language is too powerful.

Although there is much FUD around programming languages and speed, SetBang is an example of a language that is literally “not fast”.

Fast(er) math

On GitHub, I’ve included a library of faster math functions in resources/fastmath.sbg. These operate on numbers that have been converted to binary, like so:

Stack: 37
S∈tBang> :tobinary:
Stack: {0, 2, 5}

If you actually want to convert such numbers back into the more familiar ordinals, you can use frombinary. We’ve seen addbinary, and there isn’t much more to the other arithmetic operators: they’re standard “school book” implementations of subtraction, multiplication, and division-with-remainder on binary numbers. They improve the performance of our arithmetic considerably.
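
The conversions are just a change of representation between an ordinal and its set of bit positions. In Python terms, what tobinary and frombinary compute looks like this (the Python names are mine, mirroring the macros):

```python
def tobinary(n):
    """Ordinal -> set of bit positions, e.g. 37 -> {0, 2, 5}."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

def frombinary(bits):
    """Set of bit positions -> ordinal."""
    return sum(1 << i for i in bits)

print(sorted(tobinary(37)))   # [0, 2, 5]
print(frombinary({0, 2, 5}))  # 37
```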

Recall that we previously had an implementation that would take 21 million years to determine that 65,537 was prime. We can now do it in about 30 seconds.

S∈tBang> 00/9'''''''/
Stack: {0, 16}
S∈tBang> :primebinary:
Stack: 1

That’s still a few hundred thousand (if not a few million) times slower than, say, trial division in C, but it’s an improvement over what it was.

Here’s a program that (quasi-)efficiently puts the set of all primes on the stack:

S∈tBang> ${~:tobinary::primebinary:(_",__0)}
Stack: {2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53, ...}

In fact, full SetBang allows one to put nearly any describable mathematical object on the stack. You can’t always do much with it, of course. Depending on what it is, you might not be able to do anything with it, as it could be:

{n where n is a Gödel number for a proof of theorem T}

which might equal {} when T is provably false (assuming a much smarter implementation of SetBang) but is “bottom” when T is unprovable.

SetBang 2: SetBang programming

In Part 1, I introduced SetBang, an esoteric language based on set theory. Here, I’ll write some SetBang programs, and show that it’s Turing Complete.

First, let’s talk about the “mystery operator” that I introduced: *, which is identical to:

(~#1=(_{}{}~,_~\2>\2>_.2>\2>\2>_&{}2>{}),0)

Remember that 2> is a swap sequence with behavior ... X Y -> ... Y X. You’ll see it a lot.

What does this * do? Let’s take it apart. It’s a conditional, noting the outer (), and the else-clause is simple: 0. So its behavior is:

... 0 -> ... 0 0.

In the then-case, we ~# TOS and equality-check against 1, and then go into another conditional based on that result. So, if TOS is a 1-element set, we execute _{}{}~. The _ eats the boolean produced by the =:

... {e} 1 -> ... {e}

and then we do two {}s and a ~, leaving us with:

... U(e) U(e).

In the else-case of the inner conditional, we have at least two elements.

... {e, f, ...} 0.

We _ the boolean and ~ the set {e, f, ...}, then use \2>\2> to extract its elements (remember that 2> is a swap), then _ the remainder set.

... {e, f, ...} e f

We use . to compute the exclusive-or of e and f, and 2> it out of the way. Then we repeat the process using & for the intersection.

... (e . f) (e & f)

With the {} and swaps, we end up at:

... U(e & f) U(e . f)

It’s probably not clear what this does, so let’s look at how it operates when e = {x} and f = {x, y}:

... 0 -> ... 0 0

... {{x}} -> ... x x

... {{x}, {x, y}} -> ... U({x} & {x, y}) U({x} . {x, y}) = ... x y

Ordered pairs

The set-theoretic ordered pair (x, y) is represented by {{x}, {x, y}}. The % operator builds ordered pairs and the * command destructures them. If they weren’t provided, they could be built from the other operators; they exist largely to make the language more convenient.

Substitutions like this can, again, be tested at the SetBang REPL like so:

S∈tBang> :test %* %(~#1=(_{}{}~,_~\2>\2>_.2>\2>\2>_&{}2>{}),0)
............... All tests passed.

We prefix both expressions with % because we only care about getting identical actions on ordered pairs. (The behavior of * is undefined on sets that aren’t either ordered pairs or empty.)
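
To make the pairing concrete, here’s a Python model of % and * over frozensets. This is my own sketch (it uses the same intersection and symmetric-difference trick as the * expansion above), not the SetBang implementation:

```python
def pair(x, y):
    """The % operator: Kuratowski pair (x, y) = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def unpair(p):
    """The * operator: destructure a pair; {} yields two copies of {}."""
    if not p:
        return p, p                # ... 0 -> ... 0 0
    elems = list(p)
    if len(elems) == 1:            # x == y, so p = {{x}}
        (x,) = elems[0]
        return x, x
    e, f = elems
    (x,) = e & f                   # U(e & f): the intersection is {x}
    (y,) = e ^ f                   # U(e . f): the symmetric difference is {y}
    return x, y

print(unpair(pair(1, 2)))  # (1, 2)
```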

Using ordered pairs, we can build up linked lists, using {} for nil and (x, y), as described above, for cons cells. We’ll use this in proving that SetBang is Turing Complete.


SetBang can do arithmetic. Here’s a predecessor macro:

S∈tBang> :macro pred \_#
Stack: {7} {5}
S∈tBang> 6 :pred:
Stack: {7} {5} 5

Here are imperative addition and subtraction functions:

:macro swap 2>

:macro impPlus [:swap:':swap::pred:]_

:macro impMinus [:swap::pred::swap::pred:]_

with the caveat that minus is interpreted to mean limited subtraction (returning 0 when conventional subtraction would yield a negative number). These work, but we have the tools to do better. These macros, after all, use imperative loops. They’re not purely functional and they don’t invoke set theory.

We can get a truer, purer minus using -#, e.g. ... 7 4 -> ... {4, 6, 5} -> ... 3.
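
In Python terms, with the number n standing for the von Neumann ordinal {0, …, n-1}, the -# trick is set difference followed by cardinality (a sketch of the idea, not the SetBang implementation):

```python
def minus(n, k):
    """Limited subtraction: cardinality of {0..n-1} less {0..k-1}."""
    return len(set(range(n)) - set(range(k)))

print(minus(7, 4))  # 3, via the difference {4, 5, 6}
print(minus(4, 7))  # 0: limited subtraction bottoms out at zero
```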

How do we get a purer addition function? This can be achieved using ordered pairs:

:macro plus 02>{2>~3>%"}3>_12>{2>~3>%"}2>_|#

How does it work? It uses {}-comprehensions to map over each set:

  • X -> {(0, x) for all x ∈ X}
  • Y -> {(1, y) for all y ∈ Y}

and then a union, followed by a cardinality check, can be used to perform the addition.
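
This is the standard disjoint-union construction from set theory. A Python sketch of the same idea, with ordinals modeled as range-sets and the tags 0 and 1 as in the comprehensions above (names are mine):

```python
def plus(x_set, y_set):
    """Addition as the cardinality of a tagged (disjoint) union."""
    tagged = {(0, x) for x in x_set} | {(1, y) for y in y_set}
    return len(tagged)

# 3 + 4, with a number n modeled as the ordinal {0, ..., n-1}
print(plus(set(range(3)), set(range(4))))  # 7
```

The tags matter: without them, the union {0, 1, 2} ∪ {0, 1, 2, 3} would collapse shared elements and undercount.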

Cartesian products aren’t very hard to write either.

:macro prod {2>{2>%"}};

It has a set comprehension within another one. That’s not a problem. Let’s look at how it works. Starting from ... X Y, we go immediately into a loop over the Y-elements. We do a swap and start a loop on the X-elements, and have ... y x. The 2> is a swap, % makes the pair, and " puts it in a set.

The behavior of the inner comprehension is ... y X -> ... y {(x, y) for all x ∈ X}.

One might expect, then, that the behavior of the outer loop should be ... X Y -> ... (X x Y). It’s close. The thing to remember, though, is that side effects that would be expected to move (and destroy) the X never happen: the stack state that exists when a {} comprehension is entered is reused for each “iteration” of the comprehension.

Thus, it’s not possible to populate the stack with a set’s elements using a program like {~"}. If you want to do that, you have to use the imperative []-loop, as in this program: [\2>]_, which iteratively applies \ to a set, and then deletes the set when it’s empty.

To multiply numbers, there’s a simple program that does the job:

:prod:#, which macroexpands to {2>{2>%"}};#.
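
A Python analogue of the nested comprehension, with multiplication as the cardinality of the product (the names are mine, mirroring the macros):

```python
def prod(x_set, y_set):
    """Cartesian product via a comprehension within a comprehension."""
    return {(x, y) for y in y_set for x in x_set}

def times(n, k):
    """Multiplication as #(n x k), ordinals modeled as range-sets."""
    return len(prod(set(range(n)), set(range(k))))

print(times(6, 7))  # 42
```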

Can we divide? Yes, we can. Here’s one way to do it. It turns out to be unbearably slow, but it is mathematically correct:

:macro quot 2>~'3<2>~{3<:times:"}&\_#

Why is it slow? It gives us the following set-theoretic definition of division:

n quot k = #(n’ ∩ k*n’) – 1, e.g. 19 div 5 = #({0, … 19} ∩ {0, 5, 10, 15…}) – 1 = 4 – 1 = 3

Unfortunately, it’s O(n^3)– worse yet, not in the size of the number, but in the number itself. This is not an efficient division algorithm.
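
The definition itself is easy to check in Python. This is a deliberately brute-force sketch that mirrors the set-theoretic formula, not an efficient algorithm:

```python
def quot(n, k):
    """n quot k = #(n' intersect {k*i : i in n'}) - 1, with n' = {0,...,n}."""
    n_succ = set(range(n + 1))           # the ordinal n' = n + 1
    multiples = {k * i for i in n_succ}  # k * n'
    return len(n_succ & multiples) - 1

print(quot(19, 5))  # 3, via the intersection {0, 5, 10, 15}
```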

In fact, SetBang carries a persistent danger of inefficiency. Why is that? Well, let’s consider what hereditarily finite sets are: rose trees whose nodes contain no information. In Haskell, this could be implemented as follows.

data Set = Empty | Node [Set]

or, equivalently:

RoseTree (), where

data RoseTree a = Leaf a | Branch [RoseTree a]

An implementation that uses shared data (and, as mine does, exploits numeric constants) is required; otherwise, you’ll have exponential storage just to represent the natural numbers. Given that there is no control over choice order (it’s implementation-defined) it is hard to stamp out the risk of unexpected exponential behavior completely (although we will not observe it in any example here).

One way to make arithmetic faster would be to use a different encoding than the ordinals. One candidate would be to use bit sets (e.g. 23 = {0, 1, 2, 4}) and write the arithmetical operators on those (as well as conversions both ways). Another would be to use von Neumann indices, where a hereditarily finite set’s index is computed as:

I({}) = 0

I({a, b, …}) = 2^I(a) + 2^I(b) + …

This function I is relatively easy to invert (call its inverse J). For example, we’d represent the number 11 not with {0, 1, …, 10} but with:

J(11) = {J(0), J(1), J(3)}

J(3) = {J(0), J(1)}, J(1) = {J(0)}, J(0) = {}, ergo:

J(11) = {{}, {{}}, {{}, {{}}}}

These sets are far more compact than the ordinals for the same numbers. Arithmetic could be construed to operate on numbers represented in this way, and would then take on a flavor of (much more efficient) binary arithmetic. We won’t be doing that here, though: it’s far too practical.

You can test for primality in SetBang:

:macro not 0=

:macro divides ~3<2>{2>:times:"};2>?

:macro prime ~2-{2>:divides:}:not:

Is it fast? No. It’s horribly inefficient– in the current implementation, it’s O(n^4), and takes 10 seconds to figure out that 23 is prime, so we can expect it to take a couple of days on 257, and 21 million years on 65,537– but it’s correct.
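
Restated as a Python sketch for n ≥ 2 (divides checks membership in a set of multiples, in the same spirit as the macro; this is brute force on purpose):

```python
def divides(d, n):
    """d divides n, checked as membership: n in {d*i : i <= n}."""
    return n in {d * i for i in range(n + 1)}

def prime(n):
    """1 if no d in {2, ..., n-1} divides n, else 0 (assumes n >= 2)."""
    return int(not any(divides(d, n) for d in range(2, n)))

print(prime(23))  # 1
```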

If one wished to put the entire set of primes (lazily evaluated) on the stack, one could do so:


which, in its full macroexpanded glory, becomes:


I don’t recommend doing this. If you’re interested in working with large (6+ bit) prime numbers, I recommend representing the numbers in a more efficient way. Of course, that removes from SetBang its delicious impracticality, and suggests that one might use other languages altogether when one needs to work with large primes like 47.

The fact that there are five {}-comprehensions in the macro-less form above suggests that it is O(n^5) to compute the nth prime, and that’s about correct. (It’s slightly worse; because the nth prime is approximately n*log(n), it’s O(n^5*(log n)^4).) This might be one of the few ways in which SetBang is readable: nested {} comprehensions give you an intuitive sense of how cataclysmically inefficient your number-theory code is. And remember that n here is the number itself, and not the size (in, say, bits or digits) of the number.

Data structures

This language obviously isn’t the best choice for number theory. With % and * we have the machinery to build up linked lists, so let’s do that.

Let’s write some macros.

:macro BIG 3^^\_#

:macro u *2>~:BIG:?(__:BIG:,_')2>%

:macro d *2>\_#2>%

:macro l %

:macro r *

:macro p *2>~!2>*

:macro g *2>_@2>*

:macro w *2>[2>%

:macro x *2>]2>%

What do these do? Well, the first one, BIG, simply puts the number 255 (2^(2^3) – 1) on the stack. That’s the maximum value of a single unsigned byte.

Except for l, the remaining macros assume that TOS will be an ordered pair or {}, so let’s consider what might make that always true, in light of the other operators. To take note of an edge case, remember that * destructures an ordered pair, except at 0 = {}, where it behaves as ... 0 -> ... 0 0. This might suggest that these macros are intended for an environment in which TOS is always a linked list (possibly {}). That would be the correct intuition, and we can understand the first six macros in terms of their effects on the stack when TOS is a linked list.

  • u : ... (h, t) -> ... (h', t) where h' = min(h + 1, 255)
  • d : ... (h, t) -> ... (h*, t) where h* = max(h - 1, 0)
  • l : ... a (h, t) -> ... (a, (h, t)) and | (h, t) -> | (0, (h, t))
  • r : ... (h, t) -> ... h t and ... 0 -> 0 0
  • p : ... (h, t) -> ... (h, t) with #h printed to console
  • g : ... (_, t) -> ... (c, t) where c is read from console

It’s worth noting the behavior of l and r in edge cases. The edge case of l is when the stack is deficient, noting that % demands 2 arguments. Because SetBang left-fills with {}’s, the behavior is to add a {} to the linked list at TOS. The edge case of r is when TOS is 0, in which case we end up with another 0.

The remaining two macros, w and x, might look a little bit odd. Standing alone, neither is legal code. That’s OK, though. SetBang doesn’t require that macros expand to legal code and, as long as ws and xs are balanced, it will generate legal code. So let’s consider the expansion of wSx, where S is a string of SetBang code. The behavior of wSx, then, is a SetBang []-loop, but with the head of TOS (rather than TOS itself) being used to make the determination about whether to continue the loop.

We can now prove that SetBang is Turing Complete. Brainfuck is Turing Complete, and we can translate any Brainfuck program to a SetBang program as follows:

  • Start with a 0,
  • replace all instances of +, -, >, <, ., ,, [, and ] with :u:, :d:, :l:, :r:, :p:, :g:, :w:, and :x:, respectively. 

Therefore, if for some reason you wish not to write in SetBang, you can always write your program in Brainfuck and transliterate it to SetBang!
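
The transliteration is a straight character-for-character substitution. A Python sketch (the helper name is mine):

```python
BF_TO_SETBANG = {'+': ':u:', '-': ':d:', '>': ':l:', '<': ':r:',
                 '.': ':p:', ',': ':g:', '[': ':w:', ']': ':x:'}

def bf_to_setbang(bf_program):
    """Transliterate Brainfuck to SetBang, prepending the initial 0
    that seeds the tape; non-command characters are dropped."""
    return '0' + ''.join(BF_TO_SETBANG.get(c, '') for c in bf_program)

print(bf_to_setbang('+[-]'))  # 0:u::w::d::x:
```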

This proves that SetBang is Turing Complete, but that shouldn’t surprise us. It’s a fairly complex language, using every punctuation mark on the keyboard as a command. Powerful commands like % and * feel like cheating, and clearly the numerical commands aren’t all necessary: we can always write 0'''' instead of 4, for example.

So how much can we cut and still have a usable language? Which operators are necessary, and which ones can we do away with? And as we cut away more and more from the language, what does the code end up looking like? This is what we’ll focus on in Part 3.


SetBang 1: Toward a more unreadable esolang

SetBang (or S∈tBang) is a language based on the foundation of all mathematics: set theory. It’s esoteric by design, so I wouldn’t use it for anything where performance, readability, or programmer sanity were requirements. However, it’s probably great for job security.

Getting SetBang

You’ll need Leiningen (Clojure’s package manager) to run SetBang. SetBang lives on this repository.

“Hello, World!” in SetBang

The first thing one does in any programming language is write “Hello, World!”. In SetBang, that can be achieved as follows.

  1. Open a new file called helloworld.sbg.
  2. In that file, type in the following short program: 6^#~8[2>'2>\_#]_!5^#[2>'2>\_#]_~5[2>'2>\_#]_!~9'''  [2>'2>\_#]_~~!!3[2>'2>\_#]_~!5^#~9'''[2>'2>\_#]_!!~8    ][2>'2>\_#]_!~!~3[2>'2>\_#]_!~3[2>\_#2>\_#]_!9''[2>\_#2>\_#]_!5^#'!9'!___
  3. Run ./setbang helloworld.sbg at the command line.

Your output should be:

Hello, world!

If you don’t feel like typing that, the program is provided at resources/hello_nomacro.sbg.

It may not be clear, from the code above, what SetBang is doing. In fact, that particular program doesn’t use any set theory at all. Don’t worry: both the language and the theory behind it will become clearer over time.

The Stack

Every SetBang program runs with a stack. This is the most accessible form of state in SetBang. To see how it works, run ./setbang at the command line. Turn :numeric (a configuration option) off like so:

S∈tBang> :numeric off

Right now, the stack is empty. We introduce the first fundamental operator of SetBang: 0. It’s a program that pushes a value, the empty set, on the stack.

S∈tBang> 0
Stack: {}
S∈tBang> 000
Stack: {} {} {} {}

We’ll describe this as 0 having the behavior ... -> ... 0. The program 000 has the behavior ... -> ... {} {} {}. It puts 3 empty sets on the stack.

Empty sets aren’t terribly exciting, so we’ll use another command, /, whose behavior is ... X e -> ... (X ∪ {e}).

S∈tBang> /
Stack: {} {} {{}}
S∈tBang> /
Stack: {} {{{}}}
S∈tBang> /
Stack: {{{{}}}}

If you want to drop the top-of-stack (“TOS”), use _, whose behavior is ... X -> .... You can duplicate TOS with ~, whose behavior is ... X -> ... X X.

S∈tBang> _
Stack: {} {}
S∈tBang> /
Stack: {{}}

There’s an additional command ', which behaves identically to ~/. Its behavior is ... X -> ... X ∪ {X}. It’s not strictly necessary, but it can make SetBang coding easier, so it’s included.

S∈tBang> _
S∈tBang> 0'
Stack: {{}}
S∈tBang> ~/
Stack: {{}{{}}}
S∈tBang> '
Stack: {{}{{}}{{}{{}}}}

One might be tempted to ask: so what are these things sets of? That’s the beauty of SetBang: just sets! The only values in SetBang are sets. It’s the ultimate unityped language, as there are no urelements at all. SetBang lives strictly within the von Neumann universe.

Of course, this can get unwieldy. For example, execute the program  ''' on the stack above, and you’ll get:

S∈tBang> '''
Stack: {{}{{}}{{}{{}}{{}{{}}{{}{{}}}}{{}{{}}}}{{}{{}}{{}{{}}}}{{}{{}}}{{}{{}}{{}{{}}{{}{{}}{{}{{}}}}{{}{{}}}}{{}{{}}{{}{{}}}}{{}{{}}}}}

That’s not very human-readable. It turns out that a certain class of sets gets a lot of use: the natural numbers, defined like so:

  • {} is a natural number and often called “0”.
  • if n is a natural number, then n ∪ {n}, also known as n’ (or S(n) or n+1), is a natural number.
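This inductive definition is easy to model outside SetBang. Here is a rough Python sketch, using frozensets as the only value type (so there are no urelements, just as in SetBang); the names succ and to_int are mine, not part of the language:

```python
# Von Neumann naturals as frozensets: 0 = {}, n' = n ∪ {n}.

EMPTY = frozenset()                 # 0 = {}

def succ(n):
    """The successor n' = n ∪ {n}."""
    return n | frozenset({n})

def to_int(n):
    """Recover the familiar numeral: the natural n has exactly n elements."""
    return len(n)

three = succ(succ(succ(EMPTY)))     # {0, 1, 2}
```

Note that to_int is exactly what the :numeric display option computes: a von Neumann natural is a set with that many elements.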

If you want your natural numbers visualized in the more familiar notation, use :numeric on.

S∈tBang> :numeric on
Stack: 6
S∈tBang> '''
Stack: 9

Because SetBang is unityped, we’re going to use 1 = {0} = {{}} as synonymous with “True” and 0 = {} as synonymous with “False”.

There are commands 1, 2, … , 9 for putting some of the more commonly used natural numbers on the stack. However, don’t be fooled into thinking of them as values. Values don’t exist in SetBang, and there are no literals. They’re commands that do specific things, and 10 does not place the number 10 on the stack, because it’s a 1 followed by a 0. Instead, it places a {{}} (“1”) and a {} (“0”) on the stack. We’ll discuss tools for creating large numbers later.

S∈tBang> 10
Stack: 9 1 0

The other numerical commands aren’t necessary to write SetBang, of course. One can always use 0''' instead of 3, for example.

Another operator of interest is the cardinality operator, #, whose behavior is ... X -> ... #X where #X is the size of X. Let’s build up the set {2, 3, 5, 7}:

S∈tBang> 02/3/5/7/
Stack: 9 1 0 {7, 3, 2, 5}

We use # and we get back 4. Then, to show that it’s identical to {0, 1, 2, 3}, we use ?, whose behavior is ... X e -> ... (1 if e ∈ X, 0 otherwise). That tells us that 2 ∈ 4, as we expect.

S∈tBang> #
Stack: 9 1 0 4
S∈tBang> 2?
Stack: 9 1 0 1
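Since a von Neumann natural n is exactly {0, 1, …, n-1}, membership doubles as less-than on naturals: m ∈ n precisely when m < n. A quick check in Python (frozensets standing in for SetBang’s sets; the helper name is mine):

```python
def von_neumann(k):
    """Build the von Neumann natural k = {0, 1, ..., k-1} as a frozenset."""
    n = frozenset()                  # 0 = {}
    for _ in range(k):
        n = n | frozenset({n})       # successor: n' = n ∪ {n}
    return n

# m ∈ n exactly when m < n; e.g. 2 ∈ 4, as at the REPL above.
two_in_four = von_neumann(2) in von_neumann(4)
```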

Crucial in stack manipulation are the > and < commands, which behave as follows.
> : ... X1 X2 ... Xk K -> ... Xk X1 ... Xk-1 where k = #K (i.e. K's cardinality).

< : ... X1 X2 ... Xk K -> ... X2 X3 ... Xk X1

Thus, for example, > would have the behaviors:

... X Y 2 -> ... Y X

... X Y Z 3 -> ... Z X Y

... X Y Z W 4 -> ... W X Y Z

The 2> pattern is commonly used to swap the top two values on the stack and, of course, 2< has the same effect.

S∈tBang> 2>
Stack: 9 1 1 0
S∈tBang> 3<
Stack: 9 1 0 1
S∈tBang> 4<
Stack: 1 0 1 9
S∈tBang> 7 3
Stack: 1 0 1 9 7 3
S∈tBang> 6>6>5<
Stack: 7 1 0 1 9 3
S∈tBang> >
Stack: 7 1 9 0 1

There’s a command ; whose behavior is ... X Y -> ... Y. If it weren’t provided, it could be simulated with 2>_.
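To make the bookkeeping concrete, here is a rough Python model of these stack commands, with a list whose end is the top of stack. Taking the rotation count k as a plain integer, rather than popping a set K and using its cardinality, is a simplification of mine:

```python
def rot_down(stack, k):
    """'k>' : ... X1 .. Xk -> ... Xk X1 .. Xk-1 (bury TOS k-1 places down)."""
    x = stack.pop()
    stack.insert(len(stack) - (k - 1), x)

def rot_up(stack, k):
    """'k<' : ... X1 .. Xk -> ... X2 .. Xk X1 (lift the deepest of the top k to TOS)."""
    stack.append(stack.pop(len(stack) - k))

def swap_drop(stack):
    """';' : ... X Y -> ... Y, the same effect as 2>_."""
    rot_down(stack, 2)
    stack.pop()
```

For k = 2, both rotations coincide with a swap, matching the remark about 2> and 2< above.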

For one last thing before getting into the theory that motivated the language, I’ll introduce conditional execution. The block (code) executes the program code if and only if TOS is non-empty, and (then,else) will execute the program then if TOS is non-empty and else if TOS is empty.

S∈tBang> (___34,9)
Stack: 7 1 3 4
S∈tBang> 0(9,___)
Stack: 7 1
S∈tBang> (80)(9)
Stack: 7 1 8 0

How much can we do with what we have so far? Surprisingly, a lot. However, let’s motivate the language with some set theory by introducing more operators, those justified by the ZFC system of set theory.

ZFC and SetBang

I won’t reiterate the axioms, but I’ll explain what they mean and what they contribute to SetBang.


The Axiom of Extensionality defines what it is for sets to be the same. Two sets are equal if they contain the same elements. If they are equal, they’re members of the same sets. This also implies the nonexistence of urelements by stating that there’s only one thing (namely, the empty set) that has no members.

This is realized in SetBang with the operator =. Its behavior is:

... X Y -> ... (1 if X = Y, 0 if X /= Y)

There is one caveat: equality checking can take a long time. Specifically, it can take infinitely long, as we’ll see when we deal with infinite sets. It behaves well on finite sets, though.

S∈tBang> 02/3/5/7/ 4
Stack: {7, 3, 2, 5} 4
S∈tBang> =
Stack: 0

Of course, {7, 3, 2, 5} is not equal to 4, so this returns false/0. Equality is independent of ordering, hence:

S∈tBang> _
S∈tBang> 02/3/5/7/ 07/3/2/5/
Stack: {7, 3, 2, 5} {7, 3, 2, 5}
S∈tBang> =
Stack: 1


Pairing is the quintessential building axiom in ZFC set theory. It says that for any sets x, y, there is also a set {x, y} containing exactly them (and no other elements). Pairing is implemented using the + command, with behavior:

... X Y -> ... {X, Y}.

For user convenience, we have some related commands:

" : ... X -> {X}

% : ... X Y -> {{X}, {X, Y}}

Both of these can be defined in terms of +:

" is ~+

% is 2>~~+3>++

The latter of these may be hard to believe. One can work it out at the REPL.

S∈tBang> 3 8 %
Stack: {{3}, {3, 8}}
S∈tBang> 3 8 2>~~+3>++
Stack: {{3}, {3, 8}} {{3}, {3, 8}}
S∈tBang> =
Stack: 1
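One can also replay the claim outside the REPL. The following Python sketch (frozensets standing in for SetBang sets; helper names are mine) steps through 2>~~+3>++ on a stack holding x and y, and compares the result with the Kuratowski pair {{x}, {x, y}} that % is claimed to build:

```python
def pair(stack):              # '+' : ... X Y -> ... {X, Y}
    y, x = stack.pop(), stack.pop()
    stack.append(frozenset({x, y}))

def dup(stack):               # '~' : ... X -> ... X X
    stack.append(stack[-1])

def bury(stack, k):           # 'k>' : bury TOS k-1 places down
    x = stack.pop()
    stack.insert(len(stack) - (k - 1), x)

x, y = frozenset(), frozenset({frozenset()})    # 0 and 1, say
stack = [x, y]
bury(stack, 2)                # 2>
dup(stack); dup(stack)        # ~~
pair(stack)                   # +
bury(stack, 3)                # 3>
pair(stack); pair(stack)      # ++
kuratowski = frozenset({frozenset({x}), frozenset({x, y})})
```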

For more assurance, one can try it out using QuickCheck-like generative testing, via the :test directive.

S∈tBang> :test % 2>~~+3>++
............... All tests passed.
Stack: 1

This gives us a high degree of confidence that the programs behave identically. To see a case of failure, let’s do one that ought to fail:

S∈tBang> :test #62>? _1
Test #9 FAILED!

Stack was: {{{0, 3}, 2}, 4} {4, {0, 5}} {7, 2} 9
Program #1: #62>?
Result #1 Stack: {{{0, 3}, 2}, 4} {4, {0, 5}} {7, 2} 0

Program #2: _1
Result #2 Stack: {{{0, 3}, 2}, 4} {4, {0, 5}} {7, 2} 1

Noting that 2> is a swap and that ? is comparison on natural numbers, the behavior of #62>? is:

... X -> ... (1 if #X < 6, 0 if #X ≥ 6)

This is why the first nine tests (numbering starts at zero, of course) pass: that program is identical to _1 in the simpler cases. Test #9 fails. Although the complexity parameter is hard-coded right now, I plan to change that.

If you want to test something against the empty program, use :test with one argument, like so.

S∈tBang> :test ~_
............... All tests passed.

Note that ~_ is a no-op; it duplicates TOS, then drops it.


The Axiom of Infinity provides the ZFC system with an infinite set, ω, defined such that:

  • 0 = {} ∈ ω.
  • if x ∈ ω, then x’ = x ∪ {x} ∈ ω.

In SetBang, you can put infinite sets on the stack. The $ command achieves that. It places, on the stack, the set ω = {0, 1, 2, 3, …}.

S∈tBang> $
Stack: {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...}
S∈tBang> $
Stack: {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...} {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...}

Much larger infinities can be approached in SetBang. Of course, computers are finite, so doing any work that requires the entire set is impossible. We can still use it, because it’s realized lazily. Let’s generate the natural number 512 using 9^# (to be explained later) and do a membership check in that infinite set.

S∈tBang> $
Stack: {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...}
S∈tBang> 9^#
Stack: {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...} 512
S∈tBang> ?
Stack: 1

It’s there. Now, here’s a bit of bad news. In SetBang, ? and = are extremely “dumb”. If you use ? to check for membership in an infinite set and what you’re looking for isn’t there, SetBang will search forever. (It exits quickly in the “true” case.) If you’re comparing sets for equality, SetBang will keep working until it finds a difference, and that can only happen if one set is finite. SetBang will quickly resolve $0= (they’re not equal) but $$=, while clearly “true” (ω = ω), will never terminate.

Why is this? Determining equality of sets, at the infinite level, is undecidable in the general case. There are tricks that could be used to make sets “smarter”, but I’ve chosen to make SetBang (designed to be an esoteric language, and not a practical tool) predictably dumb instead of unpredictably smart. We’ll show ways to get around this, later on.

Specification, Replacement, and Union

The Axiom Schema of Specification says that any describable subset of a set is also a set. As specified, it refers to logic formulas (free in one variable) like

(∃y)(∃z) (y∈x)⋀(z∈x)⋀~(y=z)

which is free in x and corresponds to “x has at least 2 elements.” SetBang doesn’t use symbolic logic per se, but it uses code objects (to be described) in their stead. Since it allows the pulling of subsets, it’s the filter (as in map, filter, and reduce) of set theory.

The Axiom Schema of Replacement says that the image of a set under a function is also a set. This is the map of set theory.

Finally, the Axiom of Union allows one to take the union of a set (the set of all elements of its elements) and call that a set. This is an unordered, limited reduce that exists in set theory.

In short:

S is a set, p is a predicate and f is a function.

Comprehension: {s : s ∈ S and p(s)} is a set.

Replacement: {f(s) : s ∈ S} is a set.

Union: {x : x ∈ s for some s ∈ S} is a set.

We’re able to unify all three of these with the {} comprehension (or set comprehension). If code is a program whose behavior is ... x -> ... f(x), then {code} has the behavior:

Y1 ... YN X -> ... Union(f(x) for all x ∈ X).

where Y1 Y2 ... YN is the stack at the time the {} is encountered.

Set comprehensions may be lazy, and will have to be so if applied to infinite sets. The stack used (the Yiabove) will be the stack as it was when the lazy set was created, which may be a different time from when its elements are realized. For this reason, it’s best that code sequences put inside {} comprehensions not contain side effects, and we make it a rule that they’re not allowed to. Side effects on the stack will be ignored, and I/O becomes unpredictable.

In a language like Haskell or Lisp, you can simulate map, concat and filter using the more general mapcat, like so:

map f xs = mapcat (\x -> [f x]) xs

filter f xs = mapcat (\x -> if f x then [x] else []) xs

concat xs = mapcat identity xs
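The same unification can be sketched in Python, with frozensets and an explicit mapcat that unions the images (the function names here are mine, not SetBang’s or Haskell’s):

```python
def mapcat(f, xs):
    """Union of f(x) over all x in xs, where f returns a frozenset."""
    out = set()
    for x in xs:
        out |= f(x)
    return frozenset(out)

def set_map(f, xs):
    """Replacement: wrap each image in a singleton, then flatten."""
    return mapcat(lambda x: frozenset({f(x)}), xs)

def set_filter(p, xs):
    """Specification: a singleton when p holds, the empty set otherwise."""
    return mapcat(lambda x: frozenset({x}) if p(x) else frozenset(), xs)

def set_union(xs):
    """Union: the identity function does the flattening on its own."""
    return mapcat(lambda x: x, xs)
```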

We use the same principles to implement replacement (map), specification (filter), and union in SetBang. If F is code whose behavior is ... x -> ... f(x), then

map F --> {F"}

(Remember that the behavior of " is ... X -> ... {X}.) For example:

Stack: {{15, 5, 10}, {7, 13, 3, 2, 11, 5}, {0, 1, 4, 9}}
S∈tBang> ~{#"}
Stack: {{15, 5, 10}, {7, 13, 3, 2, 11, 5}, {0, 1, 4, 9}} {6, 3, 4}


filter F --> {~F(_",__0)}

like so with F = 5?, a program returning 1 (true) if 5 ∈ X, where X is TOS.

Stack: {{0, 1, 4, 9}, {15, 5, 10}, {7, 13, 3, 2, 11, 5}}
S∈tBang> ~{~5?(_",__0)}
Stack: {...} {{15, 5, 10}, {7, 13, 3, 2, 11, 5}}

Finally, the union is the easiest of all to write:

union --> {}

Stack: {{0, 1, 4, 9}, {15, 5, 10}, {7, 13, 3, 2, 11, 5}}
S∈tBang> {}
Stack: {0, 7, 1, 4, 15, 13, 3, 2, 11, 9, 5, 10}

SetBang comes with built-in operators for the binary union, intersection, difference, and exclusive or of two sets:

| : ... X Y -> ... (X ∪ Y)

& : ... X Y -> ... (X ∩ Y)

- : ... X Y -> ... (X - Y)

. : ... X Y -> ... (X - Y) ∪ (Y - X)

If, however, these didn’t exist, they could be written (with one caveat, explained below) using set comprehensions:

| <--> +{}

& <--> {2>~3<~3>?(_",__0)};

- <--> 2>{2>~3<~3>?(__0,_")};

. <--> 2>~3<~3>2>{2>~3<~3>?(__0,_")};3>2>2>{2>~3<~3>?(__0,_")};+{}

In truth, it’s much better to use the built-ins, and here’s why. {}-comprehensions become strange when infinite sets are involved. One will get identical behavior from 0$& and $0&, but the same is not true of the intersection program written above. Let’s look at this using the macro system (which will be featured heavily in future posts).

S∈tBang> :macro inter {2>~3<~3>?(_",__0)};
S∈tBang> $ 0 :inter:
Stack: 0
S∈tBang> 0 $
Stack: 0 0 {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...}
S∈tBang> :inter:

.... hangs forever!

As I said, SetBang prefers to be predictably dumb in handling infinite sets, rather than unpredictably smart. However, the implementation of & is smart enough to know that set intersections commute and to avoid looping over an infinite set when possible. Set comprehensions don’t do this, so 0 $ :inter: triggers an infinite loop (over $).

Laziness can usually rescue us, but the REPL demands elements. The problem with the set computed by 0 $ :inter: is that, while it is empty, it can never demonstrate its emptiness: it searches for elements and never finishes. It’s equivalent to filtering, with \_ -> False, from an infinite stream. The result is an empty stream, but the computer is not usually smart enough to figure that out (and, in fact, the general problem is mathematically undecidable). We call this ⊥ (pronounced “bot”). It’s equal to the empty set, but the computer will never know that it’s equal to the empty set: it will try, fruitlessly, to find elements forever.

The & command is smart enough to reorder arguments when doing so reduces the likelihood of producing a ⊥, while set comprehensions aren’t.
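A generator-based Python sketch of this “predictably dumb” behavior (names here are mine): ω is a lazy stream of elements, and a membership scan returns quickly when the element is present but would loop forever when it is absent.

```python
from itertools import count

def omega():
    """ω = {0, 1, 2, ...} as a lazy stream of elements."""
    return count(0)

def member(e, xs):
    """'?' in miniature: scan xs until e turns up. If e is absent from an
    infinite xs, this loop never returns -- the ⊥ situation described above."""
    for x in xs:
        if x == e:
            return True
    return False
```

member(512, omega()) exits after 513 steps, mirroring the 9^# ? example earlier; member(-1, omega()) would hang, which is why & must be smart enough to drive its loop from the finite argument.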

Power Set

The Axiom of the Power Set allows one to produce, for a given set X, the set of all of its subsets. The command for this is ^, with behavior ... X -> ... P(X).

S∈tBang> 3
Stack: 3
S∈tBang> ^
Stack: {0, 1, {2}, 3, 2, {1}, {1, 2}, {0, 2}}

These are the 8 subsets of 3 = {0, 1, 2}.

Laziness comes into play at a certain size (currently, around 2^16 elements) because power sets can be very big.

S∈tBang> 5^#
Stack: 32
S∈tBang> ^
Stack: {0,1,{1},2,{2},{0, 2},{1, 2},3,{3},{0, 3},{1, 3},{0, 1, 3},{3, 2},{0, 3, 2},{1, 3, 2},4, ...}
S∈tBang> _$
Stack: {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, ...}
S∈tBang> ^
Stack: {0,1,{1},2,{2},{0, 2},{1, 2},3,{3},{0, 3},{1, 3},{0, 1, 3},{3, 2},{0, 3, 2},{1, 3, 2},4, ...}
S∈tBang> ^
Stack: {0,1,{1},2,{{1}},{0, {1}},{1, {1}},{0, 1, {1}},{2},{0, 2},{1, 2},3,{2, {1}},{0, 2, {1}},{1, 2, {1}},{0, 1, 2, {1}}, ...}

In theory, we can place huge sets on the stack. $^ isn’t only infinite but uncountably infinite. That said, SetBang can only generate a finite number of values, so most elements of that set P(ω) (namely, infinite subsets of ω) will never be generated.
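For finite sets, ^ can be modeled directly in Python with itertools; this sketch (names mine) makes no attempt at the lazy, infinite case:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of a finite iterable, as frozensets."""
    xs = list(xs)
    rs = chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))
    return [frozenset(s) for s in rs]

# 5^# put 32 on the stack above; a 5-element set has 2^5 = 32 subsets.
subsets_of_five = powerset(range(5))
```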


One historically controversial axiom in set theory is the Axiom of Choice, which says that there exists a “function” f : Set -> Set such that f(X) ∈ X for all non-empty X.

Our choice command, \, has the behavior:

... {} -> ... {} {}

... X -> ... (X - {e}) e where e ∈ X 

Which element does it pick? This is where SetBang gets weird and hilariously impractical: in this implementation, it’s nondeterministic. (It is not, however, random.) You can’t, in general, rely on elements being chosen in the same order. Why is it this way? Due to our needs as finite creatures, how a set is computed determines what we can observe. (This is the difference between {} and ⊥. The latter takes an infinite amount of time for one to figure out that it’s empty.) The easiest way to impose a choice function on sets is to create an ordering on them and, as a matter of rule, choose the least element. However, as implemented, infinite sets are processes that generate elements and one may never know when the least element has been generated.

I might remark that this presents a difference between an idealistic notion of choice (a function that exists platonically) and a practical desire, when possible, to produce elements quickly. SetBang favors the latter. Iterated choice will produce new elements in some order (until the set is empty).

For convenience, there’s also a choose-many command ` whose behavior is:

` : ... X K -> ... (X - Y) Y where #Y = min(#K, #X) and Y ⊆ X

It’s used like so, to choose 9 elements from 65536 = {0, …, 65535}:

S∈tBang> 4^#^#
Stack: 65536
S∈tBang> 9`;
Stack: {0, 60363, 20083, 25535, 39036, 10859, 41408, 65065, 50300}

The mathematical Axiom of Choice is not actually necessary for SetBang’s choice operator to be allowed, insofar as only finitely many choices can actually be made, computationally speaking, and Choice isn’t required for that. So, to be pedantic, it’s technically {\"} that requires the Axiom.


The Axiom of Regularity is one of my favorite set-theoretic axioms. The others all define ways to construct sets, while Regularity imposes an upper bound on what can be sets. Strictly speaking, it says that every non-empty set contains an element that is disjoint from it. This prevents cycles in the membership relation (e.g. a ∈ a or b ∈ c ∈ d ∈ b) that would raise paradox-inducing questions around “set-ness”. Luckily, we don’t have to worry about that in SetBang: there is no way to produce a set that violates the Axiom of Regularity.

The rest of the language

We’re already Turing complete. We can express general recursion in {} comprehensions, and we can get things out of sets, but so far we’ve been writing programs at the REPL that don’t actually do anything. If you run a file consisting of 00/ at the command line, you will compute the set {0}, but not actually do anything with it.

SetBang, therefore, comes with two I/O operators: ! and @. Their behavior is as follows:

! : ... X -> prints the byte valued min(#X, 255) to stdout

@ : ... -> ... C where C is the natural number corresponding to one byte read from stdin

It might be tempting to use this in conjunction with set comprehensions, but that’s not a good idea, because set comprehensions are lazy, making it unpredictable when that I/O will happen. (In fact, it’s not legal SetBang to do I/O in a set comprehension.)

Imperative, eager loops can be realized using square brackets ([]). The behavior of [code] is, if TOS is empty, to skip the block entirely. If TOS is not empty, then the program code is executed and TOS is checked again, with execution repeating until TOS is empty.

Thus, this program is an “echo” that will take one character from stdin and echo it to stdout. It never terminates.


This program will put the bytes 10, 9, 8, …, 1 to the console.


The behavior of \_#, please note, is comparable to decrement (except on 0, where it returns 0, because there is no -1).
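Modeled on von Neumann naturals in Python (helper names are mine): \ splits off one element, _ drops it, and # takes the cardinality of the remainder, which is one less, with 0 mapping back to 0.

```python
def von_neumann(k):
    """k = {0, 1, ..., k-1} as a frozenset."""
    n = frozenset()
    for _ in range(k):
        n = n | frozenset({n})
    return n

def dec(n):
    # '\_#' on a natural n: remove any one element, then take the
    # cardinality of what remains, i.e. #n - 1. On {} the choice leaves
    # {} on the stack, so the result is 0.
    if not n:
        return 0
    rest = set(n)
    rest.pop()        # which element is removed doesn't affect the count
    return len(rest)
```

That set.pop() removes an arbitrary element even mirrors, loosely, the nondeterminism of SetBang’s choice.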

What else?

There are 42 characters that can serve as SetBang operators. We’ve seen 41 of them.

  • 0 for putting empty sets on the stack. Also: 1, 2, … , 9 as macro-like replacements for 0', 0'' and so on.
  • = and ? for equality and membership testing, returning 1 on true and 0 on false.
    • e.g. ... 4 2 -> ... 1 under ?, since 2 ∈ 4 = {0, 1, 2, 3}.
  • + for pairing (... X Y -> ... {X, Y}).
  • ' for the increment operator (... X -> ... X ∪ {X}).
  • " for “quoting” a value into a set (... X -> ... {X}).
    • Equivalent to ~+.
  • _ to pop from the stack (... X -> ...).
  • ~ to duplicate TOS (... X -> ... X X).
  • < and > for rotations on the top of the stack,
    • e.g. 3> is a program whose behavior is (... X Y Z -> ... Z X Y).
  • ; as shorthand for 2>_ (“swap-drop”) with behavior ... X Y -> ... Y.
  • {} for set comprehensions, which build sets based on existing sets.
  • &, |, -, and . for intersections, unions, differences, and exclusive-or, respectively.
  • \ (“choice”) and its (right-)inverse / and the iterated choice `.
    • \ : ... X -> ... (X - {e}) e
    • / : ... X Y -> ... (X ∪ {Y})
    • and then, e.g., ... {2, 3, 5, 7} 2 -> ... {2, 5} {7, 3} under `.
  • ^ which creates the power set (... X -> ... P(X)).
  • # for replacing TOS with the natural number given by its cardinality.
    • ... X -> #X works reliably when X is finite.
    • when X may be infinite, you’ll get a lazy possibly-infinite set.
    • does not handle transfinite cardinalities. It doesn’t detect uncountability, for example.
  • () and (,) for conditional execution based on TOS.
    • (^'#) executes the program ^'# if TOS is non-empty.
    • (^'#,_1) is similar but executes _1 if TOS is empty.
  • $ to push an infinite set {0, 1, 2, …} on to the stack.
  • ! for output and @ for input. Both convert bytes to natural numbers between 0 and 255 inclusive.
  • [] for looping.
    • [\_#] will execute the program \_# until TOS is empty.
    • 1[] will loop forever (doing nothing).
  • : for macro invocation.
    • Canonical example is :macro swap 2>. Then :swap:_ -> 2>_.
  • % for ... X Y -> ... {{X}, {X, Y}}, a sort of special pairing.
    • Could be implemented as 2>~~+3>++.

There’s one more (using the character *) which, although useful, is also not necessary as a primitive of the language. If we wanted to build it, we could represent it as:


For now, I’ll leave this “mystery operator” unexplored, but we’ll come back to it in the next installment. It turns out to be very useful.

Going forward

So, that’s the language. There are many questions to ask about it, though, such as:

  • is it possible to make infinite sets “smarter”? For example, the program ${'"}0 searches for 0 in the set {1, 2, 3, …}. Because SetBang is deliberately dumb in interpreting infinite sets, that program will search forever. Is it possible to make it smarter?
  • Full SetBang has a lot of redundant commands. Is it possible to strip some of them out of the language? What’s the minimum possible SetBang?
  • How should we handle error conditions (e.g. stack underflow)?
  • Can we get better at handling larger ordinals like ω+1, ω*2, ω^ω and the like?
  • What do useful programs in SetBang actually look like?
  • SetBang is slow. Is it possible to make it faster?
  • Is this language Turing complete; that is, as powerful as any other programming language? It turns out that the answer is Yes, and I’ll prove it in Part 2.

What I Fought For

Before reading this, please read Jeff Vogel’s excellent essay, “How I Deal With Harassment, Abuse, and Crazies In General”. His experiences, as one who has encountered both the positive and negative aspects of publicity, are eerily similar to mine.

“A Reputation Problem”

To put things in context, I’ll recount some recent happenings.

This past February, I left a position as an R&D Director at a “unicorn” startup. Why? A Fortune 1000 company wanted to build an applied logistics lab in downtown Chicago, and had selected me to be its VP of Engineering, in charge of a 7-figure hiring budget.

That turn of events was much of why I removed many of my old blog posts. It’s not that I don’t enjoy writing. I do. When you’re an executive, though, you have to be extremely careful in what you say, because you’re going to have a large number of people watching your every move and fretting about whether it will affect their jobs. Pulling out of the blogging game seemed the prudent thing to do, at least temporarily, for the sake of the people who’d be reporting to me.

This company’s “Labs” division was set up to open on March 28. It didn’t. Why? Out of respect for the privacy of others involved, I’m not going to get into the details. It was a mix of others’ life events and mundane difficulties inside the parent company. I’m not even inclined to complain. Others, who pulled up stakes to move here for the project, were hurt more by the closing (or, more accurately, the never-opening) than I was.

During February and March, that project’s fate became more uncertain. I started talking to recruiters, in order to figure out what I would do if the project were scuttled.

I managed to get myself introduced to someone whom the top hedge funds in Chicago and New York tap when they need CTOs and partner-level hires, and he’s known for speaking candidly. Here’s what he said (emphasis mine):

If it were anyone else with your talents, I could place you pretty much anywhere. 300, 400 grand a year base, direct mentorship from people like [billionaire hedge fund manager] and [another billionaire hedge fund manager]. You’d be one of the best people they’ve seen in years, and unlike the people around them, you’re honest and ethical. The thing that’s holding you back is… a reputation problem. People Google you and think, “This guy’s going to start a union.”

This is going to make it hard to place you. Guys like [name] don’t like people they can’t buy. If I were you, I’d start blogging again and come out against unions. Do that for six months, and I can put you anywhere you want to go.

I’m not going to come out against unions. Collective bargaining is a complicated topic. I could gain, economically, in voicing a simplistic view; but it wouldn’t be intellectually honest to do so.

I might as well take this opportunity to explain what I do and do not believe.

Brass tacks

“A reputation problem” is a polite understatement. There’s a misconception about me, which is that I attempted to unionize Google. Nothing of the sort happened. Nonetheless, several of the top internet search hits for my name are people attacking me, mostly for reasons that stem from the events that have been misconstrued in this way.

With all of this misinformation out there, employers might see me as a radical unionist. In reality, I’m an intellectually honest (and harmless) moderate who’d rather just do his job. I like doing things that I’m good at, and I’m extremely good at my job. On the other hand, I’ve never organized a union before and don’t know if I’d be any good at it.

Silicon Valley is aggressively opposed to unions. Companies routinely share lists of suspected unionists, and people who end up on these lists can be blacklisted, assigned to the worst managers if “discovered” once inside the company, harassed in public, and subjected to all sorts of libel.

For example, I’ve had people whom I’ve never met lie about my performance at Google and the circumstances under which I left. I’ve been subjected to two high-profile, bad-faith website bans (Hacker News, Quora). When I started an “Ask Me Anything” thread on Reddit, hoping to keep the discussion confined to my technical interests, I faced a voting ring comprising at least 45 accounts (that is the number detected by Reddit; it could be higher) that drowned out substantive discussion. The top comment asked me why I was a “mega-douche”. It seems that tech thugs, while dedicated to their work, aren’t very creative with their insults. (If these people ever need a competent writer, I happen to “know a guy”.)

I’m not even “a unionist”. I just choose not to reject the idea out of hand. Because of my mathematical training, I don’t reject many ideas out of hand; I wait for proof or refutation before assuming certainty. I’m philosophically agnostic. Yet, for having the intellectual honesty to consider that collective bargaining might be worth consideration, I’ve been attacked, over and over again for years, by some of the worst people in Silicon Valley.

Now, am I going to walk into my next job and start a union? No. I doubt that I have the organizational skills for it. We’re talking about programmers. I’d call that a cat-herding task, but cats are prettier and less vicious, in my experience, than most private-sector software people. Do I abstractly believe that software programmers have the right to some form of collective bargaining? Sure, but I’m not planning to build it at my next company. I’d rather build great products and solve hard problems, that being more in line with my core competencies.

So what do I believe?

First of all, I believe that collective bargaining is a right. Corporations are collectives that bargain on behalf of the rich people who own the means of production. The workers ought to have similar backing. It’s only fair. In fact, most respected professional organizations are labor unions under a more genteel name. The American Medical Association is a doctors’ union. The Society of Actuaries is a union. Many of these unions are, on the whole, net positive for society. The AMA has done some evil things, but we’re probably better off with it than without.

Second of all, I recognize that there is a diverse array of collective-bargaining arrangements, and that not all are good. “Unions” have a negative reputation in the United States and, sure, there are bad unions out there. There are also good unions, and good union workers. The negative reputation of unions comes more from propaganda and clever redefinition of the term “union” than anything else.

Traditional labor unions emerge (and, often, work very well) when the labor has been deemed by both sides to be a commodity: for example, coal production. If commoditization of the labor is irresistible, then the union tries to ensure that it happens on fair terms.

In fact, the strongest objections against programmer unions come from software engineers who resist allowing their work to be viewed as a commodity. To this, I am sympathetic and in intellectual agreement. It would be a better world if we were artisans. That said, we’ve failed to oppose commoditization of our labor for as long as we’ve been without a collective arrangement; in fact, we’ve been subjected to open-plan offices (surveillance) and “Scrum” practices that reduce our labor to piece-work. If we believe that our work is a commodity, then we should not be resistant to unions. If we truly believe that it’s not, then we ought to consider a professional organization or a guild system that can defeat the commoditization that has happened thus far.

Professional organizations are unions that exist to uphold ethical principles (for doctors, don’t kill patients; for journalists, print the truth) that ought to supersede both economic concerns and managerial authority; they exist to dictate terms of partial commoditization. Likewise, high-end unions (as in Hollywood and professional sports) and guilds exist to enable those who view themselves as athletes or artists (and who might, therefore, insist that their work is not a commodity, but must sell their labor to others who see it precisely that way) to coexist with commercial players. Most likely, a structure that will actually work for software engineers will be somewhere between the lightweight unions of Hollywood and the professional societies that doctors and lawyers and actuaries have.

Third, I am (surprisingly?) more skeptical of software engineer unionization than supportive, at least right now. Why is that? For a while, software engineering has been dominated by the venture-backed startup culture. We’ve had the worst people driving out the good for at least 15 years, if not longer. We’ve ended up with an exclusionary macho-subordinate culture. We have sexism, ageism, classism, racism, and widespread harassment in our midst. We can’t blame all of this on management. Tech-company executives certainly use such divisions to keep software engineers from uniting and demanding a larger share of the value that we create, but we as engineers also allowed those divisions to exist in the first place. If we create a union without fixing our culture at the same time, we’ll probably generate one of the bad kinds of unions (the garbage-in, garbage-out principle). We should fix our culture first. That’s the top priority.

Fourth, we can nonetheless learn from existing professional and trade unions, and start figuring out what will and will not work. Most programmers believe that all unions negotiate (and regulate) compensation and that, therefore, top performers will see mediocre wages. That claim is demonstrably untrue: professional athletes have unions, as do screenwriters and actors, and nothing has prevented top performers from earning 7- and 8-figure compensation. Most likely, an effective union for software engineers would not try to set salaries, but would give employees additional rights when it comes to how they are managed. We could kill stack ranking and improve working conditions, and I don’t see a good reason not to do that. We could guarantee employees the right to an independent representative on issues surrounding performance management and (if necessary) separation. Again, that’s a no-brainer. We could force employers to act in a way that avoids harm to the individual’s reputation. Who would be hurt by that? No one.

Fifth, it’s important to note that it is impossible to unionize a workforce that doesn’t want to have a union. I’m well aware that Silicon Valley is terrified of unions. However, any company that declines to hire me based on my reputation or “cultural fit” is making two assumptions: (1) that I want to start a union and (2) that I can. The first is not especially true, and the second is very likely false.

Let’s be honest here. Software engineers are individualistic even when it goes against their own interests. Decades of Silicon Valley propaganda have worked. Programmers have been successfully divided into warring tribes based on gender, geography, age, and even choices of tools. An extremely charismatic person might be able to unite all these tribes, like Mance Rayder in Game of Thrones. As for me? My social skills are pretty average. It’s simultaneously flattering and annoying that some employers have thought that I’m capable of single-handedly unionizing their workers, in addition to everything else that I have to do. In reality, that’s not a credible threat.

Sixth, one should note that unions are often good for the companies that are unionized. The (incorrect) common viewpoint is that, since unions drive up workers’ wages, shareholders are losing money. This isn’t necessarily true. If the union’s effect is to make the internal labor market fairer and more efficient, and to improve workers’ conditions, that might actually result in the company getting more value out of its people. Companies can actually become more profitable after being unionized. The workers, after all, have no more desire to kill the company (that they’ve worked so hard to unionize) than the owners.

Who gets hurt by unions? Executives, most of all. Managers lose authority and power, and their jobs become more complicated. They can no longer unilaterally terminate people, which means that extorting workers into supporting managers’ own career goals is no longer an option. For owners, however, unions often mean that a better product is produced. So, while wages increase, the net effect on profits can be positive or negative.

Managers dislike unions because they’re perceived to be a vote of no confidence from their workers. However, as pertains to Silicon Valley, such a vote is in order and has been for years, because Silicon Valley is so severely mismanaged.

Seventh, a company-specific union won’t work for software engineers. As in Hollywood, we have project-based careers and it’s normal to change companies frequently. The most important function of a technologist’s union will not be to protect jobs but to protect our reputations and our professional autonomy across the course of our careers. Any collective bargaining structure will have to be like the Hollywood actors’ and writers’ guilds in terms of conferring benefits that persist independently of the member’s specific employer.

Eighth, unionbusting is one area in which otherwise competing malefactors will cooperate. This isn’t surprising, but its implications are disturbing.

For example, in Silicon Valley, there are employers who contact references that the candidate never provided (“back channel” reference calls). Why is this practice, an instance of ethical degeneracy that facilitates illegal activity, tolerated? It’s because these companies want to share blacklists and bust potential unions. Companies generally don’t care to share dirt on garden-variety low performers, because the legal risks of doing so are too high; but the few people every year (almost invariably high performers) who are suspected of unionism will have a hard time. Companies will break their own rules when it comes to those people.

These employers know, of course, that most of the people named on these lists are no threat to them. They also don’t care. Paranoia trumps the rights of the innocent.

Why are they so paranoid? It’s hard to say. From where I’m standing, the probability of Silicon Valley technologists unionizing in the next ten years is very low. There is a sense in Silicon Valley’s elite that doomsday is around the corner, and programmer unions are one purported risk. Not only is this unlikely, but if it did happen, it would probably be beneficial to the ecosystem as a whole. A union (or guild or professional society) that killed Scrum and open-plan offices would make programmers more productive and result in better products and higher profit margins. The paranoia around “The U Word” just isn’t justified. For existential threats to the technology industry, I’d worry more about Silicon Valley’s investor class and the industry’s management.

The next 40 miles

As of April 2016, there isn’t much evidence of a desire among software engineers to unionize, professionalize, or organize in any way that would threaten the status quo. This might be the case because the startup culture is powered by its low-level workers’ unrealistic expectations (e.g. the fresh college grad who works 90-hour weeks because he thinks he’s going to be a CEO in three years) and because those who have more experience are usually purged. The VC-supported startup culture of misogyny and age discrimination (see: open-plan offices) serves to exclude people with the diversity of experience that we’d need in order to organize, and I believe that this is the true purpose of that culture.

What should we be doing? The first thing that we need to do is fix software’s culture. With any collective organization, we’re not going to have much success until we do this. We need to stop driving out women, programmers over 30, and people from non-traditional racial or career backgrounds. Diversity is a strength, so let’s not lose it.

Let’s go further and be blunt about a few things. Women, on average, have more organizational and social skills than men. Older people generally have acquired more of these skills than people who just got out of college. If we want to band together and kill employee stack ranking or negotiate for better parental leave policies or change a company’s performance appraisal system, then we can’t afford to tolerate a culture that divides us and that drives such people out, justifying it with juvenile explanations of “cultural fit”, and thereby leaving only the most manipulable. Why does Silicon Valley insist on hiring only young men? Because they don’t have the experience and organizational skill to threaten management’s interests. It’s easy to exploit them. While those fresh college graduates are almost never as good at programming as their more senior counterparts, that often doesn’t matter in a style-over-substance startup culture where companies exist to be sold quickly, rather than being built for the long term.

Next, and here’s where I get truly controversial: I think that it’s important to solve the charlatan problem. We have a labor market flooded with incompetents, and we suffer under micromanagement frameworks like “Scrum” that are sold on the promise of making such people productive (they don’t work, but that doesn’t always matter).

I think that the best way to drive out the non-programmers and the non-serious technologists is to implement an exam track similar to what the actuarial sciences use. We want to keep the programming profession open to everyone who has the ability, drive, and intellectual curiosity, while at the same time preventing management from flooding the labor market with an inferior-but-manipulable substitute: open-plan Scrum drones.

Exams aren’t perfect, and I’d love to refine the process to allow project-based alternatives for people who aren’t great test takers, but they’d bring us a lot closer to meritocracy than what exists right now, in which a programmer’s capability is usually assessed according to outmoded reputation metrics (which I’m good at gaming, but at the cost of my soul) like job titles assigned by employers. The field is polluted with biased and manufactured information. Just knowing who the legitimate programmers are, and how to find and identify them, would be a step in the right direction.

What I like about an exam track is that it’s blind. Your answers are scored by people who have no idea whether you’re male or female, white or black, 17 or 73 years old, Bay Area royalty or Blackfoot Lakota. I’d like a system where everyone who has the talent for this field, and the drive to learn the broader discipline of computer science (knowledge that matters if one wishes to design systems correctly), can get in. Blind grading delivers a massive improvement over “culture fit”, under which a person can be rejected for not having played beer pong for ten years or for not using PUA lingo in regular conversation. I know a lot of women who’ve switched from software engineering to the actuarial sciences for exactly that reason: they wanted a career where their progress would be measured objectively, rather than by a bunch of young, arrogant men applying the “just like me” metric of human value. Who can blame them, really?

She rides for Zelos

Is what we do, as software engineers, important? Is it ethically meaningful? I would like to say “yes” but, at the same time, most of what I’ve done in my career has been meaningless to the advancement of human society. It hasn’t mattered, and that bugs me. I think that, as a community, we need to start thinking about the ethical ramifications of what we’re doing.

There are the small failures that I see on a day-to-day basis. One trait of true professionals is that people inside the profession never criticize each other to outsiders. That’s not necessarily for the purpose of secrecy, but because the outsiders will lack appropriate context and become dangerous, even if they don’t mean to be. For example, let’s say that two engineers disagree about source code formatting. It’s just one of those stupid topics that programmers get into heated conflict over, because there isn’t one clear right way to do it.

What happens, though, when such a dispute gets escalated to non-technical management? To the manager’s ear, it sounds like one of the engineers is willfully refusing to conform to “correct” source code formatting and therefore singularly responsible for the disagreement. Whom will that manager judge to be in the wrong? Well, since there isn’t an objective right way to format code, the person with more political pull will win, and the other will lose and be labelled as the nonconformist. Non-technical managers exposed to engineering disagreements, but feeling a need to Make A Decision, often take those issues out of context and become TWiGs (Toddlers With Guns). As a general rule, I think that we won’t be respected as a profession until we develop the social and individual skills to handle these matters, whether they’re minuscule disagreements or genuine performance issues, internally. As programmers, we’re far too quick to tattle on each other to management and that gives us the reputation of being spoiled, untrustworthy children.

Still, there are much bigger ethical failures, more worrying than our internal affairs. I don’t consider advertising or statistical arbitrage to be unethical, much less evil. (In fact, Wall Street is, on the whole, far more ethical than its reputation suggests, but that’s another discussion for another time.) However, some industries are plainly evil. Employee time-tracking software is a major business. Who writes this code? Someone does, because we don’t collectively have the muscle to deny talent to the purveyors of evil. We ought to be applying our skills to finding cures for cancer, or to energy sources that can replace fossil fuels. Most of Silicon Valley, however, is just helping rich people create and profit from unemployment.

The question I’ve had to ask myself, again and again, is: does any of this stuff matter? When I’m reading math or CS papers at 9:30 on a Sunday night, what on earth am I working toward? Is it worth it, given the professional adversity that comes with the fight, to keep giving a damn about the ethics of what we do as software engineers and to fight so hard to be a better person, both technically and morally? Can we actually exert any influence over whether technology becomes a force for good instead of evil? I don’t know. I wish I did.

What gives me hope, oddly, is the adversity that I’ve experienced. I wouldn’t have had to deal with any of the nonsense that I have, if these ideas (even when expressed, as I have, with moderation and justified trepidation) weren’t a threat to people. My experience at Google has had some undesirable publicity. A billionaire venture capitalist extorted Quora into banning my account. Paul Graham blocked me on Twitter, even though we had no significant interaction (we spoke for about two minutes about programming languages in 2007) to that point. The bad guys know who I am. This is indicative of a high likelihood that I’m on to something.

I believe that we, as technologists, can take back our industry. To do so, we need to stop treating our careers as things that other people (managers) and institutions (employers) make happen to us, and step up and make things happen for ourselves. Toward this end, we need to work as a collective and develop a “slapping one is slapping all” mentality. If someone is denied a job based on a back-channel reference call, then it’s morally incumbent on those with knowledge to let that person know, and help that person raise hell. It is the morally right thing to do.

By the same token, we’re in a state that is both degraded and paradoxical. In some ways, we need to make the software industry more inclusive. We need to halt the processes that drive out older programmers, women, minorities, people with disabilities, and others who are culturally vulnerable. We need to build a culture that includes and welcomes all people of talent, not just people who look like me. At the same time, we need to make the industry more exclusive, in the sense of protecting everything that matters. There are a lot of programmers with no intellectual curiosity and no sense of craftsmanship, and those open-plan Scrum brogrammers (who implement the sexual-harassment culture and the ageism that needlessly drive out talent) just need to go. This is why I am so adamant in supporting the institution of an exam track. We should let in the people who have the drive and curiosity to become good at programming, while excluding the incompetents who are here because they’ve heard that there’s easy money. (In other words, we need to replace a “get rich quick” easy-money culture, built on lies, with a “get rich slowly” hard-money culture, built on truth.) The easy-money crowd creates a culture of immaturity, narcissism, centralized managerial power, and harassment, instead of a culture of technological progress and self-betterment.

Let’s talk about the near future. The VC-backed bubble is going to end, just like the one in the late 1990s did. The toxic unicorns are likely to die, and they might take Scrum and open-plan offices and the harassment culture with them. Let’s hope so. However, my fear is that, due to our lack of organization, legitimate programmers might also see our salaries and conditions decline. How confident should we be that a falling tide will only wreck the boats of the easy-money brogrammers and the sleazy tech executives? I’ve been in this industry for too long to have that kind of confidence. The crash will hurt. The scumbags in charge of Silicon Valley will do everything they can to make sure that we lose our shirts before they lose theirs.

What happens then? When jobs disappear in large numbers, worker leverage declines. Motivation to fix the problem increases (read: people get pissed off) but the ability to do so declines. We can’t assume that “the crash” will hurt the bad guys and not the good. If we don’t plan and organize around our own interests, one way or another, then history suggests the opposite.

The problem faced by those of us who are legitimate technologists is that the labor market has been flooded with an inferior product: the open-plan Scrum brogrammers who aren’t good or even acceptably mediocre programmers, but who look the part (“cultural fit”). They can’t fool real programmers, but they can fool non-technical management and even investors. Why are they beating us? For one thing, they’re winning the battle for cultural influence, just because there are fewer legitimate programmers than charlatans who can be trained and paid to act the part. The open-plan drones have redefined “programmer” to mean “naive semi-privileged idiot” instead of “trusted professional”. They’ve also been able to sell, to management, a perception of flexibility. You can hire twenty-five of them in an afternoon, and they’ll work long hours and tolerate punishing work conditions. This would sound great, except for the fact that their technical abilities are abysmal. They’re so terrible, in fact, that they’re often negatively productive on a project with any technical meat. Worse yet, if they’re given any creative or ethical responsibilities, the results are catastrophic.

Why has the market for programmers been flooded by inferior replacements? The perception is that technical excellence doesn’t matter, and that the market situation proves it. I disagree strongly. As far as I’m concerned, if a technical problem isn’t solved correctly, it hasn’t been solved. A mathematical proof that covers four out of five cases is not a proof. Who would want to use an email system that drops one message out of ten? Would people use a social networking site if their private messages were exposed to the public through irresponsible engineering? Technical excellence matters a hell of a lot. Getting the right answer is important. Unfortunately, short-term incentives often produce misalignment, and long-term consequences take so long to come to fruition that powerful individuals (such as technology executives) can avoid accountability for their own bad decisions.

Bad software engineering and terrible corporate cultures hurt companies. They hurt society. The problem is that they never hurt the individual managers and executives who create these illnesses, in search of short-term profit. Those people get promoted away from the messes they create, and someone else gets stuck having to clean up.

How do we fix this? I don’t know, but I know who will have to solve the problem. It will have to be us, the competent and responsible technologists. No one else knows just how much society is losing right now. Most of society doesn’t even know that the technology industry is so badly run, much less how much value has never been realized because of that fact. Only we, right now, are remotely aware of that. All of this said, we’ve solved problems that are much harder, and we can hack this one. If we can put a person on the moon, we can organize around technologists’ shared interests (as well as those of society). We just need to wake up to the fact that this time it’s a political problem rather than a technical one that is most important.

Appendix: questions I get asked

Here are some questions I’ve been asked, in my personal and professional life, on the topics above.

“Why did Google think you were a potential unionist?”

I said something on a mailing list, pertaining to a specific product, that had a lot of internal visibility. This was a mistake on my part. The comment was taken out of context and developed a life of its own, leading to a suspicion that I was a union organizer.

It wasn’t, and isn’t, true. At the time, I held no strong opinions about software unionization. After having my reputation damaged (something that unions could protect a person from) by this experience, I am far more sympathetic to that cause.

“How did you find out that you were on a Silicon Valley unionist list?”

I’d heard rumors for years. In March 2014, a source sent me internal documents that confirmed the list’s existence and that I was, at one time, on it. Another source corroborated this, two months later.

Later that year, I applied for a job and was told (by a close friend inside that company) that I had been turned down for an interview only because my name appeared on “Google’s List” of suspected unionists.

“How is blacklisting enforced?”

Since these lists are illegal, they are usually only shared at an executive level.

The targeted person is usually subjected to negative, unsubstantiated rumors pertaining to health, work performance, and (until recently) sexual orientation. For every person in-the-know who engages in deliberate unionbusting, there are several “useful idiots” whose role is just to reliably repeat the gossip they are fed.

“How many people are on suspected unionist lists?”

Because these lists are illegal, it’s hard to know. An educated guess is that this affects at least 200 people per year in the U.S. technology industry. While this represents less than 0.01% of all professional programmers, a surprisingly high number of people on these lists are pillars of our community.

Many people are placed on these lists by mistake and often they have no idea what is going on. If they’re older, I’d presume that they’d attribute their difficulty in finding employment to age.

“Does Google still retain or participate in this sort of unionbusting?”

I don’t know. I’ve been told that Google dismantled the machinery that it used to track suspected unionists.

However, if the company still uses stack ranking (and I do not know whether it does) then it is overwhelmingly likely that the unionbusting is just hidden in the stack-ranking machinery.

In fact, the point of employee stack ranking is, traditionally, to intimidate employees and prevent them from organizing. Rich companies actually don’t care about harmless low performers who draw salaries but do little or nothing. (I’ve seen people like that hang on for years.) They do care about unions. They’re scared to death of an organized labor force.


“Have you pursued legal action?”

I’ve received settlements due to specific cases of defamation.

I do not believe that I can pursue the larger matter without putting the careers of my sources in jeopardy, so I won’t. Moreover, I live in Chicago and am not especially concerned with whether my name is on a Silicon Valley blacklist.

“I think I’m on a unionist black list. How do I get my name off of it?”

I have no idea. It’s almost impossible to get hold of these lists, because their existence is illegal. My advice would be to leave Silicon Valley.

Wall Street is less paranoid about unions, insofar as it actually pays its people. It’s unlikely that firms there care about Silicon Valley’s suspected-unionist lists. If your professional reputation, in the broader sphere, hasn’t yet been trashed, consider moving there. If it has been damaged, then you need to hire a competent attorney.

“Someone said you were a poor performer at Google. Were you?”

I have to answer this one directly and truthfully. To start, I’ve been an average or strong (and usually quite strong) performer at every job that I’ve held. However, I was only at Google for five and a half months and did not get much time to distinguish myself. So, I would not place myself in the “strong” bucket, at least not on technical contributions, during that time.

It’s rare, at Google, that someone commits substantial code in the first 6 months, because of the amount of time that it takes to learn the codebase. My experience wasn’t an exception. Since I was there for such a short time, and spent much of that time in a war that I didn’t start, I didn’t get to check in much code. That part is true.

I was considered by my peers to be an average performer.

“Do you expect me to believe that everyone who doesn’t like you is part of an illegal unionbusting conspiracy?”

No, of course not. It’s impossible to have any public presence without being disliked by someone. I hold strong views, and I express them boldly.

Had I not been placed on a suspected unionist list in 2011, I doubt that I would have had nearly the amount of publicity that I’ve had. If I had any notable reputation, in this alternate world, it would be a strongly positive one, as it is among people who actually know me.

“Why do Paul Graham and Y Combinator hate you?”

That is unclear, actually. They started the fight, and I’d be happy to end it. I would guess that it’s a mix of political disagreement and the personal pettiness of specific individuals.

“Why are you banned on Hacker News?”

See above.

“Why are you banned on Quora?”

Quora has received funding from, and is at this point largely controlled by, Y Combinator. “An investor” (probably from Y Combinator, but this has not been confirmed) threatened Quora’s management that, unless I was banned in a public way, they would be rendered unable to raise future funding. Vendetta-driven, intentional product failures are more common in Silicon Valley than most people would like to believe, and they come as often from a company’s investors as from its management.

I was a top contributor to Quora. I had more than 8500 followers (and, now, over 9000) and was a Top Writer in each year. My answers were published to many of Quora’s media partners, including Time, Fortune, the BBC, and the Huffington Post.

Because I was a model contributor, my ban caused a substantial loss of faith, among Quora’s community, in its moderation. User engagement, especially among users deemed to be high-value content creators, began to decline. In reaction to this, Quora asked me to rejoin its community in February 2016, and offered a cash settlement. I turned their offer down.


“Could you sign a document agreeing not to support a union, as a condition of employment?”

I don’t think that that’s legal.

In truth, I would find it needless to sign such a thing. Given programmers’ strong anti-union/libertarian tendencies, starting a union among them would require monumental organizational ability and charisma. I consider myself very average in those abilities and would not constitute a credible threat.

Besides, I’d rather write code and solve hard technical problems. Organizing labor is admirable and important work, but that’s not my skill set.

“Why do you air ‘dirty laundry’ about ex-employers?”

I don’t. I made that mistake, once, with Google.

There are some who believe that “biting the hand” deserves a professional death penalty. Such people should be counted among the enemies of progress and treated as such. Nonetheless, I still believe that it is unwise, for an entirely different reason, to disparage an ex-employer.

That reason is this: in practice, the workers are more exposed to the corporate reputation than the executives (excluding the CEO). Bad-mouthing an ex-employer hurts the wrong people. That’s why I’ll almost certainly never do it again.

I have exposed unethical behavior by Y Combinator and Quora. I never worked for either of these companies.

“Do you, broadly, support Silicon Valley unionization?”

As above, what’s more important than formal structure is fixing the culture. I would rather commit my time to (a) upholding and promoting a culture of technical excellence while (b) driving out exclusionary behaviors.

Until we have the right culture, formal unionization is probably the wrong way to go: garbage in, garbage out. Developing an exam system (as in the actuarial professions) or some other way of formally verifying talent and credibility is a much smaller step that I would support immediately.

“How do you respond to people who believe that unions in Silicon Valley will kill its ability to innovate?”

Of course, that is possible. Silicon Valley already seems to be at a nadir in terms of innovation, but the wrong labor structures could make it even worse.

However, a collective arrangement, which probably would look more like a professional organization (Society of Actuaries) or a lightweight union (Screen Actors Guild) than a traditional labor union, could also improve the situation greatly. How so? If people who actually make things get more respect, they’ll be more engaged and the products made will be better. If they make more money, they’ll be able to fund more interesting projects and “bootstrap”.

For the past twenty years, the biggest threat to Silicon Valley innovation hasn’t been some union bogeyman but the technology industry’s management. A common slur against unions is that they can promote a culture of complacent, arrogant mediocrity. That culture already exists. See: Scrum, open-plan offices, business-driven engineering. Unions could make that culture worse, but they could also drive it away.

“Have you ever tried to start a union?”

No. As I’ve said above, the suspicion that I was a union organizer was never true, and organizing labor is not my skill set.

“What do you think of Google now?”

I harbor no ill will toward Google. In fact, I met a lot of very talented people during my time there, and it’s upsetting that there was so much awkwardness surrounding my own experience, which was atypical for me and for Google.

“What have you been up to recently?”

I mentioned above that I was selected to be the VP/Engineering for a company that, for reasons having nothing to do with me, decided to non-exist. Oops.

Since that opportunity self-vaporized, I’ve been looking around the area for people who need top technical talent: machine learning, functional programming, or distributed programming, at a Principal+ Engineer (as an individual contributor) or Director+ (as a manager) level.

I’ve got 10 years of experience as a software engineer, know more machine learning and statistics than most self-proclaimed “data scientists”, and function well as an individual contributor as well as in leadership. So I don’t expect to be on the market for too long.

To keep myself sane, I’ve been spending a lot of time at the gym and reading a lot of math papers. I’ll probably pick up a book on Erlang/OTP, because that platform seems interesting.

“Have you had significant employment difficulties, because of your vocal support of programmers’ rights?”

Severe ethicality can provoke cowardice in some people, sure, so I haven’t been entirely free of professional adversity; but yes, I’m able to get jobs.

I have had to work very hard to overcome adversity. I’m often reading technical papers at 9:30pm on a Friday night. (“A mind needs books like a sword needs a whetstone.”) I don’t get much rest. I’m always leveling up on something.

“What can I do to help?”

For me, there’s no need to do anything. I’m not a victim. I’m an extremely competent technologist and I am beating this.

What one can do is to prevent the sorts of things that I’ve described from happening to anyone else in the future. Blacklisting or attempted blacklisting are never acceptable, not for any reason.

Certainly, if you’re aware of a unionist blacklist at your company or between companies, share it with the press. If a company conducts a back-channel reference call (except in the context of a federal security clearance, where those are ethically acceptable) then make sure the target knows who was called and what was said. Inform that person of his or her rights under the law.

When people are denied jobs for reasons other than the ability to do the job (e.g. “culture fit”, unsubstantiated rumors, illicit back channeling) it is critical to let them know what happened, and that they should consult an attorney to see if they have legal recourse. This is the only way to actually enforce the laws against unionbusting, blacklisting, and discrimination in employment.

“In the end, what did you fight for?”

I must confess that some of what I am is a product of accident.

At Google, I’d been told that the company was open to internal critique and, since it was my first big-company job, I was naive and took that directive literally. Random chance landed my name on a list of suspected unionists. This has had a positive effect on my work ethic (since I’ve had to work much harder than most of my peers, just to survive) but a negative effect on my view of human nature.

Through that experience, I’ve become committed to pursuing social justice in technology. Not only is it the right thing to do from a moral perspective, but it’s the first step toward something bigger: taking back our industry.

If we can make the technology industry more diverse, and fair, and truly meritocratic, then we can solve more important problems and build better products. We can get ourselves out of the rut that has us implementing the ideas of wealthy, greedy actors who have no love for technology, and we can kill the build-to-flip business culture. In doing so, we can replace our industry’s current form with one that a person can actually believe in.

The software industry handles failure poorly.

The startup world venerates failure. Well, sort of. It venerates business failure, so long as it happens in a way that the investors approved of. The VCs will manage the careers of the founders, and no one else involved with the failing company (i.e. the people who lose jobs) is considered to matter. “Failure is awesome!” “Go out and fail!” There’s plenty of talk among technology people about “failing fast”, and yet what I’ve observed is that our industry handles failure quite poorly.

Of course, the software industry is full of failure. In the corporate context, the term failure is somewhat subjective, but I’ve heard that 60 to 80 percent of software projects fail. I don’t doubt that number at all. I’m going to put it higher: I’d guess that 97 percent of software efforts fail. Admittedly, my standard of success is high. How do I get to 97%? Well, about 40 percent of projects are cancelled before they deliver anything. That’s a clear case of failure (“total hard failure”). Another 30 percent can be salvaged and reformed into something, but underperform the expectations set based on their headcount and resources, and are written off as embarrassments (“partial hard failure”).

In other words, 70 out of 100 software projects are hard failures. That’s not controversial. Of the remaining 30, half of those (or 15% of the whole) are soft failures: well-made, good products that go unused, or are regarded as failures, for purely political reasons. In other words, there was no technical failure, but the product did not succeed politically. The software itself was just fine– possibly, quite excellent– but it’s not going to get an engineer promoted. This leaves 15% that could be seen as successes. Of them, four-fifths (12% of the whole) devolve quickly into disliked legacy monsters that, even though they get launched and become critical to the business, are viewed with enough disdain that, while managers of the projects might get promoted, the engineers are under constant criticism for the shortcomings of the system (which may have more to do with time constraints than engineering shortfalls). This leaves only 3% of software projects that are able to (a) solve the problem, (b) do so using a resource budget deemed acceptable, (c) succeed politically, and (d) continue to be well-regarded enough to make the case for an engineer’s promotion. The other 97% of projects? They add some value to the business but, from a careerist perspective, they’re a total waste of time, because they don’t confer the political capital that would give their engineers the credibility to work on better, harder, more critical problems.
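The arithmetic behind this taxonomy is easy to check; a sketch using the essay’s own estimated percentages (estimates, not measured data):

```python
# Tallying the essay's failure taxonomy. The percentages are the
# author's estimates, not measurements.
buckets = {
    "total hard failure (cancelled outright)": 40,
    "partial hard failure (salvaged but underperformed)": 30,
    "soft failure (good software, failed politically)": 15,
    "hated legacy (launched, then disdained)": 12,
    "career-making success": 3,
}

# Hard failures are the first two buckets combined.
hard = (buckets["total hard failure (cancelled outright)"]
        + buckets["partial hard failure (salvaged but underperformed)"])

print(hard)                   # 70 -- the "hard failure" share
print(sum(buckets.values()))  # 100 -- the buckets partition all projects
```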

Given the low success rate of software projects, even though many of those causes of failure aren’t the engineers’ fault, it’s probably not surprising that this industry has such a high burnout rate. Our success rate is comparable to what one might expect out of an R&D lab. No one holds a researcher in low regard if many of her ideas fail; if that doesn’t happen, that often means that her aims aren’t ambitious enough. Sub-50% success rates are expected in R&D, with the understanding that the successes will pay for the exploration costs (a polite term for the costs of the failures). We, however, are held responsible and sometimes fired for software project failures, which are presented as never acceptable.

The 70% that I call hard failures can lead to project cancellations and, in bad economic times, lost jobs. So can the 15% that are soft failures (i.e. working software rejected for political reasons) although it is less common. Successfully completing something that goes unused will not get a person fired, but it does not protect a person from layoffs. The 12% that evolve into hated legacy assets rarely end jobs. In fact, they create jobs in the company, but they’re usually undesirable maintenance jobs. Those projects turn out well for managers but, for engineers, it’s rare that they make a case for promotion. Managers can say that they “successfully delivered” these projects and it’s not really a lie, but the programmers are expected to stick around and maintain them. Taking this all in, one sees that the projects that can really make a case for someone as a programmer require a large number of things to go right, and almost nothing to go wrong.

With the battle scars that come with age, programmers tend to develop prejudices that make little sense to anyone else. I’ve heard people say that it’s “impossible” to write production code in a dynamically typed language. It’s not. I personally prefer static typing, but high-quality production code is written in dynamic languages every day. I’ve heard others bash practices and tools based on previous experiences with software failure. It’s multiple-cause attribution, a habit that makes sense in some settings but not in ours.

When negative outcomes are rare, it’s sensible to attribute them to all causes. Let’s consider traffic accidents. On a drive, the positive outcome (no collision) is common and the negative one is very rare. Most traffic accidents have multiple causes: the driver was exhausted, it was snowing, and the intersection was poorly designed. It’s rare that such accidents happen because one thing goes wrong; it usually takes multiple things going wrong at the same time. The conclusions that are drawn, however, are valid: driving while exhausted is dangerous, snow makes driving more dangerous, and badly-designed intersections are dangerous. Blaming all causes makes sense.

However, when negative outcomes are common, it’s often because a single cause can ruin the whole thing. This is the nature of software. True successes are damn rare, most projects fail, and small things can cause great work to be all for nought. This is worsened by the fact that, in the corporate theater, people in power can easily obfuscate the causes of things going wrong (or, to put it more bluntly, shift blame). In the example of the traffic accident, there were multiple causes that were genuinely involved. In a corporate failure, only one cause is necessary to make it happen, but the executives can generate multiple causes that might have been involved, and they’ll frequently do so, for obvious self-serving reasons.
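A toy model (my illustrative numbers, not the essay’s) shows why single-cause failure modes make success rare: give a project a dozen independent ways to die, each individually unlikely, and overall survival collapses anyway.

```python
# Toy model with made-up numbers: twelve independent single-points-of-
# failure, each with only a 10% chance of firing. Any one of them
# sinks the project, so success requires surviving all of them.
risks = [0.10] * 12

p_success = 1.0
for p in risks:
    p_success *= (1 - p)  # survive this risk AND all the others

print(round(p_success, 2))  # ~0.28: minor risks compound into likely failure
```

This is the asymmetry the paragraph describes: in a multiplicative regime, any single factor can dominate the outcome, so blaming every factor that was present (as one reasonably does with rare events like traffic accidents) stops being sound inference.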

Let’s say, for a concrete example, that a software project is started in Python, because it’s what the engineering team knows and likes using. (It’s not my favorite language, but it works.) Four months in, there’s an executive shake-up and a new CTO comes in with a strong pro-Java bias, because it’s what he knows. In addition to this technical bias, he needs to Make Decisions on his first day, “to prove he’s serious”, so he waltzes over to one of the teams under him (this team) and, without taking time to determine why the team chose the tools that it did, he says, “There’s no way that Python can scale; use Java instead”.

What happens? Over time, the strong Python engineers leave, while the weak ones stay and begrudgingly switch over to Java. Additionally, the language overhaul turns out to be more complicated than expected and, due to time constraints, it can’t be completed. So the project ends up being a Python/Java hybrid app with poor communication between the two halves. The app is delivered, but it’s slow and buggy because the good Python engineers have left and the weak ones weren’t able to write performant code in any language. The real cause of the failure? A CTO who interfered with a functioning team. The official version of events? The CTO was right; Python simply “cannot scale”. It can’t be the idiot CTO that killed that project; Guido van Rossum himself did it.

What’s worst isn’t just that this pattern of behavior exists, but how quick software engineers are to buy into the “official” story, and attribute negative outcomes to the technologies that their projects relied upon. I’ve met so many software engineers who absolutely hate technologies that weren’t to blame, at all, for the project failures attributed to them. Did the project really fail because of Python? Or because test-driven development just takes too damn much time? Or because XML is a hideous mess of a standard (which it is, but that’s almost always irrelevant)? I strongly doubt it. In almost all cases, I’d guess that the project failed because of managerial or political factors (read: executive incompetence) and that technologies were simply blamed because the “official” narratives get written by (no surprise) executives. Programmers and technologies have been a dumping ground for executives’ failures for decades and, as bad as that is, it’s even worse that so many programmers are stupid enough to believe these official narratives– and go on to future jobs and spout idiocies like “Python cannot scale.”

Software project failures are painful, especially for programmers. Although failure is common in our industry, we don’t really get permission to fail. Companies can’t fire everyone when a software project fails, because even the most evil companies don’t want that much turnover. Still, if a project lands in hard failure, it’s pretty much a guarantee that at least one person has to lose a job, and the manager’s going to make sure that it isn’t him. Soft failures don’t necessarily end jobs, but they stall careers, make people nervous, and push people to quit.

After a few years in this industry, everyone has enough experience with failed projects (including, again, successful projects that failed politically or that devolved into mediocrity for political reasons, especially including feature creep) to have battle scars, and that gives each of us a long list of technologies and patterns that we find ourselves hating. Don’t get me wrong, either: many of our bugbears are legitimately hideous. I can’t stand the singleton directories (e.g. “com”) of Java projects, the broken handling of loop variables in Python closures, or the monkey-patch facilities of Ruby. These are all ugly things, but I think it’s extremely rare that any one of them ever single-handedly caused a project to fail.
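Of the bugbears listed above, the Python one is concrete enough to show in a few lines. It is standard late-binding behavior, not anything project-specific:

```python
# Closures capture the loop *variable*, not its value at definition
# time, so every lambda ends up seeing the final value of i.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2] -- not [0, 1, 2]

# The conventional workaround: freeze the current value by making it
# a default argument, which IS evaluated at definition time.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```

Annoying, certainly, and a real source of bugs; but, as the paragraph says, not the kind of thing that single-handedly sinks a project.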

To me, it’s weird and inconsistent. The startup world venerates failure, but only when founders do it. A VC-backed founder who steers a business into a reef is lauded for taking the risk and either “acqui-hired” or given another company. A programmer who fails (or, more often, who has the misfortune of working on a project that fails, which is often not her fault) is just regarded as a loser. Programmers are expected to have career narratives of one success after another. This is inconsistent with the fact that about 40 percent of software projects fail totally, and another 57 percent fall into murky non-success or uninspiring mediocrity. Worse yet, programmers often don’t know (due to managerial secrecy and the pervasive cover-up culture) the real reasons why their projects failed and, even if they do know, can’t speak the truth in public or in a professional context such as a job interview (where “bad-mouthing” an ex-manager is taken to be a crime as serious as murder or rape). Thus, they come up with a myriad of faulty explanations for why things went badly on previous projects. “What we learned… is that Python cannot scale.” That’s not true at all, but it’s more socially acceptable to say that on a job interview than to say the truth for most values of “the truth” (which involve frank incompetence at executive levels).

Writ large, these non-truths get repeated often enough that they generate a cloud of tech hate. Every language, every framework, every tool, just sucks. Python “can’t scale” and Java programmers “are all morons” and C “is insecure and dangerous” and Haskell “is impossible to hire for”. That’s the narrative. Some tools do suck, and far more often a tool is inappropriate to the purpose (e.g. Java when one wants rapid development, or Python for low-latency systems) to which it’s put. However, nuance is lost in the cloud of embittered tech-bashing, and the image that it projects of our work as a whole (and of us) is that most of what we build is pure garbage. Is it any wonder, with such a culture, that programming remains a low-status profession?

Pyramid vs. obelisk

I may not be able to save the world, but I think I’ve come up with the fundamental idea that will be necessary in order to restore (and, afterward, preserve) the integrity of, at the very least, the software industry.

Software may or may not be “eating the world”, but we’re learning that people who can rigorously solve problems using precise thought ought to be in high demand. Such people need not necessarily be writing software programs per se, but we’re in a world that cannot grow based on ego and bluster (the traditional traits used to select business leaders) alone. That approach has been played out, and we need competence to get any further. There are a variety of literacies that a leader needs in order to do her job well, and that set of needed capacities is too large for me to tackle right now, so let’s zero in on one important job, which is software engineering. As a society, we need it. We need it, done well. Unfortunately, as a society, we seem to be getting worse at it.

The problem is that most of the people who employ programmers have seen fit, over the past two decades, to diminish this kind of work as much as possible. There have been so many efforts to replace professional programmers with interchangeable, unqualified peons that it is, frankly, sickening. This is, of course, the real purpose of the “Agile Scrum” fad that has infested our industry, as well as the fetishism for open-plan offices. Dan Lyons’s new book, Disrupted, describes the ageist culture that emerges when we allow the industry to drive software, rather than the other way around. We’ve seen competent engineers driven out in favor of cheap, young, and (most importantly) obedient commodity-grade programmers hired to inflate headcount numbers and appease investors. It needs to stop, and now, before this industry gets so thoroughly infested with incompetence and anti-intellectualism that a generation’s worth of work and experience will need to be discarded. So how do we stop it?

Although it shouldn’t be this way, software engineering is pyramidal in shape. For every engineer who gets to work on a real project, there seem to be fifteen grunts only trusted to churn through Jira tickets. We need to reform our professional structure to be more like an obelisk, instead.

Architecturally, a pyramid is the most basic, inefficient way of achieving prominence (or, less pedantically, height). It’s imposing, it’s gigantic, and it historically required massive amounts of slave labor as well as natural resources. Ancient Egyptians built pyramids because they didn’t have the architectural know-how (namely, arches) to build much else at scale. In economics, one invokes the pyramidal shape when describing arrangements that deliver massive yield to the few (at the top of the pyramid) while exploiting the larger lower classes (at the bottom). Hence, the term “pyramid scheme”. One could argue that pyramids (of both kinds) are entropic. A pile of sand or stones will assume an approximately conical shape, and it’s likely that this inspired the (architecturally unsophisticated) pyramids of Ancient Egypt. Likewise, an ill-organized or disorganized set of people will devolve into an oligarchic arrangement, with many supporting the few.

Professional pyramids mean that positional scarcity will exist at every level in the profession (the term “profession”, here, used loosely) as the number of positions declines as an exponential function of experience. It’ll be as hard to stay in the profession as to get in. Except for sociopaths who enjoy social competition for its own sake, most people find this undesirable. Doctors and attorneys wouldn’t be happy if they had to continue fighting for fewer spots with each year of age, even 30 years after graduating. Not only would it be bad for individuals to have that degree of positional competition, but it would also be a waste for society. To push out 50-year-old doctors simply because they “failed” to arrogate managerial positions would be to discard hard-won expertise, and it would generate a culture of inexperience.

Left to their own devices, management hierarchies become pyramidal. If a company of 10,000 people decides on a 12:1 reporting ratio (as is typical, in software) then it’s going to have 9,167 grunts, 764 first-level managers, 64 directors (managers of managers), and only 5 people at the executive level as traditionally defined. Why can’t there be 100 directors instead of 64? There could be, but few companies would want to pay for them if given the choice. That pyramid seems painful, and it would be, if that’s how companies worked. At the top, they often don’t, and I’ve written before about the superior collective intelligence of MBAs as opposed to software programmers. People with elite MBAs aren’t stupid. They realize that they’re going to get screwed under the traditional pyramidal regime. Not only does the pyramid system make it unlikely for people to advance, but it also leaves the people in high positions unprotected, because there are too few of them. CEOs promote loyalists not because the companies need so many executives, but because the CEOs are smart enough to realize that sharing rank and prominence is essential for their own survival.
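The headcount figures above follow from a geometric series; a sketch of the back-of-envelope math (the 10,000-person company and strict 12:1 ratio are the essay’s assumptions, and the function name is mine):

```python
def pyramid_layers(total, ratio):
    """Split `total` heads into layers where each manager has `ratio` reports.

    The bottom layer w satisfies w * (1 + 1/r + 1/r^2 + ...) == total,
    summed over however many levels the headcount supports.
    """
    # Geometric-series estimate of the bottom layer.
    k, denom = 0, 0.0
    while total / ratio ** k >= 1:
        denom += 1 / ratio ** k
        k += 1
    layers = [round(total / denom)]
    # Each successive layer is 1/ratio the size of the one below it.
    while layers[-1] > ratio:
        layers.append(round(layers[-1] / ratio))
    return layers

print(pyramid_layers(10000, 12))  # [9167, 764, 64, 5]
```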

One could argue that business executives, as a group, “conspire” against their companies by working together in order to convince their organizations that they’re more needed than they really are, and that more of them are needed than really are, and that they themselves as a class (executives) can recognize talent in a way that no one else (not shareholders, not workers) can. By doing this, they’ve managed to create massive demand for themselves and their skill set (which is perceived to be more rare than it actually is). On this topic, I’m hesitant to use the word “conspire” because I don’t consider it morally wrong. Corporate life has become a culture of resource extraction, and the executives have simply been the best players at that game. Ultimately, there are very few people who would hesitate, if offered a magical device that would imbue in their employers an extreme and superstitious overvaluation of their work, to use it for personal benefit. Executives have simply managed to do exactly that as a group, rather than as individuals, the latter being far more difficult.

Corporate executives, based on manufactured demand, as well as doctors and attorneys based on actual demand, have taken their leverage and replaced the pyramidal structure with a professional obelisk shape. An obelisk is upright, rising far higher than a pyramid of comparable mass could, until the pinnacle. With a professional obelisk, there’s plenty of room to grow until one encounters positional scarcity. Doctors don’t need to beat down other doctors and climb management hierarchies just to advance; they just need to gain more skills and deliver more value. Likewise, MBA-toting corporate executives might face stiff competition if they want to be Fortune-500 CEOs (because there are only 500 such spots) but they deal with minimal competition if they just want 7-figure incomes and fancy titles by age 50.

An obelisk structure shows that a value is placed on expertise: as long as people continue to improve themselves, there’ll be room for them at a level higher than where they are. Progress will be recognized. Why does the medical profession intentionally create an obelisk-shaped, guild structure? It doesn’t want “Agile” 17-year-olds performing surgery and driving out competent surgeons who didn’t pursue management jobs. It also doesn’t want people taking obscene risks due to an up-or-out culture that persists into one’s 40s and 50s. Finally, it doesn’t want a culture where the median practitioner has less than a decade of experience.

The pyramid is the entropic formation. Without a deliberate professional structure, it will emerge. Why so? An unprotected “profession” (note the oxymoron) will expand until it absorbs the most unqualified people, who draw barely more compensation than they’d get anywhere else, but who are able enough to convince people (unqualified buyers) to buy their services that they can stay afloat. The unqualified and minimally invested (the “quacks”) will, by sheer numbers, outvote the genuine practitioners (who’ve often dedicated their lives to becoming good at something) and dominate the culture– until their incompetence starts hurting people. In medicine and law, this outcome is considered so unacceptable that it’s not controversial for these industries to have professional structures that function as labor cartels. We need to start thinking the same way about software. We need to start driving out the charlatan brogrammers for whom the infantilizing open-plan startup culture was created.

So how do we get an obelisk-shaped, protected profession for software engineers?

Let’s focus on something that hasn’t worked, and look at why it has failed. Many companies have created “engineering ladders” that purport to measure professional development for individual contributors and allow people to grow without becoming managers. There might be a manager-equivalent title (Staff Engineer) and a Director-equivalent one (Principal Engineer, or Architect) and a VP-equivalent one (Fellow). What goes wrong? To start, the disparity in difficulty makes the whole thing extremely self-congratulatory on the part of the managers.

To become a Director- or VP-equivalent engineer, you have to compete against some of the smartest people in the company. It’s legitimately difficult. There are smart, qualified people who don’t make it. If anything, these dual-track systems serve the interests of management by allowing the executives to overstate their value and capability relative to engineers; if one has to be a genuinely great engineer to become Director-equivalent, the thinking is, then engineers will hold Directors and VPs on the much-easier-to-climb managerial tracks in higher regard.

Ultimately, these systems don’t work. Rather, they serve to document and justify an existing pyramidal scarcity. The organization will only allow a small number of engineers to climb into positions with the credibility and compensation afforded to senior managers, because engineers lack the collective organizational skill to thwart the penny-pinching game being played against them. Thus, a pyramid forms, and little is solved, because positional scarcity persists at each level.

I’ve been around for long enough to know that trusting employers is not a strategy. In the long term, it always fails. Don’t get me wrong: there are good companies out there, and there are managerial strategies (e.g. open allocation) that can improve the odds of creating one. Some individual employers probably can be trusted, but this doesn’t scale, because most organizations become pathological and self-serving at some point. So, if we can’t trust employers, then we can’t put stock in employer-assigned titles as a way to ensure the credibility of the individual programmer. That means that we need an employer-independent system for doing so. We need a source of credibility for the individual that no employer can touch.

Doctors and attorneys have employer-independent credibility, being vouched-for by their professional associations (in the U.S., the AMA and the ABA). That’s necessary because the core of a profession is that one’s ethical obligations supersede managerial authority. That is, a doctor can’t use the superior orders defense to justify killing a patient. Of course, that only works if the professional has some sort of employer-independent credibility– in which case, losing a job is a transient financial annoyance rather than a possible career killer. If the individual lives entirely on his reputation as made by prior managers, his lack of leverage becomes a lack of moral agency. That is, of course, the problem that most programmers face. We lack moral or even creative autonomy, because we lack employer-independent credibility; once we leave school, our value is derived from the titles and recognitions given to us by profit-seeking corporations.

Some programmers hoped that the open-source world would fill this gap. It can’t. That’s not its job. Evaluating a programmer’s Github profile is a subjective task, and the only objective way of measuring one’s open-source contributions is based on how many people use the software. Here’s the issue, though. Few engineers have the time and resources available not only to build usable open-source projects, but to market their work to the global programming community enough that it is actually used, and then to support it once it is. The positional scarcity and power-law distribution of open-source prominence emerge not because the open-source community is full of mean people (it’s not) but because (a) there’s no barrier to entry when it comes to just putting code out there, while (b) most of this code will never be used, because people have an entirely understandable (and, in fact, prudent) aversion to using unproven software.

How do we create an employer-independent credibility for software engineers? As far as I can tell, there are two solutions. The first is one that I hate: to require formal, institutional education. I hate it because it’s hostile to career-switchers, astronomically expensive, and can generate a culture of anti-intellectual credentialism. This leaves the second, which is to create an exam-based system, similar to what the actuarial sciences use. As programmers pass more exams, their professional credibility can grow in an employer-independent way. This would not only give us job security, but the improvement in our general leverage would enable us to negotiate for better working conditions and launch better projects, to the benefit of society as a whole.

Are there problems with exams? Of course there are. For one thing, I happen to believe that “bad test taker” is a real thing. (I was always a good test taker, but I’ve met intelligent people who aren’t.) We’d have to adapt our system to legitimate learning disabilities and to allow project-based alternatives for great programmers who struggle with traditional exams. We’d need to decide what to charge for the exams (as well as the presumably more expensive project evaluation) and how to offer financial aid. We’d have to decide what would be on the exams, and how many there would be. (I, personally, doubt that I’d be qualified to design such a curriculum.) What is an Associate-level programmer, and what’s a Fellow-level engineer, and what do we expect each to know? What are the continuing education requirements going to look like? These aren’t easy problems to solve. On the contrary, they’re quite difficult and organizationally complex.

Here’s the thing, about that. Designing a curriculum and exam track for software engineers is a massive amount of work, much of it ugly. I doubt I’m up to it, personally; I’d rather have it built by experts in their respective fields of computer science. However, the core idea here is one that will actually work. Blindly trusting employers hasn’t worked, and fetishizing startups has been a hilarious failure. Over the past twenty years, the programming profession has lost ground. We’ve been infantilized with open-plan offices and “Agile Scrum” practices that senior engineers don’t need. We’ve been left without any form of employer-independent credibility, which is what we’ll need if we want to become a true profession. We’re in a state of squalid failure. Our professional shape is a pyramid rather than an obelisk, making the job painfully political at every level. I’d rather see an imperfect solution (such as a professional curriculum and an employer-independent, exam-based advancement system) that fixes this than no solution at all. This problem won’t be solved by one person and it certainly won’t be solved within a year, but I’m confident that I know, at least, where to start.

It’s time to stop hating and start learning

There’s a strong resentment that software engineers and technologists have toward “non-technical” people or “business people”. We compare our situations, and our impulse is to find the arrangement unjust. Based on fundamentals, things are a bit off. We technologists work harder, tend to bring more in terms of natural ability, and yet “they” (meaning executives) make three or ten or fifty times as much, for considerably less work, and they have a lot more job and career security. We learn new programming languages and tools (as our jobs, constantly evolving, require) on our own time, and sometimes sacrifice weekends to tackle production issues. They, on the other hand, can work 11-to-4, because their bosses care more about in-group cohesion than what gets done. We get pulled into meetings that we can’t say “no” to. They call those meetings. We live in fear of “Agile” processes being imposed in order to micromanage us. They decide how we are managed, and what we work on. We have little say in how our performance is evaluated. They define their jobs and, unless their assertions are utterly unreasonable, write their own performance reviews, easily justifying raises. We start struggling to get jobs at age 40. They hit their peak in their late 50s. It does seem unfair.

Why are “they” winning? It’s a lesson in collective intelligence. It really matters. We beat them, pound for pound, on individual intelligence– but that doesn’t matter. Often, we use our individual gifts to compete, stupidly, with each other. We need to stop being resentful of “business types”, and start learning from their superior collective intelligence. What are they doing that we aren’t? We’re working harder and we are smarter, individually speaking, but they are working smarter, which is what actually matters.

Look at a typical Harvard or Stanford business school class. Some of the students are brilliant, many are above-average, and some are depressingly mediocre. Perhaps 7 to 10 percent of them have the natural ability to do what a strong programmer can do, and yet almost all of us could do the 6- and 7-figure jobs that they move into. They’re not idiots by any stretch of the imagination, but only a few would be notably sharp by a capable programmer’s standard; some would be our peers, very few would be superior. In spite of this, because they pool resources and connections, they move along to high-status jobs earning $300,000 per year or more, and a number become CEOs earning an order of magnitude more. The ones who become the venture capitalists who run technology are, in fact, the runners-up; they’re the ones who didn’t get into stat-arb funds or private-equity firms or Fortune 500 CEO positions.

I’m not pointing this disparity out for the sake of denigrating anyone. In fact, it should become clear that I intend the opposite: it’s impressive what they’ve achieved as a collective, and we should learn from it. What is it about these people that has them outperforming us, as programmers and technologists, in terms of prestige, economic wealth, and general respect? Can you even imagine a Harvard MBA justifying his own working time in terms of two-week “sprints” or “iterations”? Of course not. It’s inconceivable. So what did they do right that we did wrong?

Rather than getting angry about their superior status and compensation, and our inferior lot, we might do better to observe what these people have done in order to get where they are. It seems to be a lesson in group power and intelligence. Politically speaking, a group of people who protect and assist each other, without question, will always do better than free agents acting alone, no matter how talented the latter group may be.

Like anyone else, I have my bitter moments. With my intelligence as well as the pedigree that would come from, say, Harvard Business School, I’d have an invincible combination. So perhaps I should have done that, instead, during the years that I spent goofing around in startups and tech companies. Had I taken that other route, I’d probably have my own company and be calling shots, instead of spending so much of my career in reaction to others’ suboptimal decisions.

I didn’t, and at 32, it’s probably too late to take that route. (An MBA doesn’t open the same doors when you’re old, and I’m a non-drinker.) When I was the right age for it, I was too idealistic. I spent my mid- and late 20s in a startup, and then Google, and then another startup, because “technology is the future”. That is true, of course. Technology is the future (or, at least, an integral part of it). However, an MBA gives you a future. As for 10 or 20 years in technology… I’m not so sure. With “Agile” and open-plan offices (both designed around the rapid hiring and facile control of cheap, fungible, talentless engineers) becoming the norm, I don’t think that expertise or technical excellence are valued anymore. I’d love to see this change, but it looks like an uphill battle.

To a large degree, though, I’m committed to the technical vision. As lucrative as it could be, were it still an option, I don’t want to be crashing third-world currencies for sport. I want to do something useful and important; that matters more to me than the difference between “comfortable” and “rich”. So, the fact that I’ve “lost” those 7-figure, world-hurting opportunities is hardly something to be upset about. What I don’t want to lose is the battle for the soul of technology.

We’ve let this industry be invaded by the worst kinds of psychopaths. Programming used to be an R&D job; now we have to deal with two-week “sprints” and “user stories” and, worst of all, the sorts of mediocre, mindlessly obedient colleagues who will put up with that nonsense (and for whom such processes are designed). That brings discomfort and humiliation as well as financial mediocrity. Writ large, it also makes us an underperforming industry. Innovation isn’t possible under “Scrum”.

The picture is bleak, but it doesn’t mean that we should abandon technology. It’s fucking important. It’s worth fighting for. What we do, as programmers, isn’t just the production of software. We solve problems. We throw ourselves, day after day, into a kind of work that requires precision of thought (a trait that not many people have developed). That makes us the lifeblood of an advanced economy. The status of our industry, therefore, is worth getting right. We ought to fix it.

The extremity to which we take our tribalism, however, does us no favors. We’ll have to work with people who aren’t like us in order to get anywhere. Don’t get me wrong: the upper ranks of our industry have been invaded by narcissists and psychopaths, and I want to drive every single one of them out. What I don’t like about us is our tendency to downgrade non-technical accomplishments. Our tribal antipathy toward “business people” is one example of this. I’ll admit that I’ve been guilty of this in the past. I’ve said a lot of negative things about “business guys” before but, as I get older, I recognize that there are a lot of intelligent, well-intended people who’ve dedicated their careers to solving business problems instead of programming ones. Pretending that we’re better, just because we can write code, does us no good. It only deprives us of possible allies, because we’ll learn a lot more and get farther if we can work with other people who have the skills we lack. Worse yet are the idiotic sectarian divisions we’ve let form within the programming community: “Agile” versus “enterprise”, tabs versus spaces, Ruby versus Java, women versus “brogrammers”, young versus old, Bay Area versus “the sticks”. We can’t afford this. These divisions make us weak and manipulable. Many of them don’t make any sense. The quarreling turns us into a low-status group of people. Instead, we need to unite around our shared interests.

Why do corporate executives get so much more respect, autonomy, and money than us, despite working significantly less than we do? Because they don’t let people divide them. It’s that simple. They protect each other, and this makes them collectively strong. A graduate of Harvard Business School will place his least favorite classmate in an executive role, for the good of the group; by contrast, on our side, I still get shit on for stupid things that I said at Google five years ago.

A friend of mine worked at a large bank with “360-degree” performance reviews. Ratings were on a 9-point scale with 5 as the intended average. The lowest-rated group was technology, where the average person got about a 6.5. The salespeople and executives and business analysts all gave each other high ratings, regardless of whether they thought each other “deserved” high scores, and so all of the “soft skills” departments came in with averages above 8.5. The programmers gave each other “honest” ratings, which was collectively stupid: it showed a willingness to rat each other out to management. The business people had the collective and political intelligence not to put bad scores, even when they were deserved, on the record. In most of the bank, people wrote reviews for themselves and had them signed by peers. There was a mutually beneficial (i.e. positive-sum) trade of credibility going on. Business people are smart enough to give criticism verbally, off the record, and to give glowing reviews when on the record. They recognize that preserving the credibility of the group is one of the most important things that they can do at work. Not so for programmers.

I can imagine that some programmers will read this and say that it proves that we are the honest ones, and that “business people” are corrupt in some way. I disagree completely with that viewpoint. They played politics effectively, and the programmers didn’t. End of story. The business people (traders, investment bankers, salespeople) saw that the executives were trying to collect data that could be used against them, and so they jammed the signal in order to minimize the probability of harm coming to anyone. The programmers played politics poorly, because they couldn’t keep their mouths shut, and probably got hit harder in the next round of layoffs.

This kind of “honesty”, shown by the programmers, I find to be non-virtuous. While I dislike deception, I also think that many forms of “honesty” are harmful, serving only to cause needless drama and friction. If I find a stranger unattractive, I am going to hide that reaction, because I’m not a jerk. What good would it do, to voice that opinion? None at all. Not all information deserves to be shared and some information should be concealed, even if using half-truths or (on occasion) lies. Likewise, I’m not going to sell out someone I perceive as a low performer unless he’s an existential threat to my career or to the company. If his only crime is to draw a salary of X and furnish work that could be hired at 0.9X, who cares? It’s not “honest” to rat him out to management as a below-average performer; it’s stupid. It would only make a mess.

Programmers are “honest” when it comes to reviewing each other’s performance (which is not to say that they’re correct) and it tends to have three effects. One is that it creates drama. The business people realized that if everyone got top ratings and glowing verbal feedback, the performance-review system could be rendered a formality and they could all get back to work. Problem solved: it becomes a once-a-year annoyance that takes up half an afternoon, rather than something that people fret and grouse about for months beforehand and afterward. The programmers, by being stupid and back-stabby, gave the performance reviews teeth, and created drama that lasted for a long time. The second is that it creates division. Numerical performance scores lead to false objectivity and that leads to permanent rifts within the company, as people withhold information and form alliances in order to game the system. Thirdly, it makes programmers look incompetent. If they’re getting the lowest ratings (because they’re slamming their peers, reports, and bosses) then the organization concludes that they must be the worst people in the firm. They’ll never get respect. In the Philippines, this is called the crab mentality: if programmers worked together, they could escape their low professional status (the bucket); however, the constant fighting for slight, typically transient, individual superiorities keeps them all trapped.

Most doctors have peers whom they consider below-average in competence, of course. The same is true with lawyers. It’s also true of business executives, and professors, and everyone else. There’s no professional group in which there aren’t people who consider some of their colleagues to be idiots. Even still, they’ll never say such things about each other, in a context where it could hurt that person’s professional future, except in the case where someone is unethical or dangerous. If you’re seriously ill and have a team of doctors, they might not all like each other, but they’re not going to show it in front of you, the patient. Programmers, unfortunately, don’t have this sense of professional solidarity. They’ll call another programmer a “fucking idiot”, in front of a manager or executive, because they don’t like his indentation style. It’s so easy to turn us against each other, and we all suffer for it.

To continue the example above: on numerical performance scores, it’s obvious to everyone except for the programmers that those don’t exist for the employees’ benefit. (Programmers are too earnest, and actually believe the company when it says that such a management change exists for mutual benefit.) True high performers have no need or use for numerical scores, low performers (or those labelled as low performers) get fired, and the people in the middle get varying degrees of humiliation: being tagged as “good but not promotable” (70th-90th percentile) or “solidly mediocre” (30th-70th percentile) or “probably disposable” (10th-30th percentile). The salespeople and traders and equity analysts at this bank understood, when this numerical peer-review system was rolled out, that a game was being played against them, so they jammed the signal with positive reviews exchanged for other positive reviews and, collectively, won. They created a Lake Wobegon illusion: everyone was above average. The programmers didn’t. They ratted each other out to management– for nothing in return. Consequently, they lost. As a group, this left them with less autonomy and more managerial oversight, which meant more of that thing that programmers claim to hate: politics. We all know that corporate programming is extremely political and hate the fact, but few of us have insight into why it’s that way. It’s because we’re so bad at politics that political problems remain unsolved for years. If we simply learned how to get good at politics, we’d end up spending less time on it, and have more time and emotional energy for the work we enjoy: creating things.

It’s always best, for the workers, to leave management in the dark as to which people are the high and low performers. Obviously, low performers benefit from this. (“But who cares about them?”, asks every reader, “Since that couldn’t possibly be me.”) What’s less obvious is that high performers also benefit from opacity of the performance hierarchy. Why so? If management knows (or thinks it knows) exactly who the high performers are, it only needs to treat about 10 percent of the people being managed with any dignity. The middle 80 percent aren’t perceived to have the credibility that would make it risky to mistreat them. Making things miserable for the middle 80 percent isn’t management’s goal, but it can be an effective way of exerting leverage over the top 10 percent, who might find themselves thrown back into the mid-pack (by, say, being assigned less glamorous work) if they aren’t obedient. Companies want their middling employees to be unhappy with their status. Why? An analogy would be the decline, over the past 20 years, of service and comfort in economy-class airline seats. The airlines don’t do that because they hate economy-class passengers (they don’t care, either way, about them) but to sway those who can afford business- or first-class tickets, but who flew in lower classes in the 1990s when economy-class flight was more tolerable. Employers make it annoying to be a 50th- or 70th-percentile employee not because they hate their average people (again, they don’t care either way) but to put some fear into the top performers. However, if management is somewhat unsure about who is a top performer and who is not, this forces it to treat a larger number of employees with dignity as each might be a key player.

Furthermore, I’ve observed (from many angles) a number of terminations and what I’ve noticed is that it’s neither low performers nor high performers (“tallest nail”) who are most likely to get fired or laid off. Companies can adapt to consistent low performers, and they aren’t stupid enough to throw high performers overboard unless they burn out. Instead, people are most likely to be fired when there’s a change in performance level. Low-to-high is dangerous because shortfalls that were once blamed on inability, and forgotten, can be perceived to have been deliberate. It can open old wounds when someone who was once tolerated as the runt of the litter is now in contention for meaningful projects or promotions. High-to-low is even more dangerous, because it’s seen by management as an immediate risk to morale. If management sees a 90th-percentile worker drop to 60th, she’s still productive, but the fear of contagion will encourage management to isolate her and possibly to fire her. The fact that a 60th-percentile worker adds more business value than her salary costs doesn’t matter; management is more interested in holding up the masquerade of every employee working as hard as he possibly can, which is challenged when a former top performer falls into the mid-pack.

When someone drops from the 90th to the 60th percentile, that person’s literally below expectations. Literal-minded engineers are wont to turn on her, for that reason. This is, in my view, stupid. We all have fluctuations in performance. It’s going to happen to every single one of us: that experience of setting high (and unsustainable) expectations, until reality intrudes, is universal for people who try to do hard things. It’s a part of the human experience. I’d much rather help this person, protect her politically if I can, and have her as an ally. She’s already proven that she’s highly capable and, when she gets her groove back, I’d like to have her on my side. This is how MBAs work with each other, but it’s not how programmers think. This is precisely the collective intelligence that I’m talking about.

It is, I’d argue, often politically beneficial to protect a harmless low performer, even if you’re a high performer. Now, I’m not talking here about negative performers who cause problems, either due to incompetence or malevolence or stupidity. Those have to go. The good news is that the truly, incorrigibly toxic people are rare: maybe 1 out of 50, unless your company is absolutely terrible at hiring. When it comes to the productive but merely-below-average low performers, I have no problem with letting them stay. If they want to better themselves and become high performers, I’ll mentor them. If they don’t, then I’ll ignore them (as they desire). It’s not my company, and they’re not really hurting anyone as far as I can tell. They cost slightly more than one would pay to replace them, but they’re not going to sink the company. I don’t have an ethical issue with the company choosing to fire them, but I’m not going to help management do it; if I did, I would (a) gain nothing, (b) hurt someone, and (c) risk making a career-long enemy. I’d much rather protect that person and increase the odds that he, having perhaps become a high performer in another context, likes and protects me in the future. If the low performer is a lifelong loser, then it probably doesn’t matter if I rat him out to management. I won’t lose anything, but I won’t gain anything either. If, however, the low performer is a capable person having a bad year, then I make an enemy, for the benefit of a company that would toss me overboard as soon as it thought that I was a “low performer”. No thanks; I’ll protect his secrets.

If technologists protected each other and worked together, in the same way that executives have, then we could change the world not only for our benefit, but for the world’s benefit. Right now, we work so much harder than the caste of private-sector social climbers called “executives”, and get so much less in return. We’ve lost our own industry. If the right people were in charge of our industry, we’d build better technologies and solve more interesting problems and, to boot, make a lot more money.

I recognize that the statement above is a bold one. In fact, it’s hard to believe. The world is increasingly taking stock of Silicon Valley and not liking what it sees. It sees sexism, because VCs fund unqualified, narcissistic young founders based on connections and “gut instinct” instead of talent and capability. That’s not a part of our culture as technologists, but of one that’s been imposed upon us. It sees San Francisco gentrification, because the VCs fund no-remote monoculture shops almost exclusively, forcing people to live in that already-congested city, even though most have no other reason to want to be there. It sees spoiled engineers building products to solve the problems of prematurely affluent goofballs instead of rolling their sleeves up to work on things that matter. That’s what the rest of the U.S. is beginning to think that “we”, as technologists, are. Our reputation is being lost, because our industry is run by people who diverge not only from our values as technologists, but from common decency. When the public sees billionaire venture capitalists who believe women shouldn’t vote or founders’ callous, offensive rants about San Francisco’s working-class population, that’s what the rest of the world thinks we are. We can fix this, but we have to take charge of our industry.

We know who our enemies are. The Silicon Valley elite must be discarded. They have damaged our industry and slowed innovation and, with their antics, are putting our reputation at risk, too. What we need to be better at is making more friends outside of our silly tribal boundaries. For example, we shouldn’t be so quick to reject “business people” as a tribe or group. Rather than resent them for what they have accomplished, as a collective, we ought to learn from it. They’ve been pooling connections and resources while we’ve been drawing lines in the sand by saying “I won’t work with Java programmers” or “I refuse to hire anyone who doesn’t want to do Scrum”. If we had done what they were doing– putting aside differences and working together– then we might actually have a technology industry that reflects our values. More to the point, we’d get more respect and more money, as well, if we could get past petty differences and work together in an intelligent way.

The past ten years have taught me that individual intelligence is very overrated. It shouldn’t have surprised me. Organizational and collective intelligence are far more important, and we as programmers ought to develop them. The quickest way to do this is to reach out to people who already have successfully organized around their own interests. Why do Harvard Business School graduates end up running the business world? They work together, they pool resources and connections, and their first impulse is to cooperate rather than compete. We need to learn this from them, and fast. We’re losing our industry, and it’s time to win it back.