XWP vs. JAP

The software industry is a fascinating place. As programmers, we have the best and worst job in the world. What we do is so rewarding and challenging that many of us have been doing it for free since we were eight. We’re paid to think, and to put pure logic into systems that effectively do our bidding. And yet, we have (as a group) extremely low job satisfaction. We don’t last long: our half-life is about six years. By 30, most of us have decided that we want to do something else: management, quantitative finance, or startup entrepreneurship. It’s not programming itself that drives us out, but the ways in which the “programmer” job has been restructured out of our favor. It’s been shaved down, mashed, melted and molded into commodity grunt work, except for the top 2 percent or so of our field (for whom as much time is spent establishing that one is, in fact, in the top 2 percent, as is spent working). Most of us have to sit in endless meetings, follow orders that make no sense, and maintain legacy code with profanity such as “VisitorFactoryFactory” littered about, until we move “into management”, often landing in a role that is just as tedious but carries (slightly) more respect.

I’m reaching a conclusion, and it’s not a pleasant one, about our industry and what one has to do to survive it. My definition of “survive” entails progress, because while it’s relatively easy to coast, engineers who plateau are just waiting to get laid off, and will usually find that demand for their (increasingly out of date) skills has declined. Plainly put, there’s a decision that programmers have to make if they want to get better. Why? Because you only get better if you get good projects, and you only get good projects if you know how to play the game. Last winter, I examined the trajectory of software engineers, and why it seems to flat-line so early. The conclusion I’ve come to is that there are several ceilings, three of which seem visible and obvious, and each requires a certain knack to get past it. Around 0.7 to 0.8 there’s the “weed out” effect that’s rooted in intellectual limitations: inability to grasp pointers, recursion, data in sets, or the other concepts people need to understand if they’re going to be adequate programmers. Most people who hit this ceiling do so in school, and one hopes they don’t become programmers. The next ceiling, which is where the archetypical “5:01” mediocrities live, is around 1.2. This is where you finish up if you just follow orders, don’t care about “functional programming” because you can’t see how it could possibly apply to your job, and generally avoid programming outside of an office context.

The next ceiling is around 1.8, and it’s what I intend to discuss. The 0.7 ceiling is a problem of inability, and at 1.2 it’s an issue of desire and willingness. There are a lot of programmers who don’t have a strong desire to get any better than average. Average is employable, middle-class, comfortable. That keeps a lot of people complacent around 1.2. The ceiling at 1.8, on the other hand, comes from the fact that it’s genuinely hard to get allocated 1.8+ level work, which usually involves technical and architectural leadership. In most companies, there are political battles that the projects’ originators must fight to get them on the map, and others that engineers must involve themselves in if they want to get on to the best projects. It’s messy and hard and ugly and it’s the kind of process that most engineers hate.

Many engineers at the 1.7 to 1.8 level give up on engineering progress and take this ceiling as a call to move into management. It’s a lot harder to ensure a stream of genuinely interesting work than it is to take a middle management position. The dream is that the managerial position will allow the engineer to allocate the best technical work to himself and delegate the crap. The reality is that he’s lucky if he gets 10 hours per week of coding time in, and that managers who cherry-pick the good work and leave the rest to their subordinates are often despised and therefore ineffective.

This said, there’s an idea here, and it deserves attention. The sudden desire to move into management occurs when engineers realize that they won’t progress by just doing their assigned work, and that they need to hack the project allocation process if they want to keep getting better. Managerial authority seems like the most direct route to this because, after all, it’s managers who assign the projects. The problem with that approach is that managerial work requires an entirely different skill set, and that while this is a valid career, it’s probably not what one should pursue if one wants to get better as a software engineer.

How does one hack project allocation? I’m going to introduce a couple of terms. The first is J.A.P.: “Just A Programmer”. There are a lot of people in business who see programming as commodity work: that’s why most of our jobs suck. This is a self-perpetuating cycle: because of such people’s attitudes toward programmers, good engineers leave them, which leaves them with the bad ones and reinforces their perception that programming is order-following grunt work that needs to be micromanaged or it won’t be done right at all. Their attitude toward the software engineer is that she’s “just a programmer”. Hence the term. There’s a related cliche in the startup world involving MBA-toting “big-picture guys” who “just need a programmer” to do all the technical work in exchange for a tiny sliver of the equity. What they get, in return, is rarely quality.

Worse yet for the business side, commodity programmers aren’t 90 or 70 or 50 percent as valuable as good engineers, but 5 to 10 percent as useful, if that. The major reason for this is that software projects scale horribly in terms of the number of people involved with them. A mediocre engineer might be 20 percent as effective, measured individually, as a good one, but four mediocre engineers will only be about 35 percent (not 100) as effective as a single good engineer.
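To put a toy model behind numbers like these (my own back-of-the-envelope sketch, not a formula from this post): suppose each person contributes their individual output, but every pair of people who must coordinate costs the team a little effectiveness. With mediocre engineers at 20 percent individual effectiveness and a modest per-pair overhead, four of them land right around the 35 percent figure above.

    // Toy model, assumed purely for illustration: team output equals raw
    // individual output minus a coordination cost per pair of communicators.
    public class TeamEffectiveness {
        static double effective(int n, double individual, double overheadPerPair) {
            double pairs = n * (n - 1) / 2.0;             // communication channels
            return n * individual - overheadPerPair * pairs;
        }

        public static void main(String[] args) {
            System.out.println(effective(1, 1.0, 0.075)); // one good engineer: 1.0
            System.out.println(effective(4, 0.2, 0.075)); // four mediocre ones: ~0.35
        }
    }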

Good programmers dread the “Just A Programmer” role, in which they’re assessed on the quantity of code they crank out rather than the problems they solve and the quality of their solutions. They avoid such positions especially because commodity-programmer roles tend to attract ineffective programmers, and effective people who have to work with ineffective programmers become, themselves, ineffective.

This said, a 1.8 engineer is not a “commodity programmer”. At this level, we’re talking about people who are probably in the top 2 or 3 percent of the software industry. We’re talking about people who, in a functioning environment, will deliver high-quality and far-reaching software solutions reliably. They can start from scratch and deliver an excellent “full-stack” solution. (In a dysfunctional environment, they’ll probably fail if they don’t quit first.)  The political difficulty, and it can be extreme, lies with the fact that it’s very difficult for a good engineer to reliably establish (especially to non-technical managers, and to those managers’ favorites who may not be technically strong) that she is good. It turns out that, even if it’s true, you can’t say to your boss, “I’m a top-2% engineer and deserve more autonomy and the best projects” and expect good results. You have to show it, but you can’t show it unless you get good projects.

What this means, in fewer words, is that it’s very difficult for a software engineer to prove he’s not a commodity programmer without hacking the politics. Perversely, many software environments can get into a state where engineering skill becomes negatively correlated with political success. For example, if the coding culture is a “best practices”, “design pattern”-ridden Java culture, with FactoryVisitorSingletonSelection patterns all over the place, bad engineers have an advantage on account of being more familiar with damaged software environments, and because singleton directories called “com” don’t piss them off as much (since they never venture outside of an IDE anyway).

Software wants to be a meritocracy, but the sad reality is that the effectiveness of an individual programmer depends on the environment. Drop a 1.8+ engineer into a Visitor-infested Java codebase and he turns into a bumbling idiot, in the same way that an incompetent player at a poker table can fluster experts (who may not be familiar with that particular flavor of incompetence). The result of this is that detecting who the good programmers are, especially for a non-programmer or an inept one, is extremely difficult, if not impossible. The 1.7 to 1.8 level is where software engineers realize that, in spite of their skill, they won’t be recognized as having it unless they can ensure a favorable environment and project allocation, and that it’s next to impossible to guarantee these benefits in the very long run without some kind of political advantage. Credibility as a software engineer alone won’t cut it, because you can’t establish that credibility unless you get good projects.

Enter the “X.W.P.” distinction, which is the alternative to being a “J.A.P.” It means an “X Who Programs”, where X might be an entrepreneur, a researcher, a data scientist, a security specialist, a quant, or possibly even a manager. If you’re an XWP, you can program, and quite possibly full-time, but you have an additional credibility that is rooted in something other than software engineering. Your work clearly isn’t commodity work; you might have a boss, but he doesn’t believe he could do your job better than you can. XWP is the way out. But you also get to code, so it’s the best of both worlds.

This might seem perverse and unusual. At 1.8, the best way to continue improving as a software engineer is not to study software engineering. You might feel like there’s still a lot to learn in that department, and you’re right, but loading up on unrecognized skill is not going to get you anywhere. It leads to bitterness and slow decline. You need something else.

One might think that an XWP is likely to grow as an X but not as a software engineer, but I don’t think that’s necessarily true. There certainly are quants and data scientists and entrepreneurs and game designers who remain mediocre programmers, but they don’t have to. If they want to become good engineers, they have an advantage over vanilla software engineers on account of the enhanced respect accorded their role. If a Chief Data Scientist decides that building a distributed system is the best way to solve a machine learning problem, and he’s willing to roll his sleeves up and write the code, the respect that this gives him will allow him to take the most interesting engineering work. This is how you get 1.8 and 2.1 and 2.4-level engineering work. You start to bill yourself as something other than a software engineer and get the respect that entitles you to projects that will make you better. You find an X and become an X, but you also know your way around a computer. You’re an X, and you know how to code, and your “secret weapon” (secret because management in most companies won’t recognize it) is that you’re really good at it, too.

This, perhaps, is the biggest surprise I’ve encountered in the bizarre world that is the software engineering career. I’m so at a loss for words that I’ll use someone else’s, from A Song of Ice and Fire: To go west, you must first go east.

What is spaghetti code?

One of the easiest ways for an epithet to lose its value is for it to become over-broad, which causes it to mean little more than “I don’t like this”. Case in point is the term, “spaghetti code”, which people often use interchangeably with “bad code”. The problem is that not all bad code is spaghetti code. Spaghetti code is an especially virulent but specific kind of bad code, and its particular badness is instructive in how we develop software. Why? Because individual people rarely write spaghetti code on their own. Rather, certain styles of development process make it increasingly common as time passes. In order to assess this, it’s important first to address the original context in which “spaghetti code” was defined: the dreaded (and mostly archaic) goto statement.

The goto statement is a simple and powerful control flow mechanism: jump to another point in the code. It’s what compiled code actually does, at the assembly level, in order to transfer control, even if the source code is written using more modern structures like loops and functions. Using goto, one can implement whatever control flows one needs. We also generally agree, in 2012, that goto is flat-out inappropriate for source code in most modern programs. Exceptions to this policy exist, but they’re extremely rare. Most modern languages don’t even have it.

Goto statements can make it difficult to reason about code, because if control can bounce about a program, one cannot make guarantees about what state a program is in when it executes a specific piece of code. Goto-based programs can’t easily be broken down into component pieces, because any point in the code can be wormholed to any other. Instead, they devolve into an “everything is everywhere” mess where to understand a piece of the program requires understanding all of it, and the latter becomes flat-out impossible for large programs. Hence the comparison to spaghetti, where following one thread (or noodle) often involves navigating through a large tangle of pasta. You can’t look at a bowl of noodles and see which end connects to which. You’d have to laboriously untangle it.

Spaghetti code is code where “everything is everywhere”, and in which answering simple questions, such as (a) where a certain piece of functionality is implemented, (b) where an object is instantiated and how to create it, or (c) whether a critical section is correct, requires understanding the whole program, because of the relentless pinging around the source code that answering even simple questions demands. It’s code that is incomprehensible unless one has the discipline to follow each noodle through from one side to the other. That is spaghetti code.

What makes spaghetti code dangerous is that it, unlike other species of bad code, seems to be a common byproduct of software entropy. If code is properly modular but some modules are of low quality, people will fix the bad components if those are important to them. Bad or failed or buggy or slow implementations can be replaced with correct ones while using the same interface. It’s also, frankly, just much easier to define correctness (which one must do in order to have a firm sense of what “a bug” is) over small, independent functions than over a giant codeball designed to do too much stuff. Spaghetti code is evil because (a) it’s a very common subcase of bad code, (b) it’s almost impossible to fix without causing changes in functionality, which will be treated as breakage if people depend on the old behavior (potentially by abusing “sleep” methods, thus letting a performance improvement cause seemingly unrelated bugs!) and (c) it seems, for reasons I’ll get to later, not to be preventable through typical review processes.

The reason I consider it important to differentiate spaghetti code from the superset, “bad code”, is that I think a lot of what makes “bad code” is subjective. A lot of the conflict and flat-out incivility in software collaboration (or the lack thereof) seems to result from the predominantly male tendency to lash out in the face of unskilled creativity (or a perception of such, and in code this is often an extremely biased perception): to beat the pretender to alpha status so badly that he stops pestering us with his incompetent displays. The problem with this behavior pattern is that, well, it’s not useful and it rarely makes people better at what they’re trying to do. It’s just being a prick. There are also a lot of anal-retentive wankbaskets out there who define good and bad programmers based on cosmetic traits so that their definition of “good code” is “code that looks like I wrote it”. I feel like the spaghetti code problem is better-defined in scope than the larger but more subjective problem of “bad code”. We’ll never agree on tabs-versus-spaces, but we all know that spaghetti code is incomprehensible and useless. Moreover, as spaghetti code is an especially common and damaging case of bad code, assessing causes and preventions for this subtype may be generalizable to other categories of bad code.

People usually use “bad code” to mean “ugly code”, but if it’s possible to determine why a piece of code is bad and ugly, and to figure out a plausible fix, it’s already better than most spaghetti code. Spaghetti code is incomprehensible and often unfixable. If you know why you hate a piece of code, it’s already above spaghetti code in quality, since the latter is just featureless gibberish.

What causes spaghetti code? Goto statements were the leading cause of spaghetti code at one time, but goto has fallen so far out of favor that it’s a non-concern. Now the culprit is something else entirely: the modern bastardization of object-oriented programming. Inheritance is an especially bad offender, and so is premature abstraction: using a parameterized generic with only one use case in mind, or adding unnecessary parameters. I recognize that this claim (that OOP, as commonly practiced, leads to spaghetti code) is not without controversy. Nor was it without controversy, at one time, that goto was considered harmful.
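As a concrete Java sketch of the premature abstraction described above (a deliberately exaggerated, hypothetical example; the names are invented): a parameterized, pattern-heavy interface built around a single use case, next to the plain method that would have sufficed.

    import java.util.function.Function;

    // Premature abstraction: a generic "strategy factory" parameterized for
    // flexibility that nobody has asked for and only one caller will ever use.
    interface PricingStrategyFactory<T, R> {
        Function<T, R> createStrategy(String configKey, boolean useCache, int retries);
    }

    // ...when the only thing the program actually needs is this:
    final class Prices {
        private Prices() {}

        static double withTax(double price) {
            return price * 1.08;   // the single real use case
        }
    }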

One of the biggest problems in comparative software (that is, the art of comparing approaches, techniques, languages, or platforms) is that most comparisons focus on simple examples. At 20 lines of code, almost nothing shows its evilness, unless it’s contrived to be dastardly. A 20-line program written with goto will usually be quite comprehensible, and might even be easier to reason about than the same program written without goto. At 20 lines, a step-by-step instruction list with some explicit control transfer is a very natural way to envision a program. For a static program (i.e. a platonic form that need never be changed and incurs no maintenance) that can be read in one sitting, that might be a fine way to structure it. At 20,000 lines, the goto-driven program becomes incomprehensible. At 20,000 lines, the goto-driven program has been hacked and expanded and tweaked so many times that the original vision holding the thing together has vanished, and the fact that control can arrive at a piece of code “from anywhere” means that safely modifying it requires confidence about “everywhere”. Everything is everywhere. Not only does this make the code difficult to comprehend, but it means that every modification to the code is likely to make it worse, due to unforeseeable chained consequences. Over time, the software becomes “biological”, by which I mean that it develops behaviors that no one intended but that other software components may depend on in hidden ways.

Goto failed, as a programming language construct, because of the problems imposed by the unrestricted pinging about a program that it created. Less powerful, but therefore more specifically targeted, structures such as procedures, functions, and well-defined data structures came into favor. For the one case where people needed global control flow transfer (error handling), exceptions were developed. This was progress from the extreme universality and abstraction of a goto-driven program to the concreteness and specificity of pieces (such as procedures) solving specific problems. In unstructured programming, you can write a Big Program that does all kinds of stuff, add features on a whim, and alter the flow of the thing as you wish. It doesn’t have to solve “a problem” (so pedestrian…) but it can be a meta-framework with an embedded interpreter! Structured programming encouraged people to factor their programs into specific pieces that solved single problems, and to make those solutions reusable when possible. It was a precursor of the Unix philosophy (do one thing and do it well) and functional programming (make it easy to define precise, mathematical semantics by eschewing global state).

Another thing I’ll say about goto is that it’s rarely needed as a language-level primitive. One could achieve the same effect using a while-loop, a “program counter” variable defined outside that loop that the loop either increments (step) or resets (goto), and a switch-case statement dispatching on it. This could, if one wished, be expanded into a giant program that runs as one such loop, but code like this is never written. The fact that this is almost never done suggests that goto is rarely needed. Structured programming thereby exposes the insanity of what one is doing when attempting severely non-local control flow.
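Here is a minimal Java sketch of the emulation just described (Java standing in for any language with a while-loop and a switch): a “program counter” variable, a loop, and a switch whose cases either step to the next numbered “line” or reset the counter, which is the moral equivalent of goto.

    // Emulating goto with a while-loop, a "program counter", and a switch.
    // Each case is a numbered "line"; setting pc is a jump, pc < 0 means halt.
    public class GotoEmulation {
        public static void main(String[] args) {
            int pc = 0;   // the "program counter"
            int i = 0;
            while (pc >= 0) {
                switch (pc) {
                    case 0:                           // line 0: initialize
                        i = 0;
                        pc = 1;                       // step
                        break;
                    case 1:                           // line 1: test
                        pc = (i < 3) ? 2 : -1;
                        break;
                    case 2:                           // line 2: body
                        System.out.println("i = " + i);
                        i++;
                        pc = 1;                       // "goto" line 1
                        break;
                }
            }
        }
    }

Writing even a three-line loop this way makes the point: nobody structures real programs like this, which is exactly the evidence that unrestricted jumps are rarely needed.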

Still, there was a time when abandoning goto was extremely controversial, and this structured programming idea seemed like faddish nonsense. The objection was: why use functions and procedures when goto is strictly more powerful?

Analogously, why use referentially transparent functions and immutable records when objects are strictly more powerful? An object, after all, can have a method called run or call or apply, so it can be a function. It can also hold nothing but constant fields and be a record. But it can also do a lot more: it can have initializers and finalizers and open recursion and fifty methods if one so chooses. So what’s the fuss about this functional programming nonsense that expects people to build their programs out of things that are much less powerful, like records whose fields never change and whose classes contain no initialization magic?
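A small Java illustration of that contrast (my own sketch, not from the original post; the record syntax needs Java 16 or later): the “weaker” pieces advertise everything they can do, while the “more powerful” object can quietly hide mutable state and initialization behind the same surface.

    // The restricted pieces: an immutable record and a referentially
    // transparent function. What you see is everything they can do.
    record Point(double x, double y) {}

    final class Geometry {
        private Geometry() {}

        static double distance(Point a, Point b) {   // pure: depends only on its inputs
            double dx = a.x() - b.x();
            double dy = a.y() - b.y();
            return Math.sqrt(dx * dx + dy * dy);
        }
    }

    // The "more powerful" object: it can act as a function (apply), but it can
    // just as easily accumulate hidden state, initializers, and open recursion.
    class DistanceCalculator {
        private double lastResult;                    // hidden mutable state

        double apply(Point a, Point b) {
            lastResult = Geometry.distance(a, b);
            return lastResult;
        }
    }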

The answer is that power is not always good. Power, in programming, often advantages the “writer” of code and not the reader, but maintenance (i.e. the need to read code) begins subjectively around 2000 lines or 6 weeks, and objectively once there is more than one developer on a project. On real systems, no one gets to be just a “writer” of code. We’re readers, of our own code and of that written by others. Unreadable code is just not acceptable, and only accepted because there is so much of it and because “best practices” object-oriented programming, as deployed at many software companies, seem to produce it. A more “powerful” abstraction is more general, and therefore less specific, and this means that it’s harder to determine exactly what it’s used for when one has to read the code using it. This is bad enough, but single-writer code usually remains fairly disciplined: the powerful abstraction might have 18 plausible uses, but only one of those is actually used. There’s a singular vision (although usually an undocumented one) that prevents the confusion. The danger sets in when others who are not aware of that vision have to modify the code. Often, their modifications are hacks that implicitly assume one of the other 17 use cases. This, naturally, leads to inconsistencies and those usually result in bugs. Unfortunately, people brought in to fix these bugs have even less clarity about the original vision behind the code, and their modifications are often equally hackish. Spot fixes may occur, but the overall quality of the code declines. This is the spaghettification process. No one ever sits down to write himself a bowl of spaghetti code. It happens through a gradual “stretching” process and there are almost always multiple developers responsible. In software, “slippery slopes” are real and the slippage can occur rapidly.

Object-oriented programming, originally designed to prevent spaghetti code, has become (through a “design pattern”-ridden misunderstanding of it) one of the worst sources of it. An “object” can mix code and data freely and conform to any number of interfaces, while a class can be subclassed freely anywhere in the program. There’s a lot of power in object-oriented programming, and when used with discipline, it can be very effective. But most programmers don’t handle it well, and it seems to turn to spaghetti over time.

One of the problems with spaghetti code is that it forms incrementally, which makes it hard to catch in code review, because each change that leads to “spaghettification” seems, on balance, to be a net positive. The plus is that a change that a manager or customer “needs yesterday” gets in, and the drawback is what looks like a moderate amount of added complexity. Even in the Dark Ages of goto, no one ever sat down and said, “I’m going to write an incomprehensible program with 40 goto statements flowing into the same point.”  The clutter accumulated gradually, while the program’s ownership transferred from one person to another. The same is true of object-oriented spaghetti. There’s no specific point of transition from an original clean design to incomprehensible spaghetti. It happens over time as people abuse the power of object-oriented programming to push through hacks that would make no sense to them if they understood the program they were modifying and if more specific (again, less powerful) abstractions were used. Of course, this also means that fault for spaghettification is everywhere and nowhere at the same time: any individual developer can make a convincing case that his changes weren’t the ones that caused the source code to go to hell. This is part of why large-program software shops (as opposed to small-program Unix philosophy environments) tend to have such vicious politics: no one knows who’s actually at fault for anything.

Incremental code review is great at catching the obvious bad practices, like mixing tabs and spaces, bad variable naming, and lines that are too long. That’s why the more cosmetic aspects of “bad code” are less interesting (using a definition of “interesting” synonymous with “worrisome”) than spaghetti code. We already know how to solve them in incremental code review. We can even configure our continuous-integration servers to reject such code. Spaghetti code, which has no such mechanical definition, is difficult if not impossible to catch that way. Whole-program review is necessary to catch it, but I’ve seen very few companies willing to invest the time and political will necessary to have actionable whole-program reviews. Over the long term (10+ years) I think it’s next to impossible, except among teams writing life- or mission-critical software, to ensure this high level of discipline in perpetuity.

The answer, I think, is that Big Code just doesn’t work. Dynamic typing falls down in large programs, but static typing fails in a different way. The same is true of object-oriented programming, imperative programming, and to a lesser but still noticeable degree (manifest in the increasing number of threaded state parameters) in functional programming. The problem with “goto” wasn’t that goto was inherently evil, so much as that it allowed code to become Big Code very quickly (i.e. the threshold of incomprehensible “bigness” grew smaller). On the other hand, the frigid-earth reality of Big Code is that there’s “no silver bullet”. Large programs just become incomprehensible. Complexity and bigness aren’t “sometimes undesirable”. They’re always dangerous. Steve Yegge got this one right.

This is why I believe the Unix philosophy is inherently right: programs shouldn’t be vague, squishy things that grow in scope over time and are never really finished. A program should do one thing and do it well. If it becomes large and unwieldy, it’s refactored into pieces: libraries and scripts and compiled executables and data. Ambitious software projects shouldn’t be structured as all-or-nothing single programs, because every programming paradigm and toolset breaks down horribly on those. Instead, such projects should be structured as systems and given the respect typically given to such. This means that attention is paid to fault-tolerance, interchangeability of parts, and communication protocols. It requires more discipline than the haphazard sprawl of big-program development, but it’s worth it. In addition to the obvious advantages inherent in cleaner, more usable code, another benefit is that people actually read code, rather than hacking it as-needed and without understanding what they’re doing. This means that they get better as developers over time, and code quality gets better in the long run.

Ironically, object-oriented programming was originally intended to encourage something looking like small-program development. The original vision behind object-oriented programming was not that people should go and write enormous, complex objects, but that they should use object-oriented discipline when complexity is inevitable. An example of success in this arena is in databases. People demand so much of relational databases in terms of transactional integrity, durability, availability, concurrency and performance that complexity is outright necessary. Databases are complex beasts, and I’ll comment that it has taken the computing world literally decades to get them decent, even with enormous financial incentives to do so. But while a database can be (by necessity) complex, the interface to one (SQL) is much simpler. You don’t usually tell a database what search strategy to use; you write a declarative SELECT statement (describing what the user wants, not how to get it) and let the query optimizer take care of it. 
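To make the declarative contrast concrete in the same language as the other sketches here (a loose analogy of mine, not a claim about how databases work internally): Java’s Streams API plays a role over an in-memory collection roughly like the one SQL plays over a table. You describe the result you want and leave the “how” to the library.

    import java.util.List;

    public class DeclarativeQuery {
        public static void main(String[] args) {
            List<String> names = List.of("Ada", "Grace", "Barbara", "Alan");

            // Roughly: SELECT name FROM names WHERE name LIKE 'A%' ORDER BY name
            List<String> result = names.stream()
                    .filter(n -> n.startsWith("A"))
                    .sorted()
                    .toList();                        // toList() requires Java 16+

            System.out.println(result);               // prints [Ada, Alan]
        }
    }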

Databases, I’ll note, are somewhat of an exception to my dislike of Big Code. Their complexity is well-understood as necessary, and there are people willing to devote their careers entirely to mastering it. But people should not have to devote their careers to understanding a typical business application. And they won’t. They’ll leave, accelerating the slide into spaghettification as the code changes hands.

Why Big Code? Why does it exist, in spite of its pitfalls? And why do programmers so quickly break out the object-oriented toolset without asking first if the power and complexity are needed? I think there are several reasons. One is laziness: people would rather learn one set of general-purpose abstractions than study the specific ones and learn when each is appropriate. Why should anyone learn about linked lists and arrays and all those weird tree structures when we already have ArrayList? Why learn how to program using referentially transparent functions when objects can do the trick (and so much more)? Why learn how to use the command line when modern IDEs can protect you from ever seeing the damn thing? Why learn more than one language when Java is already Turing-complete? Big Code comes from a similar attitude: why break a program down into small modules when modern compilers can easily handle hundreds of thousands of lines of code? Computers don’t care if they’re forced to contend with Big Code, so why should we?

More to the point, though, I think it’s hubris with a smattering of greed. Big Code comes from a belief that a programming project will be so important and successful that people will just swallow the complexity: the idea that one’s own DSL is going to be as monumental as C or SQL. It also comes from a lack of willingness to declare a problem solved and a program finished even when the meaningful work is complete. It also comes from a misconception about what programming is. Rather than existing to solve well-defined problems and then getting out of the way, as small-program-methodology programs do, Big Code projects become more than that. They often have an overarching and usually impractical “vision” that involves generating software for software’s sake. This becomes a mess, because “vision” in a corporate environment is usually bike-shedding that quickly becomes political. Big Code programs always reflect the political environment that generated them (Conway’s Law), and this means that they invariably look more like collections of parochialisms and inside humor than the more universal languages of mathematics and computer science.

There is another problem in play. Managers love Big Code, because when the programmer-to-program relationship is many-to-one instead of one-to-many, efforts can be tracked and “headcount” can be allocated. Small-program methodology is superior, but it requires trusting the programmers to allocate their time appropriately to more than one problem, and most executive tyrannosaurs aren’t comfortable doing that. Big Code doesn’t actually work, but it gives managers a sense of control over the allocation of technical effort. It also plays into the conflation of bigness and success that managers often make (cf. the interview question for executives, “How many direct reports did you have?”) The long-term spaghettification that results from Big Code is rarely an issue for such managers. They can’t see it happen, and they’re typically promoted away from the project before this becomes an issue.

In sum, spaghetti code is bad code, but not all bad code is spaghetti. Spaghetti is a byproduct of industrial programming that is usually, but not always, an entropic result of too many hands passing over code, and an inevitable outcome of large-program methodologies and the bastardization of “object-oriented programming” that has emerged out of these defective, executive-friendly processes. The antidote to spaghetti is an aggressive and proactive refactoring effort focused on keeping programs small, effective, clean in source code, and most of all, coherent.

Don’t look now, but Valve just humiliated your “corporate culture”.

The game company Valve has gotten a lot of press recently for, among other things, its unusual corporate culture in which employees are free to move to whatever project they choose. There’s no “transfer process” to go through when an employee decides to move to another team. They just move. This is symbolized by placing wheels under each desk. People are free to move as they are capable. Employees are trusted with their time and energy. And it works.

Surely this can’t work for larger companies, can it? Actually, I’d argue that Valve has found the only solution that actually works. When companies trust their employees to self-organize and allocate their time as they will, the way to make sure that unpleasant but important work gets done is to provide an incentive: a leadership position or a promotion or a bonus to the person who rolls up her sleeves and solves this problem. That’s “expensive”, but it actually works. (Unpleasant and unimportant projects don’t get done, as they shouldn’t.) The more traditional, managerial alternative is to “assign” someone to that project and make it hard for her to transfer until some amount of time has been “served” on the shitty project. There are two problems with this. First, the quality of the work done when someone tells a newcomer, “Complete this or I’ll fire you” is just not nearly as good as the work you get when you tell someone competent, “This project will be unpleasant but it will lock in your next promotion.” Second, it tends to produce mediocrity. The best people have better options than to pay 18 months of dues before having the option (not a guarantee, but the right to apply) to transfer to something better, so the ones who remain and suffer tend to be less talented. Good people are always going to move around in search of the best learning opportunity (that’s how they got to be good) so it’s counterproductive to force them into external transfer with a policy that makes it a lot harder to get onto a decent project than to get hired.

Valve actually Solved It with the elegance of a one-line mathematical proof. They’ve won the cultural battle. The wheels under the desk are a beautiful symbol, as well: an awesome fuck-you to every company that thinks providing a Foosball table constitutes a “corporate culture” worth giving a shit about. They’ve also, by demonstration of an alternative, shown a generation of technology workers how terrible their more typical, micromanaged jobs are. What good is an array of cheap perks (8:30 pm pizza!) if people aren’t trusted to choose what to work on and direct their own careers?

I think, however, that Valve’s under-desk wheels have solved an even deeper problem. What causes corporations to decay? A lot of things, but hiring and firing come to mind. Hiring the wrong people is toxic, but there are very few objectively bad employees. Mostly, it comes down to project/person fit. Why not maximize the chance of a successful fit by letting the employee drive? Next, firing. It always does damage. Even when there is no question that it’s the right decision to fire someone, it has undesirable cultural side effects. Taking a short-term, first-order perspective, most companies could, in theory, become more productive if they fired the bottom 30 percent of their workforce. In practice, virtually no company would do this. It would be cultural suicide, and the actual effect on productivity would be catastrophic. So very few companies execute 30 percent layoffs lightly. Nonetheless, good people do get fired, and it’s not a rare occasion, and it’s incredibly damaging. What drives this? Enter the Welch Effect, named for Jack Welch, inventor of stack-ranking, which answers the question of, “Who is most likely to get fired when a company has to cut people?” The answer: junior people on macroscopically underperforming teams. Why is this suboptimal? Because the junior members of such a team are the ones least responsible for its underperformance.

Companies generally allocate bonuses and raises by having the CEO or board divide a pool of money among departments for the VPs to further subdivide, and so on, with leaf-level employees at the end of this propagation chain. The same process is usually used for layoffs, which means that an employee’s chance of getting nicked is a function of her team’s macroscopic performance, rather than her individual capability. Junior employees, who rarely make the kinds of major decisions that would result in macroscopic team underperformance, still tend to be the first ones to go. They don’t have the connections within the company or open transfer opportunities that would protect them. It’s not firing “from the top” or the middle or the bottom. It’s firing mostly new people randomly, which destroys the culture rapidly. Once people see a colleague unfairly fired, they tend to distrust that there’s any fairness in the company at all.

Wheels under the desk, in addition to creating a corporate culture actually worth caring about, eliminate the Welch Effect. This inspires people to genuinely work hard and energetically, and makes bad hires and firings rare and transfer battles nonexistent.

Moreover, the Valve way of doing things is the only way, for white-collar work at least, that actually makes sense. Where did companies get the idea that, before anyone is allowed to spend time on it, a project needs to be “allocated” “head count” by a bunch of people who don’t even understand the work being done? I don’t know where that idea came from, but its time is clearly long past.

4 things you should probably never do at work.

I don’t like lists, and I don’t really like career advice, because both tend to play on people’s need for simple answers and to have obvious advice thrown at them telling them what they already know. But here we go. I hope that in addition to these items, readers will be patient enough to find the connecting theme, which I’ll reveal at the end. Here are 4 things one should never do at work. I say this not from a position of moral superiority, having made most of these mistakes in the past, but with the intention of relaying some counterintuitive observations about work and what not to do there, and why not.

1. Seemingly harmless “goofing off”. I’m talking about Farmville and Facebook and CNN and Reddit Politics and possibly even Hacker News. You know, that 21st-century institution of at-work web-surfing. It’s the reason no decent website publishes timestamps on social interaction, instead preferring intervals such as “3 days ago”; being run by decent human beings, they don’t want to “play cop” against everyday bored workers.

I don’t think it’s wise to “goof off” at work. That’s not because I think people are at risk of getting caught (if goofing off were treated as a fireable offense, nearly the whole country would be unemployed). Direct professional consequences for harmless time-wasting are pretty rare, unless it reaches the point (3+ hours per day) where it’s visibly leading to unacceptable work performance. Moreover, I say this not because I’m some apologist trying to encourage people to be good little corporate citizens. That couldn’t be farther from the truth. There’s a counterintuitive and entirely selfish reason why wasting time on the clock is a bad idea: it makes the time-wasters unhappy.

Yes, unhappy. People with boring jobs think that their web-surfing makes their work lives more bearable, but it’s not true. The distractions are often attractive for the first few minutes, but end up being more boring than the work itself.

Here’s the secret of work for most people: it’s not that boring. Completely rote tasks have been automated away, or will be soon. Most people aren’t bored at work because the work is intrinsically boring, but because the low-level but chronic social anxiety inflicted by the office environment (and the subordinate context, which makes everything less enjoyable) impairs concentration and engagement just enough to make the mundane-but-not-really-boring tasks ennui-inducing. It’s not work that makes people unhappy, but the environment.

Working from home is a solution for some people, but if there isn’t pre-existing trust and a positive relationship with one’s manager, it can cause as many problems as it solves. In the age of telecommunications, “the environment” is not defined by the Euclidean metric. It’s the power relationships, more than the noise and crowding, that make most work environments so toxic, and those don’t go away just because of a physical separation.

I read once about a study where people were expected to read material amid low-level stressors and distractions and they attributed their poor performance not to the environment but to the material being “boring”, while control-group subjects (who comprehended the material well) found it interesting. In other words, the subjects who suffered a subliminally annoying office-like environment attributed their lack of focus to “boring” material, when there was no basis for that judgment. They misattributed the problem because the environment wasn’t quite annoying enough to be noticeably distracting. The same thing happens at probably 90 percent of jobs out there. People think it’s the work that bores them, but it’s the awful environment making it hard to concentrate that bores them. Unfortunately, ergonomic consultants and lighting specialists aren’t going to solve this environmental problem. The real problem is the power relationships, and the only long-term solution is for the worker to become so good at what she does as to lose any fear of getting fired– but this takes time, and a hell of a lot of work. No one gets to that point from Farmville.

How does Internet goofing-off play into this? Well, it’s also boring, but in a different way, because there’s no desire to perform. No one actually cares about Reddit karma in the same way they care about getting promoted and not getting fired. This reprieve makes the alternative activity initially attractive, but the unpleasant and stressful environment remains exactly as it was, so boredom sets in again– only a few minutes into the new activity. So a person goes from being bored and minimally productive to being bored and unproductive, which leads to a stress spike come standup time (standup: the poker game where you cannot fold; if you don’t have cards you must bluff) which leads to further low productivity.

Also, actually working (when one is able to do so) speeds up the day. Typical work is interesting enough that a person who becomes engaged in it will notice time passing faster. The stubborn creep of the hours turns rapid. It’s Murphy’s Law: once there’s something in front of you that you actually care about getting done, time will fucking move.

People who fuck around on the Internet at work are lengthening (subjectively speaking) their workdays considerably. Which means they absorb an enhanced dose of office stress, and worse “spikes” of stress out of the fear of being discovered wasting so much time. Since it’s the social anxiety and not the actual work that is making them so fucking miserable, this is a fool’s bargain.

Don’t waste time at work. This isn’t a moral imperative, because I don’t give a shit whether people I’ve never met fuck off at their jobs. It’s practical advice. Doing what “The Man” wants may be selling your soul, but when you subject yourself to 8 hours of low-grade social anxiety whilst doing even more pointless non-work, you’re shoving your soul into a pencil sharpener.

2. Working on side projects. The first point pertains to something everyone has experienced: boredom at work. Even the best jobs have boring components and long days and annoyances, and pretty much every office environment (even at the best companies) sucks. This is fairly mundane. What I think is unique about my approach is the insight that work is always a better antidote for work malaise than non-work. Just work. Just get something done.

People who fuck around on Facebook during the work day don’t have an active dislike for their jobs. They don’t want to “stick it to the man”. They don’t see what they’re doing as subversive or even wrong, because so many people do it. They just think they’re making their boredom go away, while they’re actually making it worse.

Some people, on the other hand, hate their jobs and employers, or just want to “break out”, or feel they have something better to do. Some people have good reasons to feel this way. There’s a solution, which is to look for another job, or to do a side project, or both. But there are some who take a different route, which is to build their side projects while on the job. They write the code and contact clients (sometimes using their corporate email “to be taken more seriously”) while they’re supposed to be doing paid work. This doesn’t always involve “hatred” of the existing employer; sometimes it’s just short-sighted greed and stupidity.

Again, I’m not saying “don’t do this” because I represent corporate stoogery or want to take some moral position. This is a practical issue. Some people get fired for this, but that’s a good outcome compared to what can happen, which is for the company to assert ownership over the work. I’ve seen a couple of people get personally burned for this, having to turn in side projects over which their companies asserted rights for no reason other than spite (the projects weren’t competing projects). They lost their jobs and the project work.

If you have a good idea for a side project at work, write the key insights down on a piece of paper and forget about them until you get home. If you must, do some reading and learning on the clock, but do not use company resources to build and do not try to send code from your work computer to your home machine. Just don’t. If you care about this side project, it’s worth buying your own equipment and getting up at 5:00 am.

3. Voicing inconsequential opinions. The first two “should be” obvious, despite the number of people who fall into those traps. This third one took me a while to learn. It’s not that voicing an opinion at work is bad. It’s good. However, it’s only good if that opinion will have some demonstrable career-improving effect, preferably by influencing a decision. A good (but not always accurate) first-order approximation is to only voice an opinion if there’s a decent chance that the suggestion may be acted upon. This doesn’t mean that it’s suicidal to voice an opinion when an alternate decision is made; it does mean you shouldn’t voice opinions if you know they won’t have any effect on the decision.

No one ever became famous for preventing a plane crash, and no one ever got good marks for trying to prevent a plane crash and failing. There’s no, “I told you so” in the corporate world. Those who crashed the plane may be fired or demoted, but they won’t be replaced by the Cassandras who predicted the thing would happen. (If anything, they’re blamed for “sabotage” even if there’s no way they could have caused it.) Instead, they’ll be replaced by other cronies of the powerful people, and no one gets to be a crony by complaining.

This rule could be stated as “Don’t gossip”, but I think my formulation goes beyond that. Most of the communication that I’m advising against is not really “office gossip” because it’s socially harmless. Going to lunch and bashing a bad decision by upper management, in a large company, isn’t very likely to have any professional consequences. Upper management doesn’t care what support workers say about them at lunch. But this style of “venting” doesn’t actually make the venters feel better in the long run. People vent because they want to “feel heard” by people who can help them, but most venting that occurs in the workplace is from one non-decision-maker to another.

The problem with venting is that, in 2012, long (8+ years) job tenures are rare, but having one is still an effective way to get a leadership position in many organizations. If nothing else, a long job tenure on a resume suggests stability and success, and it can lead to a leadership position later even if one never materializes during that tenure itself. Now, it can sometimes be advantageous to “job hop”, but most people would be better off getting their promotions in the same company if able to do so. Long job tenures look good. Short ones don’t. There are good reasons to change jobs, even after a short tenure, but people should always be playing to have the long tenure as an option (even if they don’t plan on taking it). Why speed up the resentment clock?

Also, social intercourse that seems “harmless” may not be. I worked at a company that claimed to be extremely open to criticism and anti-corporate. There was also a huge “misc” mailing list largely dedicated to rants and venting about the (slow but visible) decline of the corporate culture. This was at a company with some serious flaws, but on the whole a good company even now; if you got the right project and boss, the big-company decline issues wouldn’t even concern you. In any case, this mailing list seemed inconsequential and harmless… until a higher-up informed me that showing up on the top-10 for that mailing list pretty much guaranteed ending up on a PIP (the humiliation process that precedes firing). This company had a 5% cutoff for PIPs, which is a pretty harsh bar in an elite firm, and a mailing-list presence all but guaranteed landing in that bucket.

Opinions and insights, even from non-decision-makers, are information. Information is power. Remember this.

4. Working long hours. This is the big one, and probably unexpected. The first 3? Most people figure them out after a few years. I doubt many people are surprised by points 1, 2, and 3. So why do people keep making first-grade social mistakes at work? Because they sacrifice too fucking much. When you sacrifice too much, you care too much. When you care too much, you fail socially. When you fail socially, you fail at work (in most environments). And no one ever got out from under a bad performance review or (worse yet) a termination by saying, “But I worked 70 hours per week!”

The “analyst” programs in investment banking are notorious for long hours, and were probably at their worst in 2007 during the peak of the financial bubble. I asked a friend about his 110-hour weeks and how it affected him, and he gave me this explanation: “You don’t need to be brilliant or suave to do it, but you need to be intellectually and socially average– after a double all-nighter.” In other words, it was selection based on decline curve rather than peak capability.

Some of the best and strongest people have the worst decline curves. Creativity, mental illness, and sensitivity to sleep deprivation are all interlinked. When people start to overwork, the world starts to go fucking nuts. Absurd conflicts that make no sense become commonplace and self-perpetuating.

Unless there’s a clear career benefit to doing so, no one should put more than 40 hours per week into metered work. By “metered” work, I mean work that’s expected specifically in the employment context, under direction from a manager, typically (in most companies) with only a token (or sometimes no) interest in the employee’s career growth. And even 40 is high: I just use that number because it’s the standard. Working longer total hours is fine but only in the context of an obvious career goal: a side project, an above-normal mentorship arrangement, continued learning and just plain “keeping up” with technology changes. Self-directed career investment should get the surplus hours if you have the energy to work more than 40.

In general, leading the pack in metered work output isn’t beneficial from a career perspective. People don’t get promoted for doing assigned work at 150% volume, but for showing initiative, creativity, and high levels of skill. That requires a different kind of hard work that is more self-directed and that takes a long time to pay off. I don’t expect to get immediately promoted for reading a book about programming language theory or machine learning, but I do know that it will make me more capable of hitting the high notes in the future.

Historically, metered work expectations of professionals were about 20 to 25 hours per week. The other 20-25 hours were to be invested in career advancement, networking, and continuing education that would improve the professional’s skill set over time. Unfortunately, the professional world now seems to demand 40 hours of metered work, expecting employees to keep the “investment” work “on their own time”. This is suboptimal: it causes a lot of people to change jobs quickly. If you’re full to the brim on metered work, then you’re going to leave your job as soon as you stop learning new things from the metered work (that’s usually after 9 to 24 months). Google attempted to remedy this with “20% time”, but that has largely failed due to the complete authority managers have to destroy their subordinates in “Perf” (which also allows anonymous unsolicited peer review, an experiment in forcible teledildonics) for using 20% time. (Some Google employees enjoy 20%-time, but only with managerial blessing. Which means you have the perk if you have a nice manager… but if you have a good manager, you don’t need formal policies to protect you in the first place. So what good does the policy do?)

Worse yet, when people start working long hours because of social pressures, something subversive happens. People get huge senses of entitlement and start becoming monstrously unproductive. (After all, if you’re spending 12 hours in the office, what’s 15 minutes on Reddit? That 15 minutes becomes 30, then 120, then 300…) Thus, violations of items #1, #2, and #3 on this list become commonplace. People start spending 14 hours in the office and really working during 3 of them. That’s not good for anyone.

It would be easy to blame this degeneracy pattern on “bad managers”, like the archetypical boss who says, “If you don’t come in on Saturday, then don’t bother coming in on Sunday.” The reality, though, is that it doesn’t take a bad boss to get people working bad hours. Most managers actually know that working obscene hours is ineffective and unhealthy and don’t want to ask for that. Rather, people fall into that pattern whenever sacrifice replaces productivity as the performance measure, and I’ll note that peers are often as influential in assessment as managers. It’s often group pressure rather than managerial edict that leads to the ostracism of “pikers”. When people are in pain, all of this happens very quickly.

Then the “death march” mentality sets in. Fruitless gripes beget fruitless gripes (see #3) and morale plummets. Productivity declines, drawing managerial attention, which often worsens the problem. People seek avoidance patterns and behavioral islands (see #1) that provide short-term relief from the environment that’s falling to pieces, but they do little good in the long term. The smarter ones start looking to set up other opportunities (see #2), but if they get caught, they get the “not a team player” label (a way of saying “I don’t like the guy” that sounds objective), and that’s basically nuts-up.

Unless there’s an immediate career benefit in doing so, you’re a chump if you load up on the metered work. You shouldn’t do “the minimum not to get fired” (that line is low, but don’t flirt with it; stay well north of that one). You should do more than that; enough metered work to fit in. Not less, not more. (Either direction of deviation will hurt you socially.) Even 40 hours, for metered work, is a very high commitment when you consider that the most interesting professions require 10-20 hours of unmetered work just to remain connected and current, but it’s the standard so it’s probably what most people should do. I wouldn’t say that it’s wise to go beyond it, and if you are going to make the above-normal sacrifice of a 45+ hour work week, do yourself a favor and sock some learning (and connections) away in unmetered work.

So yeah… don’t work long hours. And something about sunscreen.