“Job hopping” is often fast learning and shouldn’t be stigmatized

One of the traits of the business world that I’d love to see die, even if it were to take people with it, is the stigma against “job hopping”. I don’t see how people fail to recognize this for what it is: oppression, plain and simple. The stigma exists to deny employees the one bit of leverage they have, which is to leave a job and get a better one.

The argument made in favor of the stigma is that (a) companies put a lot of effort into training people, and (b) very few employees earn back their salary in the first year. Let me address both of these. For the first, about the huge amount of effort put into training new hires: that’s not true. It may have been the case in 1987, but not anymore. There might be an orientation lasting a week or two, or an assigned mentor who sometimes does a great job and is sometimes absentee, but the idea that companies still invest significant resources into new hires as a general policy is outdated. They don’t. Sink or swim is “the new normal”. The companies that are exceptions to this will sometimes lose a good person, but they don’t have the systemic retention problems that leave them wringing their hands about “job hoppers”.

For the second, this is true in some cases and not in others, but I would generally blame this (at least in technology) on the employer. If someone who is basically competent and diligent spends 6 months at a company and doesn’t contribute something– perhaps a time-saving script, a new system, or a design insight– that is worth that person’s total employment costs (salary, plus taxes, plus overhead), then there is something wrong with the corporate environment. Perhaps he’s been loaded up with fourth-quadrant work of minimal importance. People should be able to start making useful contributions right away, and if they can’t, then the company needs to improve the feedback cycle. That will make everyone happier.

The claimed corporate perspective on “job hoppers” is that they’re a money leak, because they cost more than they are worth in the early years. That’s not true; I’d call it an out-and-out lie. Plenty of companies pay young programmers market salaries and turn a profit. In fact, companies doing low-end work (which may still be profitable) often fire older programmers to replace them with young ones. The claim that new hires are worth less than a market salary simply holds no water. Actually, employing decent programmers (young or old, novice or expert) at market compensation is an enormously profitable position. (I’d call it an “arbitrage”, but having done real arbitrage I prefer to avoid this colloquialism.)

The first reason why companies don’t like “job hoppers” is not that new hires are incapable of doing useful work, but that companies intentionally prevent new people from doing useful work. The “dues paying” period is evaluative. The people who fare poorly, or who make their dislike of the low-end work obvious, are the failures who are either fired or, more likely, given the indication that they won’t graduate to better things, which compels them to leave– but on the company’s terms. The dues-paying period leaks at the top. In actuality, it always did; it just leaks in a different way now. In the past, the smartest people would become impatient and bored with the low-yield, evaluative nonsense, just as they do now, but they were less able to change companies. They’d lose motivation and start to underperform, leaving the employer feeling comfortable with the loss. (“Not a team player; we didn’t want him anyway.”) In the “job hopping” era, they leave before the motivational crash, while there is still something to be missed.

The second problem that companies have with “job hoppers” is that they keep the market fluid and, additionally, transmit information. Job hoppers are the ones who tell their friends at the old company that a new startup is paying 30% better salaries and runs open allocation. They not only grab external promotions for themselves when they “hop”, but they learn and disseminate industry information that transfers power to engineers.

I’ve recently learned first-hand about the fear that companies have of talent leaks. For a few months last winter, I worked at a startup with crappy management but excellent engineers, and I left when I was asked to commit perjury against some of my colleagues. (No, this wasn’t Google. First, Google is not a startup. Second, Google is a great company with outdated and ineffective HR but well-intended upper management. This, on the other hand, was a company with evil management.) There’s a lot of rumor surrounding what happened and, honestly, the story was so bizarre that even I am not sure what really went on. I won the good faith of the engineers by exposing unethical management practices, and became something of a folk hero. I introduced a number of their best engineers to recruiters and helped them get out of that awful place. Then I moved on, or so I thought. Toward the end of 2012, I discovered that their Head of Marketing was working to destroy my reputation inside that company (I don’t know if he succeeded, but I’ve seen the attempts) by generating a bunch of spammy Internet activity and attempting to make it look like it was me doing it. He wanted to make damn sure I couldn’t continue the talent bleed, even though my only involvement had been to introduce a few people (who already wanted to leave) to recruiters. These are the lengths to which a crappy company will go to plug a talent hole (when those efforts would be better spent fixing the company).

Finally, “job hopping” is a slight to a manager’s ego. Bosses like to dump on their own terms. After a few experiences with the “It’s not you, it’s me” talk, after which the reports often go to better jobs than the boss’s, managers develop a general distaste for these “job hoppers”.

These are the real reasons why there is so much dislike for people who leave jobs. The “job hopper” isn’t stealing from the company. If a company can employ a highly talented technical person for 6 months and not profit from the person’s work, the company is stealing from itself.

All this said, I wouldn’t be a fan of someone who joined companies with the intention of “hopping”, but I think very few people intend to run their careers that way. I have the resume of a “job hopper”, but when I take a job, I have no idea whether I’ll be there for 8 months or 8 years. I’d prefer the 8 years, to be honest. I’m sick of having to change employers every year, but I’m not one to suffer stagnation either.

My observation has been that most “job hoppers” are people who learn rapidly and become competent at their jobs quickly. In fact, they enter jobs with a pre-existing and often uncommon skill set. Most of the job hoppers I know would be profitable to hire as consultants at twice their salary. Because they’re smart, they learn fast and quickly outgrow the roles that their companies expect them to fill. They’re ready to move beyond the years-long dues-paying period at two months, but often can’t. Because they leave once they hit this political wall, rather than stalling out inside it, they keep learning and become extremely competent.

The idea that the “job hopper” is an archetype of Millennial “entitlement” is one I find ridiculous. Actually, we should blame this epidemic of job hopping on efficient education. How so? Fifty years ago, education was much more uniform and, for the smartest people, a lot slower than it is today. Honors courses were rare, and for gifted students to be given extra challenges was uncommon. This was true within as well as between schools. Ivy League mathematics majors would encounter calculus around the third year of college, and the subjects that are now undergraduate staples (real analysis, abstract algebra) were solidly graduate-level. There were a few high-profile exceptions who could start college at age 14 but, for most people, being smart didn’t result in a faster track. You progressed at the same pace as everyone else. This broke down for two reasons. First, smart people get bored on the pokey track, and in a world that’s increasingly full of distractions, that boredom becomes crippling. Second, the frontiers of disciplines like mathematics are now so far out and specialized that society can’t afford to have the smartest people dicking around at quarter-speed until graduate school.

So we now have a world with honors and AP courses, and with the best students taking real college courses by the time they’re in high school. College is even more open. A freshman who is intellectually qualified to take graduate-level courses can do it. That’s not seen as a sign of “entitlement”. It’s encouraged.

This is the opposite of the corporate system, which has failed to keep up with modernity. A high-potential hire who outgrows starter projects and low-yield, dues-paying grunt work after 3 months does not get to skip the typical 1-2 years of it just because she’s not learning anything from it. People who push for that skip, either explicitly by expressing their boredom or implicitly by losing motivation and grinding to a halt, often end up getting fired. You don’t get to skip grades in the corporate world.

Unfortunately, companies can’t easily promote people fast, because there are political problems. Rapid promotion, even of a person whose skill and quick learning merit it, becomes a morale problem. Companies additionally have a stronger need to emphasize “the team” (as in, “team player”, as much as I hate that phrase) than schools. In school, cheating is well-defined and uncommon, so individualism works. At work, where the ethical rules are often undefined, group cohesion is often prioritized over individual morale, as individualism is viewed as dangerous. This makes rapid promotion of high-potential people such a political liability that most companies don’t even want to get involved. Job hoppers are the people who rely on external promotion because they often grow faster than it is politically feasible for a typical corporation to advance them.

For all that is said against job hoppers, I know few who intentionally wish to “hop”. Most people will stay with a job for 5 years if they continue to grow at a reasonable pace; the reason they move around so much is that they rarely do. So is it worth it to hire job hoppers, given the flight risk associated with top talent? I would say, without hesitation, “yes”. Their average tenures “of record” are short, but they tend to be the high-powered contributors who get a lot done in a short period of time. Also, given that one might become a long-term employee if treated well, delivering both top talent and longevity, I’d say there’s a serious call option here.

“Job hopping” shouldn’t be stigmatized, because it’s the corporate system that’s broken. Most corporate denizens spend most of their time on low-yield make-work that isn’t important, but largely exists because of managerial problems or is evaluative in purpose. The smart people who figure out quickly that they’re wasting their time want to move on to something better. Closed-allocation companies make that internal move extremely difficult and as politically rickety as a promotion, so these people often decide to leave. Without the “job hopping” stigma, they’d be able to quit and leave when this happens, but that reputation risk encourages them, instead, to quit and stay. For companies, this is much worse.

 

Learning C, reducing fear.

I have a confession to make. At one point in my career, I was a mediocre programmer. I might say that I still am, only in the context of being a harsh grader. I developed a scale for software engineering for which I can only, in intellectual honesty, assign myself 1.8 points out of a possible 3.0. One of the signs of my mediocrity is that I haven’t a clue about many low-level programming details that, thirty years ago, people dealt with on a regular basis. I know what L1 and L2 cache are, but I haven’t built the skill set yet to make use of this knowledge.
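
Here’s the kind of thing I mean, sketched in C. This is a minimal, hedged example of my own (the array size and the crude clock()-based timing are arbitrary choices, not a benchmark): summing a matrix row-by-row touches memory in the order the cache fetches it, while summing the same matrix column-by-column fights the cache on nearly every access.

    #include <stdio.h>
    #include <time.h>

    #define N 2048

    static double a[N][N];

    /* Row-major traversal: consecutive accesses fall on the same cache line. */
    static double sum_rows(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Column-major traversal: each access jumps N * sizeof(double) bytes,
       so almost every access misses the cache. */
    static double sum_cols(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        clock_t t0 = clock();
        double r = sum_rows();
        clock_t t1 = clock();
        double c = sum_cols();
        clock_t t2 = clock();
        printf("rows: %.3fs  cols: %.3fs  (sums: %.1f %.1f)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, r, c);
        return 0;
    }

Same arithmetic, same result, very different running time– and that gap is what knowledge of L1 and L2 is actually for.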

I love high-level languages like Scala, Clojure, and Haskell. The abstractions they provide make programming more productive and fun than it is in languages like Java and C++, and they have a beauty that I appreciate as a designer and mathematician. Yet there is still quite a place for C in this world. Last July, I wrote an essay, “Six Languages to Master”, in which I advised young programmers to learn the following languages:

  • Python, because one can get started quickly and Python is a good all-purpose language.
  • C, because there are large sections of computer science that are inaccessible if you don’t understand low-level details like memory management.
  • ML, to learn taste in a simple language often described as a “functional C” that also teaches how to use type systems to make powerful guarantees about programs.
  • Clojure, because learning about language (which is important if one wants to design good interfaces) is best done with a Lisp and because, for better or worse, the Java libraries are a part of our world.
  • Scala, because it’s badass if used by people with a deep understanding of type systems, functional programming, and the few (very rare) occasions where object-oriented programming is appropriate. (It can be, however, horrid if wielded by “Java-in-Scala” programmers.)
  • English (or the natural language of one’s environment) because if you can’t teach other people how to use the assets you create, you’re not doing a very good job.

Of these, C was my weakest at the time. It still is. Now, I’m taking some time to learn it. Why? There are two reasons for this.

  • Transferability. Scala’s great, but I have no idea if it will be around in 10 years. If the Java-in-Scala crowd adopts the language without upgrading its skills and the language becomes associated with Maven, XMHell, IDE culture, and commodity programmers, in the way that Java has, the result will be piles of terrible Scala code that will brand the language as “write-only” and damage its reputation for reasons that are not Scala’s fault. These sociological variables I cannot predict. I do, however, know that C will be in use in 10 years. I don’t mind learning new languages– it’s fun and I can do it quickly– but the upshot of C is that, if I know it, I will be able to make immediate technical contributions in almost any programming environment. I’m already fluent in about ten languages; might as well add C. 
  • Confidence. High-level languages are great, but if you develop the attitude that low-level languages are “unsafe”, ugly, and generally terrifying, then you’re hobbling yourself for no reason. C has its warts, and there are many applications where it’s not appropriate. It requires attention to details (array bounds, memory management) that high-level languages handle automatically; a short sketch of what that bookkeeping looks like follows this list. The issue is that, in engineering, anything can break down, and you may be required to solve problems in the depths of detail. Your beautiful Clojure program might have a performance problem in production because of an issue with the JVM. You might need to dig deep and figure it out. That doesn’t mean you shouldn’t use Clojure. However, if you’re scared of C, you can’t study the JVM internals or performance considerations, because a lot of the core concepts (e.g. memory allocation) become a “black box”. Nor will you be able to understand your operating system.
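
Here, as promised, is the sketch: a hand-rolled growable array in C, written by me purely as an illustration (the names and the doubling growth policy are my own choices, not anything canonical). It is roughly the bookkeeping that a high-level language’s list type does for you invisibly.

    #include <stdio.h>
    #include <stdlib.h>

    /* A growable int buffer: the capacity tracking, bounds discipline, and
       freeing that a high-level language's list type handles for you. */
    typedef struct {
        int    *data;
        size_t  len;
        size_t  cap;
    } IntVec;

    static int intvec_push(IntVec *v, int x) {
        if (v->len == v->cap) {                      /* out of room: grow */
            size_t new_cap = v->cap ? v->cap * 2 : 8;
            int *p = realloc(v->data, new_cap * sizeof *p);
            if (!p) return -1;                       /* allocation failure is your problem */
            v->data = p;
            v->cap  = new_cap;
        }
        v->data[v->len++] = x;
        return 0;
    }

    int main(void) {
        IntVec v = {0};
        for (int i = 0; i < 100; i++)
            if (intvec_push(&v, i * i) != 0) { perror("realloc"); return 1; }
        printf("v[99] = %d\n", v.data[99]);          /* caller must respect v.len */
        free(v.data);                                 /* and remember to free */
        return 0;
    }

Every line of that bookkeeping (the capacity check, the failure path, the final free) is a place to get things wrong, which is exactly why it’s worth having done it by hand at least once.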

For me, personally, the confidence issue is the important one. In the functional programming community, we often develop an attitude that the imperative way of doing things is ugly, unsafe, wrong, and best left to “experts only” (which is ironic, because most of us are well into the top 5% of programmers, and better equipped to handle complexity than most; it’s this adeptness that makes us aware of our own limitations and prefer functional safeguards when possible). Or, I should not say that this is a prevailing attitude, so much as an artifact of communication. Fifty-year-old, brilliant functional programmers talk about how great it is to be liberated from evils like malloc and free. They’re right, for applications where high-level programming is appropriate. The context being missed is that they have already learned memory management quite thoroughly, and now it’s an annoyance to them to keep having to do it. That’s why they love languages like OCaml and Python. It’s not that low-level languages are dirty or unsafe or even “un-fun”, but that high-level languages are just much better suited to certain classes of problems.

Becoming the mentor

I’m going to make an aside that has nothing to do with C. What is the best predictor of whether someone will remain at a company for more than 3 years? Mentorship. Everyone wants “a Mentor” who will take care of his career by providing interesting work, freedom from politics, necessary introductions, and well-designed learning exercises instead of just-get-it-done grunt work. That’s what we see in the movies: the plucky 25-year-old is picked up by the “star” trader, journalist, or executive and, over 97 long minutes, his or her career is made. Often this relationship goes horribly wrong in film, as in Wall Street, wherein the mentor and protege end up in a nasty conflict. I won’t go so far as to call this entirely fictional, but it’s very rare. You can find mentors (plural) who will help you along as much as they can, and should always be looking for people interested in sharing knowledge and help, but you shouldn’t look for “The Mentor”. He doesn’t exist. People want to help those who are already self-mentoring. This is even more true in a world where few people stay at a job for more than 4 years.

I’ll turn 30 this year, and in Silicon Valley that would entitle me to a lawn and the right to tell people to get off of it, but I live in Manhattan so I’ll have to keep using the Internet as my virtual lawn. (Well, people just keep fucking being wrong. There are too many for one man to handle!) One of the most important lessons to learn is the importance of self-mentoring. Once you get out of school, where people are paid to teach you, no one will help someone who isn’t helping himself. To a large degree, this means becoming the “Mentor” figure that one seeks. I think that’s what adulthood is. It’s when you realize that the age in which there were superior people at your beck and call to sort out your messes and tell you what to do is over. Children can be nasty to each other but there are always adults to make things right– to discipline those who break others’ toys, and replace what is broken. The terrifying thing about adulthood is the realization that there are no adults. This is a deep-seated need that the physical world won’t fill. There are at least 10,000 recorded years of history showing people gaining immense power by making up “adults-over-adults”, and using the purported existence of such creatures to arrogate political power, because most people are frankly terrified of the fact that, at least in the observable physical world and in this life, no such creature exists.

What could this have to do with C? Well, now I dive back into confessional mode. My longest job tenure (30 months!) was at a startup that seems to have disappeared after I left. I was working in Clojure, doing some beautiful technical work. This was in Clojure’s infancy, but the great thing about Lisps is that it’s easy to adapt the language to your needs. I wrote a multi-threaded debugger using dynamic binding (dangerous in production, but fine for debugging) that involved getting into the guts of Clojure, a test harness, an RPC client-server infrastructure, and a custom NoSQL graph-based database. The startup wasn’t well-managed, but the technical work itself was a lot of fun. Still, I remember a lot of conversations to the effect of “When we get a real <X>”, where X might be “database guy” or “security expert” or “support team”. The attitude I allowed myself to fall into, when we were four people strong, was that a lot of the hard work would have to be done by someone more senior, someone better. We inaccurately believed that the scaling challenges would mandate this, when in fact we didn’t scale at all, because the startup never launched.

Business idiots love real X’s. This is why startups frequently develop the social-climbing mentality (in the name of “scaling”) that makes internal promotion rare. The problem is that this “realness” is total fiction. People don’t graduate from Expert School and become experts. They build a knowledge base over time, often by going far outside of their comfort zones and trying things at which they might fail, and the only things that change are that the challenges get harder, or the failure rate goes down. As with the Mentor that many people wait for in vain, one doesn’t wait to “find a Real X” but becomes one. That’s the difference between a corporate developer and a real hacker. The former plays Minesweeper (or whatever Windows users do these days) and waits for an Expert to come from on high to fix his IDE when it breaks. The latter shows an actual interest in how computers really work, which requires diving into the netherworld of the command line interface.

That’s why I’m learning C. I’d prefer to spend much of my programming existence in high-level languages, not micromanaging details– although, thus far, C has proven surprisingly fun– but I realize that these low-level concerns are extremely important, and that if I want to understand things truly, I need a basic fluency in them. If you fear details, you don’t understand “the big picture”; the big picture is made up of details, after all. This is a way to keep the senescence that comes with business FUD at bay– to not become That Executive who mandates hideous “best practices” Java because Python and Scala are “too risky”.

Fear of databases? Of operating systems? Of “weird” languages like C and Assembly? Y’all fears get Zero Fucks from me.

A humorous note about creationism and snakes.

This isn’t one of my deeper posts. It’s just something I find amusing regarding a cultural symbol, especially in the context of Biblical creationism. One of the core stories of the Bible is the temptation of Eve by a serpent who brought her to disobey God. In other words, sin came into the world because of a snake. The Garden of Eden wasn’t perfect, because one animal was bad and woman was weak. This myth’s origins go back to Sumer, but that’s irrelevant to this observation. The question is: why a snake? Why was this animal, out of all the dangerous creatures out there, chosen as the symbol of sin?

Snakes are carnivores, but so are most of the charismatic megafauna, such as tigers, eagles, and wolves. Yet few of those seem to inspire the reflexive fear that snakes do. Many of these animals are more dangerous to us than snakes, yet we view lions and hawks with awe, not disgust or dread.

The most likely answer is not one that creationists would prefer: it’s evolution that leads us to view snakes this way. Most land mammals– even large ones, to whom most species of snake are harmless– seem to have some degree of fear of snakes, and humans are no exception. Most religions have a strong view of this animal– some positive and reverent, but many negative. Why? On the order of a hundred million years ago, when our mammalian ancestors were mostly rodent-like in size, snakes were among their primary predators. A fear of swift, legless reptiles was an evolutionary advantage. Seeing one meant you were about to die.

We don’t have this fear of lions or tigers because such creatures aren’t that old. Large cats have only been with us for a few million years, during which time we were also large and predatory, so there’s a mutual respect between us. Snakes and mammals, on the other hand, go way back.

Related to this is the legend of the dragon. No one can prove this, obviously, but the concept of a dragon seems to have emerged out of our “collective unconscious” as mammals. We have to go back 65 million years to find creatures that were anything like dragons, but a large number of cultures have independently invented such a mythical creature: a cocktail of small mammalian terrors (reptiles, raptors, fire, venom) coming from a time when we were small and probably defenseless prey creatures.

The key to understanding long-standing myths and symbols such as the Biblical creation story turns out, ironically enough, to be evolution. Serpents ended up in our creation myths because, after all this time, we haven’t gotten over what they did to us 100 million years ago.

IDE Culture vs. Unix philosophy

Even more of a hot topic than programming languages is the integrated development environment, or IDE. Personally, I’m not a huge fan of IDEs. As tools, standing alone, I have no problem with them. I’m a software libertarian: do whatever you want, as long as you don’t interfere with my work. However, here are some of the negatives that I’ve observed when IDEs become commonplace or required in a development environment:

  • the “four-wheel drive problem”. This refers to the fact that an unskilled off-road driver, with four-wheel drive, will still get stuck; the more capable vehicle will simply have him fail in a more inaccessible place. IDEs pay off when you have to maintain an otherwise unmanageable ball of other people’s terrible code. They make unusable code merely miserable. I don’t think there’s any controversy about this. The problem is that, by providing this power, they enable an activity of dubious value: continual development despite abysmal code quality, when improving or killing the bad code should be a code-red priority. IDEs can delay code-quality problems and defer macroscopic business effects, which is good for manageosaurs who like tight deadlines, but only makes the problem worse at the end stage.
  • IDE-dependence. Coding practices that require developers to depend on a specific environment are unforgivable. This is true whether the environment is emacs, vi, or Eclipse. The problem with IDEs is that they’re more likely to push people toward doing things in a way that makes use of a different environment impossible. One pernicious example of this is Java culture’s mutilation of the command-line way of doing things with singleton directories called “src” and “com”, but there are many examples that run deeper than that. Worse yet, IDEs enable the employment of programmers who don’t even know what build systems or version control are. Those are things “some smart guy” worries about so the commodity programmer can crank out classes at his boss’s request.
  • spaghettification. I am a major supporter of the read-only IDE, preferably served over the web. I think that code navigation is necessary for anyone who needs to read code, whether it’s crappy corporate code or the best-in-class stuff we actually enjoy reading. When you see a name, you should be able to click on it and see where that name is defined. However, I’m pretty sure that, on balance, automated refactorings are a bad thing. Over time, the abstractions which can easily be “injected” into code using an IDE turn it into “everything is everywhere” spaghetti code. Without an IDE, the only way to do such work is to write a script to do it. There are two effects this has on the development process. One is that it takes time to make the change: maybe 30 minutes. That’s fine, because the conversation that should happen before a change that will affect everyone’s work should take longer than that. The second is that only adept programmers (who understand concepts like scripts and the command line) will be able to do it. That’s a good thing.
  • time spent keeping up the environment. Once a company decides on “One Environment” for development, usually an IDE with various in-house customizations, that IDE begins to accumulate plugins of varying quality. That environment usually has to be kept up, and that generates a lot of crappy work that nobody wants to do.

This is just a start on what’s wrong with IDE culture, but the core point is that it creates some bad code. So, I think I should make it clear that I don’t dislike IDEs. They’re tools that are sometimes useful. If you use an IDE but write good code, I have no problem with you. I can’t stand IDE culture, though, because I hate hate hate hate hate hate hate hate the bad code that it generates.

In my experience, software environments that rely heavily on IDEs tend to be those that produce terrible spaghetti code, “everything is everywhere” object-oriented messes, and other monstrosities that simply could not be written by a sole idiot. He had help. Automated refactorings that injected pointless abstractions? Despondency infarction frameworks? Despise patterns? Those are likely culprits.

In other news, I’m taking some time to learn C at a deeper level, because as I get more into machine learning, I’m realizing the importance of being able to reason about performance, which requires a full-stack knowledge of computing. Basic fluency in C, at a minimum, is requisite. I’m working through Zed Shaw’s Learn C the Hard Way, and he’s got some brilliant insights, not only about C (where I’m not yet in a position to judge) but about programming itself. In his preamble chapter, he makes a valid point in warning against using an IDE for the learning process:

An IDE, or “Integrated Development Environment” will turn you stupid. They are the worst tools if you want to be a good programmer because they hide what’s going on from you, and your job is to know what’s going on. They are useful if you’re trying to get something done and the platform is designed around a particular IDE, but for learning to code C (and many other languages) they are pointless. […]
Sure, you can code pretty quickly, but you can only code in that one language on that one platform. This is why companies love selling them to you. They know you’re lazy, and since it only works on their platform they’ve got you locked in because you are lazy. The way you break the cycle is you suck it up and finally learn to code without an IDE. A plain editor, or a programmer’s editor like Vim or Emacs, makes you work with the code. It’s a little harder, but the end result is you can work with any code, on any computer, in any language, and you know what’s going on. (Emphasis mine.)

I disagree with him that IDEs will “turn you stupid”. Reliance on one prevents a programmer from ever turning smart, but I don’t see how such a tool would cause a degradation of a software engineer’s ability. Corporate coding (lots of maintenance work, low productivity, half the day lost to meetings, difficulty getting permission to do anything interesting, bad source code) does erode a person’s skills over time, but that can’t be blamed on the IDE itself. However, I think he makes a strong point. Most of the ardent IDE users are the one-language, one-environment commodity programmers who never improve, because they never learn what’s actually going on. Such people are terrible for software, and they should all either improve, or be fired.

The problem with IDEs is that each corporate development culture customizes the environment, to the point that the cushy, easy coding environment can’t be replicated at home. For someone like me, who doesn’t even like that type of environment, that’s no problem because I don’t need that shit in order to program. But someone steeped in cargo cult programming because he started in the wrong place is going to falsely assume that programming requires an IDE, having seen little else, and such novice programmers generally lack the skills necessary to set one up to look like the familiar corporate environment. Instead, he needs to start where every great programmer must learn some basic skills: at the command-line. Otherwise, you get a “programmer” who can’t program outside of a specific corporate context– in other words, a “5:01 developer” not by choice, but by a false understanding of what programming really is.

The worst thing about these superficially enriched corporate environments is their lack of documentation. With Unix and the command-line tools, there are man pages and how-to guides all over the Internet. This creates a culture of solving one’s own problems. Given enough time, you can answer your own questions. That’s where most of the growth happens: you don’t know how something works, you Google an error message, and you get a result. Most of the information coming back is indecipherable to a novice programmer, but with enough searching, the problem is solved, and a few things are learned, including answers to some questions that the novice didn’t yet have the insight (“unknown unknowns”) to ask. That knowledge isn’t built in a day, but it’s deep. That process doesn’t exist in an over-complex corporate environment, where the only way to move forward is to go and bug someone, and the time cost of any real learning process is at a level that most managers would consider unacceptable.

On this, I’ll crib from Zed Shaw yet again, in Chapter 3 of Learn C the Hard Way:

In the Extra Credit section of each exercise I may have you go find information on your own and figure things out. This is an important part of being a self-sufficient programmer. If you constantly run to ask someone a question before trying to figure it out first then you never learn to solve problems independently. This leads to you never building confidence in your skills and always needing someone else around to do your work. The way you break this habit is to force yourself to try to answer your own questions first, and to confirm that your answer is right. You do this by trying to break things, experimenting with your possible answer, and doing your own research. (Emphasis mine.)

What Zed is describing here is the learning process that never occurs in the corporate environment, and the lack of it is one of the main reasons why corporate software engineers never improve. In the corporate world, you never find out why the build system is set up in the way that it is. You just go bug the person responsible for it. “My shit depends on your shit, so fix your shit so I can run my shit and my boss doesn’t give me shit over my shit not working for shit.” Corporate development often has to be this way, because learning a typical company’s incoherent in-house systems doesn’t provide a general education. When you’re studying the guts of Linux, you’re learning how a best-in-class product was built. There’s real learning in mucking about in small details. In a typically mediocre corporate environment that was built by engineers trying to appease their managers, one day at a time, the quality of the pieces is often so shoddy that not much is learned in truly comprehending them. It’s just a waste of time to deeply learn such systems. Instead, it’s best to get in, answer your question, and get out. Bugging someone is simply the most efficient way to solve the problem.

It should be clear that what I’m railing against is the commodity developer phenomenon. I wrote about “Java Shop Politics” last April, which covers a similar topic. I’m proud of that essay, but I was wrong to single out Java as opposed to, e.g. C#, VB, or even C++. Actually, I think any company that calls itself an “<X> Shop” for any language X is missing the point. The real evil isn’t Java the language, as limited as it may be, but Big Software and the culture thereof. The true enemy is the commodity developer culture, empowered by the modern bastardization of “object-oriented programming” that looks nothing like Alan Kay’s original vision.

In well-run software companies, programs are built to solve problems, and once the problem is solved, the program is Done. It might be adapted in the future, and may require maintenance, but that’s not an assumption. There aren’t discussions about how much “headcount” to dedicate to ongoing maintenance after completion, because that would make no sense. If people need to modify or fix the program, they’ll do it. Programs solve well-defined problems, and then their authors move on to other things– no God Programs that accumulate requirements, but simple programs designed to do one thing and do it well. The programmer-to-program relationship must be one-to-many. Programmers write programs that do well-defined, comprehensible things well. They solve problems. Then they move on.

This is a great way to build software “as needed”, and the only problem with this style of development is that the importance of small programs is hard to micromanage, so managerial dinosaurs who want to track efforts and “headcount” don’t like it much, because they can never figure out who to scream at when things don’t go their way. It’s hard to commoditize programmers when their individual contributions can only be tracked by their direct clients, and when people can silently be doing work of high importance (such as making small improvements to the efficiency of core algorithms that reduce server costs). The alternative is to invert the programmer-to-program relationship: make it many-to-one. Then you have multiple programmers (now a commodity) working on Giant Programs that Do Everything. This is a terrible way to build software, but it’s also the one historically favored by IDE culture, because the sheer work of setting up a corporate development environment is enough that it can’t be done too often, and this leads managers to desire Giant Projects and a uniformity (such as a one-language policy; see again why “<X> Shops” suck) that often makes no sense.

The right way of doing things– one programmer works on many small, self-contained programs– is the core of the so-called “Unix philosophy”. Big Programs, by contrast, invariably have undocumented communication protocols and consistency requirements whose violation leads not only to bugs, but to pernicious misunderstandings that muddle the original conceptual integrity of the system, resulting in spaghetti code and “mudballs”. The antidote is for single programs to be small, and for large problems to be solved with systems of such programs– systems that are given the respect (such as attention to fault tolerance) that they deserve as systems.
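
For concreteness, here is about the smallest honest illustration of that philosophy I can write in C: a filter of my own invention that reads standard input, does exactly one thing (uppercasing), and writes standard output, leaving all composition to the shell.

    #include <ctype.h>
    #include <stdio.h>

    /* A deliberately tiny Unix-style filter: read stdin, do one thing
       (uppercase the text), write stdout. Composition happens in the
       shell, e.g.:  cat notes.txt | ./upper | sort | uniq -c         */
    int main(void) {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }

Nobody would ship that as a product; the point is the shape– small program, plain text in, plain text out, combined with other small programs through pipes– which is the inverse of the Giant Program.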

Are there successful exceptions to the Unix philosophy? Yes, there are, but they’re rare. One notable example is the database, because these systems often have very strong requirements (transactions, performance, concurrency, durability, fault-tolerance) that cannot be as easily solved with small programs and organic growth alone. Some degree of top-down orchestration is required if you’re going to have a viable database, because databases have a lot of requirements that aren’t typical business cruft, but are actually critically important. Postgres, probably the best SQL database out there, is not a simple beast. Indeed, databases violate one of the core tenets of the Unix philosophy– store data in plain text– and they do so for good reasons (storage efficiency). Databases also mandate that people be able to use them without having to keep up with the evolution of such a system’s opaque and highly-optimized internal details, which makes the separation of implementation from interface (something that object-oriented programming got right) a necessary virtue. Database connections, like file handles, should be objects (where “object” means “something that can be used with incomplete knowledge of its internals”). So databases, in some ways, violate the Unix philosophy, and yet are still used by staunch adherents. (We admit that we’re wrong sometimes.) I will also remark that it has taken decades for some extremely intelligent (and very well-compensated) people to get databases right. Big Projects win when no small project or loose federation thereof will do the job.
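
To show what “usable with incomplete knowledge of its internals” can look like even in plain C, here is a minimal sketch of an opaque handle. The names (db_conn, db_open, and so on) are hypothetical, invented for this example rather than taken from any real database’s API; in a real project the interface would live in a header and the struct definition in the .c file.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Interface: callers see only an opaque handle, never its fields. */
    typedef struct db_conn db_conn;

    db_conn *db_open(const char *uri);               /* NULL on failure */
    int      db_exec(db_conn *c, const char *sql);
    void     db_close(db_conn *c);

    /* Implementation: free to change without touching any caller. */
    struct db_conn {
        char *uri;
        int   queries_run;
    };

    db_conn *db_open(const char *uri) {
        db_conn *c = malloc(sizeof *c);
        if (!c) return NULL;
        size_t n = strlen(uri) + 1;
        c->uri = malloc(n);
        if (!c->uri) { free(c); return NULL; }
        memcpy(c->uri, uri, n);
        c->queries_run = 0;
        return c;
    }

    int db_exec(db_conn *c, const char *sql) {
        c->queries_run++;                            /* stand-in for real work */
        printf("[%s] would run: %s\n", c->uri, sql);
        return 0;
    }

    void db_close(db_conn *c) {
        if (!c) return;
        free(c->uri);
        free(c);
    }

    int main(void) {
        db_conn *c = db_open("db://localhost/test");
        if (!c) return 1;
        db_exec(c, "select 1");
        db_close(c);
        return 0;
    }

The caller in main() never touches queries_run or uri directly, which is the whole point: the implementation can change shape without breaking anyone– exactly the property a database (or a file handle) has to offer its users.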

My personal belief is that almost every software manager thinks he’s overseeing one of the exceptions: a Big System that (like Postgres) will grow to such importance that people will just swallow the complexity and use the thing, because it’s something that will one day be more important than Postgres or the Linux kernel. In almost all cases, they are wrong. Corporate software is an elephant graveyard of such over-ambitious systems. Exceptions to the Unix philosophy are extremely rare. Your ambitious corporate system is almost certainly not one of them. Furthermore, if most of your developers– or even a solid quarter of them– are commodity developers who can’t code outside of an IDE, you haven’t a chance.

We should pay people not to subordinate

In the very long term, technological society will need to implement a basic income, because full employment will become untenable. Basic income (BI) is an income paid to all people, with no conditions; Alaska already has a small one, derived from its oil wealth. Full employment will eventually be impossible due to the need for ongoing, intensive, and traditionally unpaid training.

Today, I’m not going to talk about basic income, because we’re probably a couple of decades away from society absolutely needing one, and even farther away from one being implemented, given the monumental political hurdles such an effort would encounter. Instead, I’m going to talk about right now– January 7, 2013– and something society ought to do in order to maintain our capacity to innovate and to prevent a pointless and extreme destruction of human capital.

Peter Thiel has created a program (“20 Under 20”) that pays high-potential young people to skip college, on the premise that college holds them back. In my opinion, the entry-level grunt work on which most people spend the first few years of their careers is much more damaging, especially given its indefinite duration. (I don’t think undergraduate college is that damaging at all, but that’s another debate.) There is some busywork in college, and there are a few (but very rare) incompetent professors, but more creativity is lost during the typical workplace’s years-long dues-paying period, which habituates people to subordination, than to any educational program. I do not intend to say that there aren’t problems with schools, but the institutions for which the schools prepare people are worse. At least grading in school is fair. A professor as corrupt and partial in grading as the typical corporate manager would be fired– and professors don’t get fired often.

In terms of expected value (that is, the average performance one would observe given an indefinite number of attempts), the market rewards creativity, which is insubordinate. However, when it comes to personal income, expectancy is completely meaningless, at least for us poors who need a month-to-month income to pay rent. Most people would rather have a guaranteed $100,000 per year than a 1-in-1000 shot (every year) at $500 million, with a 99.9% chance of no income– even though the latter deal has more expectancy in it ($500 million divided by 1000 is an expected $500,000 per year). Risk-adjusted, people of average means are rewarded for taking stable jobs, which often require subordination.

Technically speaking, people are paid for work, not subordination, but the process that exists to evaluate the work is so corrupt and rife with abuse that it devolves into a game that requires subordination. As a thought experiment, consider what would happen to a typical office worker who, without subversion or deception to hide her priorities, did the following:

  • worked on projects she considers most important, regardless of her manager’s priorities,
  • prioritized her long-term career growth over short-term assignments, and
  • expressed high-yield, creative ideas regardless of their political ramifications.

These activities are good for society, because she becomes better at her job, and they’re obviously good for her. They’re even good for her company. However, this course of action is likely to get her fired. Certainly, there’s enough risk of that to invalidate the major benefit of being an employee, which is stability.

So, in truth, society pays people to be subordinate, and that’s a real problem. In theory, capitalist society pays for valuable work, but the people trusted to evaluate the work inevitably become a source of corruption as they demand personal loyalty (which is rarely repaid in kind) rather than productivity itself. Worse, the long-term effect of subordination is creative atrophy. To quote Paul Graham, in “You Weren’t Meant to Have a Boss”:

If you’re not allowed to implement new ideas, you stop having them. And vice versa: when you can do whatever you want, you have more ideas about what to do. So working for yourself makes your brain more powerful in the same way a low-restriction exhaust system makes an engine more powerful.

I would take this even farther. I believe that, after a certain age and constellation of conditions, creativity can be lost effectively forever. People who keep their creativity up don’t lose it– and lifelong creative people seem to peak in their 50s or later, which should kill the notion that it’s a property of the young only– but people who fall into the typical corporate slog develop a mindset and conditioning that render them irreversibly dismal. It only seems to take a few years for this to happen. Protecting one’s creativity practically demands insubordination, making it almost impossible to win the corporate ladder and remain creative. This should explain quite clearly the lack of decent leadership our society exhibits.

We should offset this by finding a way to reward people for not subordinating. To make it clear, I’m not saying we should pay people not to work. In fact, that’s a terrible idea. Instead, we should find a repeatable, robust, and eventually universal way to reward people who work in non-subordinate, creative ways, thereby rewarding the skills that our society actually needs, instead of the mindless subordination that complacent corporations have come to expect. By doing this, we can forestall the silent but catastrophic loss that is the wholesale destruction of human creative capital.

No, idiot. Discomfort Is Bad.

Most corporate organizations have failed to adapt to the convexity of creative and technological work, a result of which is that the difference between excellence and mediocrity is much more meaningful than that between mediocrity and zero. An excellent worker might produce 10 times as much value as a mediocre one, instead of 1.2 times as much, as was the case in the previous industrial era. Companies, trapped in concave-era thinking, still obsess over “underperformers” (through annual witch hunts designed to root out the “slackers”) while ignoring the much greater danger, which is the risk of having no excellence. That’s much more deadly. For example, try to build a team of 50th- to 75th-percentile software engineers to solve a hard problem, and the team will fail. You don’t have any slackers or useless people– all would be perfectly productive people, given decent leadership– but you also don’t have anyone with the capability to lead, or to make architectural decisions. You’re screwed.

The systematic search-and-destroy attitude that many companies take toward “underperformers” exists for a number of reasons, but one is to create pervasive discomfort. Performance evaluation is a subjective, noisy, information-impoverished process, which means that good employees can get screwed just by being unlucky. The idea behind these systems is to make sure that no one feels safe. One in 10 people gets put through the kangaroo court of a “performance improvement plan” (which exists to justify termination without severance) and fired if he doesn’t get the hint. Four in 10 get below-average reviews that damage the relationship with the boss and make internal mobility next to impossible. Four more are tagged with the label of mediocrity, and, finally, one of the 10 gets a good review and a “performance-based” bonus… which is probably less than he feels he deserved, because he had to play mad politics to get it. Everyone’s unhappy, and no one is comfortable. That is, in fact, the point of such systems: to keep people in a state of discomfort.

The root idea here is that Comfort Is Bad. The idea is that if people feel comfortable at work, they’ll become complacent, but that if they’re intimidated just enough, they’ll become hard workers. In the short term, there’s some evidence that this sort of motivation works. People will stay at work for an additional two hours in order to avoid missing a deadline and having an annoying conversation the next day. In the long term, it fails. For example, open-plan offices, designed to use social discomfort to enhance productivity, actually reduce it by 66 percent. Hammer on someone’s adrenal system, and you get a response for a short while. After a certain point, you get a state of exhaustion and “flatness of affect”. The person doesn’t care anymore.

What’s the reason for this? I think that the phenomenon of learned helplessness is at play. One short-term reliable way to get an animal such as a human to do something is to inflict discomfort, and to have the discomfort go away if the desired work is performed. This is known as negative reinforcement: the removal of unpleasant circumstances in exchange for desired behavior. An example of this known to all programmers is the dreaded impromptu status check: the pointless unscheduled meeting in which a manager drops in, unannounced, and asks for an update on work progress, usually in the absence of an immediate need. Often, this isn’t malicious or intentionally annoying, but comes from a misunderstanding of how engineers work. Managers are used to email clients that can be checked 79 times per day with no degradation of performance, and tend to forget that humans are not this way. That said, the behavior is an extreme productivity-killer, as it costs about 90 minutes per status check. I’ve seen managers do this 2 to 4 times per day. The more shortfall in the schedule, the more grilling there is. The idea is to make the engineer work hard so there is progress to report and the manager goes away quickly. Get something done in the next 24 hours, or else. This might have that effect– for a few weeks. At some point, though, people realize that the discomfort won’t go away in the long term. In fact, it gets worse, because performing well leads to higher expectations, while a decline in productivity (or even a perceived decline) brings on more micromanagement. Then learned helplessness sets in, and the attitude of not giving a shit takes hold. This is why, in the long run, micromanagers can’t motivate shit to stink.

Software engineers are increasingly subjected to environments of discomfort and distraction. One of the worst trends in the software industry is the tendency toward cramped, open-plan offices where an engineer might have less than 50 square feet of personal space. This is sometimes attributed to cost savings, but I don’t buy it. Even in Midtown Manhattan, office space only costs about $100 per square foot per year; doubling an engineer’s 50 square feet would cost about $5,000 a year. That’s not cheap, but it’s not expensive enough (for software engineers) to justify the productivity-killing effect of the open-plan office.

Discomfort is a particular issue for software engineers, because our job is to solve problems. That’s what we do: we solve other people’s problems, and we solve our own. Our job, in large part, is to become better at our job. If a task is menial, we don’t suffer through it, nor do we complain about it or attempt to delegate it to someone else. We automate it away. We’re constantly trying to improve our productivity. Cramped workspaces, managerial status checks, and corrupt project-allocation machinery (as opposed to open allocation) all exist to lower the worker’s social status and create discomfort or, as douchebags prefer to call it, “hunger”. This is an intended effect, and because it’s in place on purpose, it’s also defended by powerful people. When engineers learn this, they realize that they’re confronted with a situation they cannot improve. It becomes a morale issue.

Transient discomfort motivates people to do things. If it’s cold, one puts on a coat. When discomfort recurs without fail, it stops having this effect. At some point, a person’s motivation collapses. What use is it to act to reduce discomfort if the people in charge of the environment will simply recalibrate it to make it uncomfortable again? None. So what motivates people in the long term? See: What Programmers Want. People need a genuine sense of accomplishment that comes from doing something well. That’s the durable, long-lasting motivation that keeps people working. Typically, the creative and technological accomplishments that revitalize a person and make long-term stamina possible will only occur in an environment of moderate comfort, in which ideas flow freely. I’m not saying that the office should become an opium den, and there are forms of comfort that are best left at home, but people need to feel secure and at ease with the environment– not like they’re in a war zone.

So why does the Discomfort Is Good regime live on? Much of it is just an antiquated managerial ideology that’s poorly suited to convex work. However, I think that another contributing factor is “manager time”. One might think, based on my writing, that I dislike managers. As individuals, many of them are fine. It’s what they have to do that I tend to dislike, but it’s not an enviable job. Managing has higher status but, in reality, is no more fun than being managed. Managers are swamped. With 15 reports, schedules full of meetings, and their own bosses to “manage up”, they are typically overburdened. Consequently, a manager can’t afford to dedicate more than about 1/20 of his working time to any one report. The result of this extreme concurrency (out of accord with how humans think) is that each worker is split into a storyline that only gets 5% of the manager’s time. So when a new hire, at 6 months, is asking for more interesting work or a quieter location, the manager’s perspective is that she “just got here”. Six months times 1/20 is 1.3 weeks. That’s manager time. This explains the insufferably slow progress most people experience in their corporate careers. Typical management expects 3 to 5 years of dues-paying (in manager time, the length of a college semester) before a person is “proven” enough to start asking for things. Most people, of course, aren’t willing to wait 5 years to get a decent working space or autonomy over the projects they take on.

A typical company sees its job as creating a Prevailing Discomfort so that a manager can play “Good Cop” and grant favors: projects with more career upside, work-from-home arrangements, and more productive working spaces. Immediate managers never fire people; the company does, “after careful review” of performance (in a “peer review” system wherein, for junior people, only managerial assessments are given credence). “Company policy” takes the Bad Cop role. Ten percent of employees must be fired each year because “it’s company policy”. No employee can transfer in the first 18 months because of “company policy”. (“No, your manager didn’t directly fuck you over. We have a policy of fucking over the least fortunate 10%, and your manager simply chose not to protect you.”) Removal of the discomfort is to be doled out (by managers) as a reward for high-quality work. However, for a manager to fight to get these favors for reports is exhausting, and managers understandably don’t want to do this for people “right away”. The result is that these favors are given out very slowly, and often taken back during “belt-tightening” episodes, which means that the promised liberation from these annoying discomforts never really comes.

One of the more amusing things about the Discomfort Is Good regime is that it actually encourages the sorts of behaviors it’s supposed to curtail. Mean-spirited performance review systems don’t improve low performers; they create them by turning the unlucky into an immobile untouchable class with an axe to grind, and open-plan offices allow the morale toxicity of disengaged employees to spread at a rapid rate. Actually, my experience has been that workplace slacking is more common in open-plan offices. Why? After six months in open-plan office environments, people learn the tricks that allow them to appear productive while focusing on things other than work. Because such environments are exhausting, these are necessary survival adaptations, especially for people who want to be productive before or after work. In a decent office environment, a person who needed a 20-minute “power nap” could take one. In the open-plan regime, the alternative is a two-hour “zone out” that’s not half as effective.

The Discomfort Is Good regime is as entrenched in many technology startups as in large corporations, because it emerges out of a prevailing, but wrong, attitude among the managerial caste (from which most VC-istan startup founders, on account of the need for certain connections, have come). One of the first things that douchebags learn in Douchebag School is to make their subordinates “hungry”. It’s disgustingly humorous to watch them work to inflict discomfort on others– it’s transparent what they are trying to do, if one knows the signs– and be repaid by the delivery of substandard work product. Corporate America, at least in its current incarnation, is clearly in decline. Watching the decay sometimes raises a chuckle, but I thought I would relish it more than I do. I expected pyrotechnics and theatrical collapses, and that’s clearly not the way this system is going to go. This one won’t go out with an explosive bang, but with the high-pitched hum of irritation and discomfort.

Why I wiped my LinkedIn profile

I wiped my LinkedIn profile recently. It now says:

I don’t reveal history without a reason, so my past jobs summary is blank.

I’m a New York-based software engineer who specializes in functional programming, machine learning, and language design.

This might not be the best move for my career. I’m mulling over whether I should delete the profile outright, rather than leaving a short note that appears cagey. I have a valid point– it really isn’t the rest of the world’s business what companies I have worked for– but I’m taking an unusual position that leaves me looking like a “tinfoiler”. I’m honestly not one, but I do believe in personal privacy. Privacy’s value is insurance against low-probability, high-impact harms. I don’t consider it likely that I’ll ever damage myself by publicly airing past employment history. It’s actually very unlikely. But why take the chance? I am old enough to know that not all people in the world are good, and that fact requires caution in the sharing of information, no matter how innocuous it might seem.

Consistency risk

My personal belief is that more people will damage their careers through respectable avenues such as LinkedIn than on Facebook, the more classic “digital dirt” culprit. For most jobs, no one is going to care what a now-35-year-old software engineer said when he was 19 about getting drunk. Breaking news: all adults were teenagers, and teenagers are sometimes stupid! On the other hand, people can be burned by inconsistencies between two accounts of their career histories. Let’s say that someone’s CV says “March 2003 – February 2009” while his LinkedIn profile says “March 2003 – November 2008”. Uh-oh. HR catches this discrepancy, flags it, and brings the candidate in for a follow-on interview, where the candidate discloses that he was on severance (and technically employed, but with no responsibilities) for 3 months. There was no lie. It was a benign difference of accounting. Still, the candidate has now disclosed receipt of a severance payment. There’s a story there. Whoops. In a superficial world, that could mean losing the job offer.

This isn’t a made-up story. The dates were different, but I know someone who ended up having to disclose a termination because of an inconsistency of this kind. (LinkedIn, in the case of which I’m aware, wasn’t the culprit.) So consistency risk is real.

Because the white-collar corporate world has so little in the way of actual ethics, the appearance of being ethical is extremely important. Even minor inconsistencies invite a kind of scrutiny that no one wishes to tolerate. I find the career oversharing that a lot of young people engage in quite dangerous. Not everything that can damage a person’s reputation is a drunk picture. Most threats and mistakes are more subtle than that, and consistency risk is a big deal.

Replicating a broken system

My ideological issue with LinkedIn, however, isn’t the risk involved. I’ll readily concede that those risks are very mild for the vast majority of people. The benefits of using such a service quite possibly outweigh them. The bigger problem I have with it is that it exists to replicate broken ways of doing things.

In 2013, the employment market is extremely inefficient in almost all domains, whether we’re talking about full-time jobs, consulting gigs, or startup funding. It’s a system so broken that no one trusts it, and when people distrust front-door channels or find them clogged and unusable, they retreat to back-door elitism and nepotism. Too much trust is given to word-of-mouth references (that are slow to travel, unreliable, and often an artifact of a legal settlement) and low-quality signals such as educational degrees, prestige of prior employers, and durations of employment. Local influences have a pernicious effect, the result of which is unaffordable real estate in virtually any location where a career can be built. Highly-qualified people struggle to find jobs– especially their first engagements– while companies complain of a dearth of appropriate talent. They’re both right, in a way. This is a matching problem related to the “curse of dimensionality”. We have a broken system that no one seems to know how to fix.

LinkedIn, at least in this incarnation, is an online implementation of the old-style, inefficient way of doing things. If you want an impressive profile, you have to troll for recommendations and endorsements, trade them, and, if you’ve had a bad separation, use the legal system to demand them in a settlement. You list the companies where you worked, job titles, and dates of employment, even if you honestly fucking hate some of those companies. We’ve used the Internet to give wings to an antiquated set of mechanics for evaluating other people, when we should be trying to do something better.

None of this is intended as a slight against LinkedIn itself. It’s a good product, and I’m sure they’re a great company. I just have an ideological dislike– and I realize that I hold a minority opinion– for the archaic and inefficient way we match people to jobs. It doesn’t even work anymore, seeing as most resumes are read for a few seconds then discarded.

Resumes are broken in an especially irritating way, because they often require people to retain a lasting association with an organization that may have behaved in a tasteless way. I have, most would say, a “good” resume. It’s better than what 98 percent of people my age have: reputable companies, increasing scope of responsibility. Yet, it’s a document through which I associate my name with a variety of organizations. Some of these I like, and some I despise. There is one that I would prefer the world never know I was associated with. Of course, if I’m asked, “Tell me about your experience at <X>” in a job interview, for certain execrable values of X, social protocol forbids me from telling the truth.

I’ll play by the rules when I’m job searching. I’ll send a resume, because it’s part of the process. Currently, however, I’m not searching. This leaves me with little interest in building an online “brand” in a regime vested in the old, archaic protocols. Trolling for endorsements, in my free time, when I’m employed? Are you kidding me?

The legitimacy problem

Why do I so hate these “old, archaic protocols”? It’s not that I have a problem with them personally. I have a good resume, strong accomplishments for someone of my age, and I can easily get solid recommendations. I have no personal gripe here. What bothers me is something else, something philosophical that doesn’t anger a person until she thinks of it in the right way. It’s this: any current matching system between employers and employees has to answer questions regarding legitimacy, and the existing one gets some core bits seriously wrong.

What are the most important features of a person’s resume? For this exercise, let’s assume that we’re talking about a typical white-collar office worker, at least 5 years out of school. Then I would say that “work experience” trumps education, even if that person has a Harvard Ph.D. What constitutes “work experience”? There’s some degree of “buzzword compliance”, but that factor I’m willing to treat as noise. Sometimes, that aspect will go in a candidate’s favor, and sometimes it won’t, but I don’t see it conferring a systemic advantage. I’m also going to say that workplace accomplishments mean very little. Why? Because an unverifiable line on a resume (“built awesome top-secret system you’ve never heard of”) is going to be assumed, by most evaluators, to be inflated and possibly dishonest. So the only bits of a resume that will be taken seriously are the objectively verifiable ones. This leaves:

  • Company prestige. That’s the big one, but it’s also ridiculously meaningless, because prestigious companies hire idiots all the time. 
  • Job titles. This is the trusted metric of professional accomplishment. If you weren’t promoted for it, it didn’t happen.
  • Length of tenure. This one’s nonlinear, because short tenures are embarrassing, but long stints without promotions are equally bad.
  • Gaps in employment. Related to the above, large gaps in job history make a candidate unattractive.
  • Salary history, if a person is stupid enough to reveal it.
  • Recommendations, preferably from management.

There are other things that matter, such as overlap between stated skills and what a particular company needs, but when it comes to “grading” people, look no farther than the above. Those factors determine where a person’s social status starts in the negotiation. Social status isn’t, of course, the only thing that companies care about in hiring… but it’s always advantageous to have it in one’s favor.

What’s disgusting and wrong about this regime is that all of these accolades come from a morally illegitimate source: corporate management. That’s where job titles, for example, come from. They come from a caste of high priests called “managers” who are anointed by a higher caste called “executives” who derive their legitimacy from a pseudo-democracy of shareholders who (while their financial needs and rights deserve respect) honestly haven’t a clue how to run a company. Now, I wouldn’t advise people to let most corporate executives near their kids, because I’ve known enough in my life to know that most of them aren’t good people. So why are we assigning legitimacy to evaluations coming from such an unreliable and often corrupt source? It makes no sense. It’s a slave mentality.

First scratch at a solution

I don’t think resumes scale. They provide low-signal data, and that fails us in a world where there are just so many of the damn things around that a sub-1% acceptance rate is inevitable. I’m not faulting companies for discarding most resumes that they get. What else would they be expected to do? Most resumes come from unqualified candidates who bulk-mail them. Now that it’s free to send a resume anywhere in the world, a lot of people (and recruiters) spam, and that clogs the channels for everyone. The truth, I think, is that we need to do away with resumes– at least in their current form– altogether.

That’s essentially what has happened in New York and Silicon Valley. You don’t look for jobs by sending cold resumes. You can try it, but it’s usually ineffective, even if you’re one of those “rock star” engineers who is always in demand. Instead, you go to meetups and conferences and meet people in-person. That approach works well, and it’s really the only reliable way to get leads. This is less of an option for someone in Anchorage or Tbilisi, however. What we should be trying to do with technology is to build these “post-resume” search avenues on the Internet– not the same old shit that doesn’t work.

So, all of this said, what are resumes good for? I’ve come to the conclusion that there is one very strong purpose for resumes, and one that justifies not discarding the concept altogether. A resume is a list of things one is willing to be asked about in the context of a job interview. If you put Scala on your resume, you’re making it clear that you’re confident enough in your knowledge of that language to take questions about it, and possibly lose a job offer if you actually don’t know anything about it. I think the “Ask me about <X>” feature of resumes is probably the single saving grace of this otherwise uninformative piece of paper.

If I were to make a naive first scratch at solving this problem, here’s how I’d “futurize” the resume. Companies, titles, and dates all become irrelevant. Leave that clutter off. Likewise, I’d ask that companies drop the requirement nonsense where they put 5 years of experience in a 3-year-old technology as a “must have” bullet point. Since requirement sprawl is “free”, it happens, and few people actually meet a sufficiently long requirement set to the letter, so long lists end up selecting against the people who actually read them. Instead, here’s the lightweight solution: allocate 20 points. (The reason for the number 20 is to impose granularity; fractional points are not allowed.) For example, an engineering candidate might put herself forward like so:

  • Machine learning: 6
  • Functional programming: 5
  • Clojure: 3
  • Project management: 3
  • R: 2
  • Python: 1

These points might seem “meaningless”, because there’s no natural unit for them, but they’re not. What they show, clearly, is that a candidate has a clear interest (and is willing to be grilled for knowledge) in machine learning and functional programming, moderate experience in project management and with Clojure, and a little bit of experience in Python and R. There’s a lot of information there, as long as the allocation of points is done in good faith; if it isn’t, that person won’t pass many interviews. Job requirements would be published in the same way: assign importance to the things according to how much they really matter, and keep the total at 20 points.

Since the points have different meanings on each side– for the employee, they represent fractions of experience; for the company, they represent relative importance– a person who self-assigns 5 points in a technology isn’t ineligible for a job posting that places an importance of 6 on it. Rather, the closeness of the numbers indicates a rough match in how much weight each party assigns to that competency. This data could be mined to match employees to job listings for initial interviews and, quite likely, this approach (while imperfect) would perform better than the existing resume-driven regime. What used to involve overwhelmed gatekeepers is now a “simple matter” of unsupervised learning.
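To make the idea concrete, here is a minimal sketch of one possible matching rule. Everything in it is an illustrative assumption on my part: the skill names, the example numbers, and the scoring rule (the sum, per skill, of the smaller of the two allocations), which is just the simplest overlap measure that comes to mind. A real system would do something smarter with the same data.

    # A minimal sketch of matching a candidate's 20-point skill allocation
    # against a posting's 20-point importance allocation. The scoring rule
    # (sum of per-skill minimums) and all of the data below are illustrative
    # assumptions, not a specification.

    def match_score(candidate: dict, posting: dict) -> int:
        """Sum, over all skills, the smaller of the two point allocations.
        With both sides allocating 20 points, the score runs from 0 (no
        overlap) to 20 (identical allocations)."""
        skills = set(candidate) | set(posting)
        return sum(min(candidate.get(s, 0), posting.get(s, 0)) for s in skills)

    candidate = {
        "machine learning": 6, "functional programming": 5, "clojure": 3,
        "project management": 3, "r": 2, "python": 1,
    }
    posting = {
        "machine learning": 8, "python": 6, "functional programming": 4,
        "project management": 2,
    }

    print(match_score(candidate, posting))  # 6 + 4 + 2 + 1 = 13 out of 20

Ranking postings (or candidates) by a score like this, or clustering the 20-point vectors, is the sort of unsupervised matching the paragraph above has in mind.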

There is, of course, an obvious problem with this, which is that some people have more industry experience and “deserve” more points. An out-of-college candidate might only deserve 10 points, while a seasoned veteran should get 40 or 50. I’ll admit that I haven’t come up with a good solution for that. It’s a hard problem, because (a) one wants to avoid ageism, while (b) the objective here is sparseness in presentation, and I can’t think of a quick solution that doesn’t clutter the process up with distracting details. What I will concede is that, while some people clearly deserve more points than others do, there’s no fair way to perform that evaluation at an individual level. The job market is a distributed system with numerous adversarial agents, and any attempt to impose a global social status over it will fail, both practically and morally speaking.

Indeed, if there’s something that I find specifically despicable about the current resume-and-referral-driven job search culture, it’s in the attempt to create a global social status when there’s absolutely no good reason for one to exist.

Fourth quadrant work

I’ve written a lot about open allocation, so I think it’s obvious where I stand on the issue. One of the questions that is always brought up in that discussion is: so who answers the phones? The implicit assumption, with which I don’t agree, is that there are certain categories of work that simply will not be performed unless people are coerced into doing it. To counter this, I’m going to answer the question directly. Who does the unpleasant work in an open-allocation company? What characterizes the work that doesn’t get done under open allocation?

First, define “unpleasant”. 

Most people in most jobs dislike going to work, but it’s not clear to me how much of that is an issue of fit as opposed to objectively unpleasant work. The problem comes from two sources. First, companies often determine their project load based on “requirements” whose importance is assessed according to the social status of the person proposing them rather than any reasonable notion of business, aesthetic, or technological value, and that generates a lot of low-yield busywork that people prefer to avoid because it’s not very important. Second, companies and hiring managers tend to be ill-equipped to match people to their specialties, especially in technology. Hence, you have machine learning experts working on payroll systems. It’s not clear to me, however, that there’s this massive battery of objectively undesirable work on which companies rely. There’s probably someone who’d gladly take on a payroll-system project as an excuse to learn Python.

Additionally, most of what makes work unpleasant isn’t the work itself but the subordinate context: nonsensical requirements, lack of choice in one’s tools, and unfair evaluation systems. This is probably the most important insight that a manager should have about work: most people genuinely want to work. They don’t need to be coerced, and coercing them will only erode their intrinsic motivation in the long run. In that light, open allocation’s mission is to remove the command system that turns work that would otherwise be fulfilling into drudgery. Thus, even if we accept that any company will generate some quantity of unpleasant work, the amount of it is likely to decrease under open allocation, especially as people are freed to find work that fits their interests and specialties. What’s left is work that no one wants to do: a much smaller share of the workload. In most companies, there isn’t much of that work to go around, and it can almost always be automated.

The Four Quadrants

We define work as interesting if there are people who would enjoy doing it or find it fulfilling– some people like answering phones– and unpleasant if it’s drudgery that no one wants to do. We call work essential if it’s critical to a main function of the business– money is lost in large amounts if it’s not completed, or not done well– and discretionary if it’s less important. Exploratory work and support work tend to fall into the “discretionary” set. These two variables split work into four quadrants:

  • First Quadrant: Interesting and essential. This is work that is intellectually challenging, reputable in the job market, and important to the company’s success. Example: the machine learning “secret sauce” that powers Netflix’s recommendations or Google’s web search.
  • Second Quadrant: Unpleasant but essential. These tasks are often called “hero projects”. Few people enjoy doing them, but they’re critical to the company’s success. Example: maintaining or refactoring a badly-written legacy module on which the firm depends.
  • Third Quadrant: Interesting but discretionary. This type of work might become essential to the company in the future, but for now, it’s not in the company’s critical path. Third Quadrant work is important for the long-term creative health of the company and morale, but the company’s fate has not been (and should not be) bet on it. Example: robotics research in a consumer web company.
  • Fourth Quadrant: Unpleasant and discretionary. This work isn’t especially desirable, nor is it important to the company. This is toxic sludge to be avoided if possible, because in addition to being unpleasant to perform, it doesn’t look good in a person’s promotion packet. This is the slop work that managers delegate out of a false perception of a pet project’s importance. Example: at least 80 percent of what software engineers are assigned at their day jobs.
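As a quick illustration of the two-variable classification above, here is a tiny sketch; the type names and example items are my own hypothetical ones, not drawn from any particular company:

    # A tiny sketch of the interesting/essential classification above.
    # The type names and example items are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        name: str
        interesting: bool  # would someone enjoy it or find it fulfilling?
        essential: bool    # does the business suffer materially if it isn't done well?

    def quadrant(item: WorkItem) -> int:
        if item.interesting and item.essential:
            return 1  # e.g. the recommendation "secret sauce"
        if item.essential:
            return 2  # e.g. maintaining the legacy module the firm depends on
        if item.interesting:
            return 3  # e.g. robotics research at a consumer web company
        return 4      # e.g. slop work spun off from someone's pet project

    print(quadrant(WorkItem("legacy refactor", interesting=False, essential=True)))  # 2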

The mediocrity that besets large companies over time is a direct consequence of the Fourth Quadrant work that closed allocation generates. When employees’ projects are assigned, without appeal, by managers, the most reliable mechanism for project-value discovery– whether capable workers are willing to entwine their careers with it– is shut down. The result, under closed allocation, is that management never learns which projects the employees consider important, and therefore won’t even know what the Fourth Quadrant work is. Can they recover this “market information” by asking their reports? I would say no. If the employees have learned (possibly the hard way) how to survive a subordinate role, they won’t voice the opinion that their assigned project is a dead end, even if they know it to be true.

Closed allocation simply lacks the garbage-collection mechanism that companies need in order to clear away useless projects. Perversely, companies are much more comfortable with cutting people than projects. On the latter, they tend to be “write-only”, removing projects only years after they’ve failed. Most of the time, when companies perform layoffs, they do so without reducing the project load, expecting the survivors to put up with an increased workload. This isn’t sustainable, and the result often is that, instead of reducing scope, the company starts to underperform in an unplanned way: you get necrosis instead of apoptosis.

So what happens in each quadrant under open allocation? First Quadrant work gets done, and done well. That’s never an issue in any company, because there’s no shortage of good people who want to do it. Third Quadrant work also gets enough attention, because people enjoy doing it. As for Second Quadrant work, that also gets done, but management often finds that it has to pay for it, in bonuses, title upgrades, or pay raises. Structuring such rewards is a delicate art, since promotions should represent respect but not confer power that might undermine open allocation. However, I believe it can be done. I think the best solution is to have promotions and a “ladder”, but for its main purpose to be informing decisions about pay, not creating power relationships that make no sense.

So, First and Third Quadrant work are not a problem under open allocation. That stuff is desirable and allocates itself. Second Quadrant work gets done, and done well, but it’s expensive. Is this so bad, though? The purpose of these rewards is to compensate people for freely choosing work that would otherwise be averse to their interests and careers. That seems quite fair to me. Isn’t that how we justify CEO compensation? They do risky work, assume lots of responsibilities other people don’t want, and are rewarded for it? At least, that’s the story. Still, a “weakness” of open allocation is that it requires management to pay for work that it could get “for free” in a more coercive system. The counterpoint is that coerced workers are generally not going to perform as well as people with more pleasant motivations. If the work is truly Second Quadrant, it’s worth every damn penny to have it done well.

Thus, I think it’s a fair claim that open allocation wins in the First, Second, and Third Quadrant. What about the Fourth? Well, under open allocation, that stuff doesn’t get done. The company won’t pay for it, and no one is going to volunteer to do it, so it doesn’t happen. The question is: is that a problem?

I won’t argue that Fourth Quadrant work doesn’t have some value, because from the perspective of the business, it does. Fixing bugs in a dying legacy module might make its demise a bit slower. However, I would say that the value of most Fourth Quadrant work is low, and much of it is negative in value on account of the complexity that it imposes, in the same way that half the stuff in a typical apartment is of negative value. Where does it come from, and why does it exist? The source of Fourth Quadrant work is usually a project that begins as a Third Quadrant “pet project”. It’s not critical to the business’s success, but someone influential wants to do it and decides that it’s important. Later on, he manages to get “head count” for it: people who will be assigned to complete the less glamorous work that this pet project generates as it scales; or, in other words, people whose time is being traded, effectively, as a political token. If the project never becomes essential but its owner is active enough in defending it to keep it from ever being killed, it will continue to generate Fourth Quadrant work. That’s where most of this stuff comes from. So what is it used for? Often, companies allocate Third Quadrant work to interns and Fourth Quadrant work to new hires, not wanting to “risk” essential work on new people. The purpose is evaluative: to see if this person is a “team player” by watching his behavior on relatively unimportant, but unattractive, work. It’s the “dues paying” period and it’s horrible, because a bad review can render a year or two of a person’s working life completely wasted.

Under open allocation, the Fourth Quadrant work goes away. No one does any. I think that’s a good thing, because it doesn’t serve much of a purpose. People should be diving into relevant and interesting work as soon as they’re qualified for it. If someone’s not ready to be working on First and Second Quadrant (i.e. essential) work, then have that person in the Third Quadrant until she learns the ropes.

Closed-allocation companies need the Fourth Quadrant work because they hire people but don’t trust them. The ideology of open allocation is: we hired you, so we trust you to do your best to deliver useful work. That doesn’t mean that employees are given unlimited expense accounts on the first day, but it means that they’re trusted with their own time. By contrast, the ideology of closed allocation is: just because we’re paying you doesn’t mean we trust, like, or respect you; you’re not a real member of the team until we say you are. This brings us to the real “original sin” at the heart of closed allocation: the duplicitous tendency of growing-too-fast software companies to hire before they trust.

Careerism breeds mediocrity

A common gripe of ambitious people is the oppressive culture of mediocrity that almost everyone experiences at work: boring tasks, low standards, risk aversion, no appetite for excellence, and little chance to advance. The question is often asked: where does all this mediocrity come from? Obviously, there are organizational forces– risk-aversion, subordination, seniority– that give it an advantage, but what might be an individual-level root cause that brings it into existence in the first place? What makes people preternaturally tolerant of mediocrity, to such a degree that large organizations converge to it? Is it just that “most people are mediocre”? Certainly, anyone can become complacent and mediocre, given sufficient reward and comfort, but I don’t think it’s a natural tendency of humans. In fact, I believe it leaves us dissatisfied and, over the long run, disgusted with working life.

Something I’ve learned over the years about the difference between mediocrity and excellence is that the former is focused on “being” and what one is, while the latter is about doing and what one creates or provides. Mediocrity wants to be attractive or important or socially well-connected. Excellence wants to create something attractive or perform an important job. Mediocrity wants to “be smart” and for everyone to know it. Excellence wants to do smart things. Mediocrity wants to be well-liked. Excellence wants to create things worth liking. Mediocrity wants to be one of the great writers. Excellence wants to write great works. People who want to hold positions, acquire esteem, and position their asses in specific comfortable chairs tend to be mediocre, risk-averse, and generally useless. The ones who excel are those who go out with the direct goal of achieving something.

The mediocrity I’ve described above is the essence of careerism: acquiring credibility, grabbing titles, and taking credit. What’s dangerous about this brand of mediocrity is that, in many ways, it looks like excellence. It is ambitious, just toward different ends. Like junk internet, it feels like real work is getting done. In fact, this variety of mediocrity is not only socially acceptable but drilled into children from a young age. It’s not “save lives and heal the sick” that many hear growing up, but “become a doctor”.

This leads naturally to an entitlement mentality, for what is a title but a privilege of being? Viscount isn’t something you do. It’s something you are, either by birth or by favor. Upper-tier corporate titles are similar, except that “by favor” is the common route, because the system must at least look like a meritocracy when, in truth, the proteges and winners have been picked at birth.

Corporations tend to be risk-averse and pathological, to such a degree that opportunities to excel are rare, and therefore become desirable. Thus, they’re allocated as a political favor. To whom? To people who are well-liked and have the finest titles. To do something great in a corporate context– to even have the permission to use your own time in such a pursuit– one first has to be something: well-titled, senior, “credible”. You can’t just roll up your sleeves and do something useful and important, lest you be chastised for taking time away from your assigned work. It’s ridiculous! Is it any wonder that our society has such a pervasive mentality of entitlement? When being something must occur before doing anything, there is no other way for people to react.

As I get older, I’m increasingly negative on the whole concept of careerism, because it makes being reputable (demonstrated through job titles) a prerequisite for doing something useful, and its priorities thereby generate a culture of entitled mediocrity. What looks like ambition is actually a thin veneer over degenerate, worthless social climbing. Once people are steeped in this culture for long enough, they’re too far gone and real ambition has been drained from them forever.

This, I think, is Corporate America’s downfall. In this emasculated society, almost no one wants to do any real work– or to let anyone else do real work– because that’s not what gets rewarded, and to do anything that’s actually useful, one has to be something (in the eyes of others) first. This means that the doers who remain tend to be the ones who are willing to invest years in the soul-sucking social climbing and campaigning required to get there, and the macroscopic result of this is adverse selection in organizational leadership. Over time, this leaves organizations unable to adapt or thrive, but it takes decades for that process to run its course.

What’s the way out? Open allocation. In a closed-allocation dinosaur company, vicious political fights ensue about who gets to be “on” desirable projects. People lose sight of what they can actually do for the company, distracted by the perpetual cold war surrounding who gets to be on what team. You don’t have this nonsense in an open-allocation company. You just have people getting together to get something important done. The way out is to remove the matrix of entitlement, decisively and radically. That, and probably that alone, will avert the otherwise inevitable culture of mediocrity that characterizes most companies.

Never sign a PIP. Here’s why.

I alluded to this topic in Friday’s post, and now I’ll address it directly. This search query comes to my blog fairly often: “should I sign a PIP?” The answer is no.

Why? Chances are, performance doesn’t mean what you think it does. Most people think of “performance improvement” as something well-intended, because they take performance to mean “how good I am at my job”. Well, who doesn’t want to get better at his or her job? Even the laziest people would be able to get away with more laziness if they were more competent. Who wouldn’t want to level up? Indeed, that’s how these plans are presented: as structures to help someone improve professional capability. However, that’s not what “performance” means in the context of an employment contract. When a contract exists, non-performance is falling short of an agreed-upon provision of the contract. It doesn’t mean that the contract was fulfilled but in a mediocre way. It means that the contract was breached.

So when someone signs a PIP, he might think he’s agreeing that, “Yeah, I could do a few things better.” That’s not what he’s actually saying, at least not to the courts. He’s agreeing to be identified as a non-performing– again, in the legal sense of the word– employee, in the same category as one who doesn’t show up or who breaks fundamental ethical guidelines. Signing a PIP isn’t an admission that one could have been better at one’s job, but that one wasn’t doing one’s job. Since white-collar work is subjective and job descriptions are often ill-defined, the binary question of professional and contractual performance is difficult to assess in the first place, and this sort of admission is gold for an employer looking to fire someone without paying severance. Without such a signature, the employer will have a hell of a time proving contractual non-performance (which is not strictly required in order to fire someone, but makes the employer’s case stronger).

Managers often claim that signing such paperwork only constitutes acknowledgment that one has read it, not agreement with the assessment. Even if true, it’s still a bad idea to sign it. This is now an adversarial relationship, which means that whatever makes the manager’s work (in the firing process) easier makes your life worse. Verbally, you should say “I agree to perform the duties requested of me, and to make the changes indicated, to the best of my ability, but there are factual inaccuracies and I cannot sign this without speaking to an attorney.” If you are pressed to sign, or threatened with termination, then you may sign it, but include the words “under duress”. (The shorthand “u.d.” will not suffice.) What this means is that, since you were threatened with summary termination, you were not free to decline the PIP, and therefore your signature is meaningless.

Whether you sign the PIP or not, you will probably be fired in time, unless you get another job before the process gets to that point. Not signing it doesn’t make it impossible for them to fire you. It only makes it somewhat harder. So why is it harmful to sign it? You want two things. First, you want time. The longer you have to look for a new job while still employed by the old company, the better. If your manager sees you as a risk of a messy termination, he’s more likely to let you leave on your own terms, because that generates minimal work for him. PIPs are humiliating and appear to be an assertion of power, but they’re an exhausting slog for a manager. Second, you want severance if you don’t find a job in time and do get fired. Severance should never be your goal– non-executives don’t get large severances, so it’s generally better for your career and life to get another job– but you shouldn’t give that away for free.

There isn’t any upside to signing the PIP because, once one is presented, the decision to fire has already been made. A manager who genuinely wants to improve a person’s performance will communicate off the record. Once the manager is “documenting”, the relationship’s over. Also, people very rarely pass PIPs. Some people get the hint and leave, others fail and are fired, and in the remainder of cases, the PIP is ruled “inconclusive”, which means “not enough evidence to terminate at this time”. That’s not exactly an endorsement. For a PIP to be passed would require HR to side with the employee over management, and that will never happen. If the employee is under the same manager in 6 months, there will be another PIP. If the employee tries to move to another team, that will be almost impossible, because a “passed” PIP doesn’t mean exoneration. The reputation for instability created by a PIP lingers on for years. What I am essentially saying here is that, once a PIP appears, you should not sign it for the sake of maintaining a professional relationship. There is no relationship at that point.

Signing the PIP means you don’t know how to play the termination endgame. It means that you have no idea what’s happening to you, and you can be taken advantage of.

This said, there’s a way of not signing it. If you appear to be declining to sign out of bitterness or anger, that doesn’t work in your favor. Then you come off as childish. Childish people are easy to get rid of: just put them in increasingly ridiculous circumstances until they lose their cool and fire themselves by doing something stupid, like throwing a stapler at someone. The image you want to project is of a confident, capable adult– one who will not sign the PIP, because he knows it’s not to his advantage to do so, and who knows his rights under the law. This makes you intimidating. You don’t want to frighten adversaries like this– a frightened enemy is dangerously unpredictable– but you do want to intimidate them, so they get out of your way.

There’s a lot more to say about PIPs, which are dishonestly named processes since their real purpose is to create a paper trail to justify firing someone. That I’ll cover in a future post. For now, I think I’ve covered the most important PIP question. Don’t sign the fucking thing. Be professional (that’s more intimidating than being bitter) but decline to sign and, as fast as you can, get another job. If you see a PIP, moving on is your job.