Leadership: everyone is a main character

I tried to be a creative writer when I was younger. I was not very good at it. On the technical aspects of writing, I was solid. My problem was that I didn’t understand people well enough to write a convincing character. Social adeptness isn’t a prerequisite for being a strong writer, but a basic understanding of how most people think is necessary, and I didn’t have that. One thing I remember, and I think it was my mother who first said to me: every character thinks of him- or herself as a main character. That’s what makes Rosencrantz and Guildenstern Are Dead (the play that begins by ribbing every Bayesian in the audience) great. In Hamlet, these guys are a sideshow, and can be disregarded as such (they’re symbolic of the prince’s sunnier past, now betrayers and otherwise irrelevant). From their perspective, they aren’t. Tom Stoppard’s play reminds us that what’s remembered as one story, told in truth by its main protagonist, is actually one of a thousand possible, related but very different, tales.

I’ve worked in an ungodly number of companies for someone my age, and I’ve seen leadership done well and not-well, but only in the past 24 months have I really started to understand it. Why can some people lead, and others not? Why was I once so bad at leading other people? I think the core insight is the above. To lead people, you have to understand them and what they want, which usually requires recognition of the fact that they don’t see themselves as followers innately. They’ll follow, if they can lead in other ways.

I don’t intend for this to devolve into an “everyone is special” claim. That’s bullshit. I could say, “everyone is special because every point in a high-dimensional space sees itself as an outlier”, but that would be beside the point. I’m not talking about capability or importance to any specific organization, but perspective only. I think most people are realistic about their importance to the various organizations they inhabit, which is often very low if those organizations are big. However, few people see themselves as supporting actors in life itself.

The problem that a lot of “visionaries” have is that they lose sight of this entirely. They expect others to subsume their lives into an ambitious idea, as they have done themselves, while failing to understand that others aren’t going to commit with the same dedication unless they can take some ownership of the idea. The visionaries tend to cast themselves as the main players– the saviors, architects, deal-makers and warriors– while most of the others, within that organization, are supporting cast. Since those others will never truly see themselves as subordinates, there will always be a disconnect that keeps them from willingly putting their entire being into the vision or organization. They’ll subordinate a little bit of work, if they get paid for it. They won’t subordinate themselves. Not for real.

This bias– toward viewing others as supporting actors– is dangerous. The problem isn’t only that it can lead one to mistreat other people. Worse than that, it can lead to systematically underestimating them, being underprepared for them to pursue their own agendas, and feeling a sense of betrayal when that happens. The worst thing about “visionaries”, in my view, isn’t how they treat their supporters. Some are good in that way, and some are not, but only a couple that I’ve known were reliably indecent. The problem, instead, is that they tend to be easy prey for sycophantic subordinates who often have bad intentions. They see the “golden child” as a docile, eager protégé or a lesser version of themselves, when often that person is an exploitative sociopath who knows how to manipulate narcissists. Visionaries tend to see their organizations as benevolent dictatorships with “no politics” (it’s never politics to them because they’re in charge) while their organizations are often riddled with young-wolf conflicts that they’ve inadvertently encouraged.

I don’t know what the right answer is here. The word “empathy” comes to mind, but that’s not what I mean. Reading emotions is one skill, but sometimes I wonder if it’s even possible to really “get” (as in understand) another person. I don’t believe that, as humans, we really have the tools for that. Alternatively, people often talk about putting themselves “in someone else’s shoes” (I could do a Carlin-esque rant about the colloquialism, but I’ll skip it) but that is extremely difficult to do, because most people, when they attempt to do so, are heavily influenced by their own biases about that person.

Likewise, I can’t claim in all honesty that any of this is my strong suit. Technical, intellectual, and cultural leadership are a 10/10 fit with my skill set and talents. On the other hand, leadership of people is still a major area for development. The somewhat cumbersome store of knowledge (and, I make no pretenses here, I do not claim the skills necessary to apply it) that I have on the topic is not from preternatural excellence (trust me, I don’t have such) but from watching others fail so badly at it. I owe everything I know on the topic to those who, unlike me, were promoted before they outgrew their incompetence. So there are many questions here where I don’t know the answers, and some where I don’t even know the right questions, but I think I know where to begin, and that goes back to what every novelist must keep in mind: everyone is, from his or her own perspective, a main character.

“Job hopping” is often fast learning and shouldn’t be stigmatized

One of the traits of the business world that I’d love to see die, even if it were to take people with it, is the stigma against “job hopping”. I don’t see how people can fail to see that this is oppression, plain and simple. The stigma exists to deny employees the one bit of leverage they have, which is to leave a job and get a better one.

The argument made in favor of the stigma is that (a) companies put a lot of effort into training people, and (b) very few employees earn back their salary in the first year. Let me address both of these. On the first, about the huge amount of effort put into training new hires: that’s not true. It may have been the case in 1987, but not anymore. There might be an orientation lasting a week or two, or an assigned mentor who sometimes does a great job and is sometimes absentee, but the idea that companies still invest significant resources in new hires as a general policy is outdated. They don’t. Sink or swim is “the new normal”. The companies that are exceptions to this will sometimes lose a good person, but they don’t have the systemic retention problems that leave them wringing their hands about “job hoppers”. The second claim is true in some cases and not in others, but I would generally blame this (at least in technology) on the employer. If someone who is basically competent and diligent spends 6 months at a company and doesn’t contribute something– perhaps a time-saving script, a new system, or a design insight– that is worth that person’s total employment costs (salary, plus taxes, plus overhead), then there is something wrong with the corporate environment. Perhaps he’s been loaded up with fourth-quadrant work of minimal importance. People should be able to start making useful contributions right away, and if they can’t, then the company needs to improve the feedback cycle. That will make everyone happier.

The claimed corporate perspective of “job hoppers” is that they’re a money leak, because they cost more than they are worth in the early years. However, that’s not true. I’d call it an out-and-out lie. Plenty of companies can pay young programmers market salaries and turn a profit. In fact, companies doing low-end work (which may still be profitable) often fire older programmers to replace them with young ones. What this means is that the fiction of new hires being worth less than a market salary holds no water. Actually, employing decent programmers (young or old, novice or expert) at market compensation is an enormous position of profit. (I’d call it an “arbitrage”, but having done real arbitrage I prefer to avoid this colloquialism.)

The first reason why companies don’t like “job hoppers” is not that new hires are incapable of doing useful work, but that companies intentionally prevent new people from doing useful work. The “dues paying” period is evaluative. The people who fare poorly or make their dislike of the low-end work obvious are the failures who are either fired or, more likely, given the indication that they won’t graduate to better things, which compels them to leave– but on the company’s terms. The dues-paying period leaks at the top. In actuality, it always did; it just leaks in a different way now. In the past, the smartest people would become impatient and bored with the low-yield, evaluative nonsense, just as they do now, but they were less able to change companies. They’d lose motivation, and start to underperform, leaving the employer feeling comfortable with the loss. (“Not a team player; we didn’t want him anyway.”) In the “job hopping” era, they leave before the motivational crash, while there is still something to be missed.

The second problem that companies have with “job hoppers” is that they keep the market fluid and, additionally, transmit information. Job hoppers are the ones who tell their friends at the old company that a new startup is paying 30% better salaries and runs open allocation. They not only grab external promotions for themselves when they “hop”, but they learn and disseminate industry information that transfers power to engineers.

I’ve recently learned first-hand about the fear that companies have of talent leaks. For a few months last winter, I worked at a startup with crappy management, but excellent engineers, and I left when I was asked to commit perjury against some of my colleagues. (No, this wasn’t Google. First, Google is not a startup. Second, Google is a great company with outdated and ineffective HR but well-intended upper management. This, on the other hand, was a company with evil management.) There’s a lot of rumor surrounding what happened and, honestly, the story was so bizarre that even I am not sure what really went on. I won the good faith of the engineers by exposing unethical management practices, and became something of a folk hero. I introduced a number of their best engineers to recruiters and helped them get out of that awful place. Then I moved on, or so I thought. Toward the end of 2012, I discovered that their Head of Marketing was working to destroy my reputation inside that company (I don’t know if he succeeded, but I’ve seen the attempts) by generating a bunch of spammy Internet activity, and attempting to make it look like it was me doing it. He wanted to make damn sure I couldn’t continue the talent bleed, even though my only interaction was to introduce a few people (who already wanted to leave) to recruiters. These are the lengths to which a crappy company will go to plug a talent hole (when those efforts would be better spent fixing the company).

Finally, “job hopping” is a slight to a manager’s ego. Bosses like to dump on their own terms. After a few experiences with the “It’s not you, it’s me” talk, after which the reports often go to better jobs than the boss’s, managers develop a general distaste for these “job hoppers”.

These are the real reasons why there is so much dislike for people who leave jobs. The “job hopper” isn’t stealing from the company. If a company can employ a highly talented technical person for 6 months and not profit from the person’s work, the company is stealing from itself.

All this said, I wouldn’t be a fan of someone who joined companies with the intention of “hopping”, but I think very few people intend to run their careers that way. I have the resume of a “job hopper”, but when I take a job, I have no idea whether I’ll be there for 8 months or 8 years. I’d prefer the 8 years, to be honest. I’m sick of having to change employers every year, but I’m not one to suffer stagnation either.

My observation has been that most “job hoppers” are people who learn rapidly, and become competent at their jobs quickly. In fact, they enter jobs with a pre-existing and often uncommon skill set. Most of the job hoppers I know would be profitable to hire as consultants at twice their salary. Because they’re smart, they learn fast and quickly outgrow the roles that their companies expect them to fill. They’re ready to move beyond the years-long dues-paying period at two months, but often can’t. Because they leave once they hit this political wall, they become extremely competent.

The idea that the “job hopper” is an archetype of Millennial “entitlement” is one I find ridiculous. Actually, we should blame this epidemic of job hopping on efficient education. How so? Fifty years ago, education was much more uniform and, for the smartest people, a lot slower than it is today. Honors courses were rare, and for gifted students to be given extra challenges was uncommon. This was true within as well as between schools. Ivy League mathematics majors would encounter calculus around the third year of college, and the subjects that are now undergraduate staples (real analysis, abstract algebra) were solidly graduate-level. There were a few high-profile exceptions who could start college at age 14 but, for most people, being smart didn’t result in a faster track. You progressed at the same pace as everyone else. This broke down for two reasons. First, smart people get bored on the pokey track, and in a world that’s increasingly full of distractions, that boredom becomes crippling. Second, the frontiers of disciplines like mathematics are now so far out and specialized that society can’t afford to have the smartest people dicking around at quarter-speed until graduate school.

So we now have a world with honors and AP courses, and with the best students taking real college courses by the time they’re in high school. College is even more open. A freshman who is intellectually qualified to take graduate-level courses can do it. That’s not seen as a sign of “entitlement”. It’s encouraged.

This is the opposite of the corporate system, which has failed to keep up with modernity. A high-potential hire who outgrows starter projects and low-yield, dues-paying grunt work after 3 months does not get to skip the typical 1-2 years of it just because she’s not learning anything from it. People who make that move, either explicitly by expressing their boredom, or simply by losing motivation and grinding to a halt, often end up getting fired. You don’t get to skip grades in the corporate world.

Unfortunately, companies can’t easily promote people fast, because there are political problems. Rapid promotion, even of a person whose skill and quick learning merit it, becomes a morale problem. Companies additionally have a stronger need to emphasize “the team” (as in, “team player”, as much as I hate that phrase) than schools. In school, cheating is well-defined and uncommon, so individualism works. At work, where the ethical rules are often undefined, group cohesion is often prioritized over individual morale, as individualism is viewed as dangerous. This makes rapid promotion of high-potential people such a political liability that most companies don’t even want to get involved. Job hoppers are the people who rely on external promotion because they often grow faster than it is politically feasible for a typical corporation to advance them.

For all that is said against job hoppers, I know few who intentionally wish to “hop”. Most people will stay with a job for 5 years, if they continue to grow at a reasonable pace. The reason they move around so much is that they rarely do. So is it worth it to hire job hoppers, given the flight risk associated with top talent? I would say, without hesitation, “yes”. Their average tenures “of record” are short, but they tend to be the high-power contributors who get a lot done in a short period of time. Also, given that such a person might become a long-term employee if treated well, delivering both top talent and longevity, I’d say there’s a serious call option here.

“Job hopping” shouldn’t be stigmatized, because it’s the corporate system that’s broken. Most corporate denizens spend most of their time on low-yield make-work that isn’t important, but that largely exists because of managerial problems or is evaluative in purpose. The smart people who figure out quickly that they’re wasting their time want to move on to something better. Closed-allocation companies make that internal move extremely difficult and as politically rickety as a promotion, so these people often decide to leave instead. Without the “job hopping” stigma, they’d be able to quit and leave when this happens, but that reputation risk encourages them, instead, to quit and stay. For companies, this is much worse.

 

Learning C, reducing fear.

I have a confession to make. At one point in my career, I was a mediocre programmer. I might say that I still am, only in the context of being a harsh grader. I developed a scale for software engineering for which I can only, in intellectual honesty, assign myself 1.8 points out of a possible 3.0. One of the signs of my mediocrity is that I haven’t a clue about many low-level programming details that, thirty years ago, people dealt with on a regular basis. I know what L1 and L2 cache are, but I haven’t built the skill set yet to make use of this knowledge.
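
To make that concrete, here is a minimal C sketch (mine, not from the original post; the size and function names are invented for illustration) of the kind of thing that cache knowledge buys you: summing a large matrix row by row walks memory sequentially and stays friendly to the L1/L2 caches, while summing it column by column strides across memory and usually runs several times slower, even though the arithmetic is identical.

    /* Illustrative sketch: the same sum, in cache-friendly and cache-hostile order. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 4096  /* hypothetical size: a 4096 x 4096 matrix of doubles (~128 MB) */

    static double sum_row_major(const double *m)    /* sequential access */
    {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += m[i * N + j];
        return s;
    }

    static double sum_column_major(const double *m) /* strided access: N doubles per step */
    {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += m[i * N + j];
        return s;
    }

    int main(void)
    {
        double *m = malloc((size_t)N * N * sizeof *m);
        if (m == NULL)
            return 1;
        for (size_t i = 0; i < (size_t)N * N; i++)
            m[i] = 1.0;

        clock_t t0 = clock();
        double a = sum_row_major(m);
        clock_t t1 = clock();
        double b = sum_column_major(m);
        clock_t t2 = clock();

        printf("row-major:    %.0f in %.2fs\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("column-major: %.0f in %.2fs\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(m);
        return 0;
    }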

I love high-level languages like Scala, Clojure, and Haskell. The abstractions they provide make programming more productive and fun than it is in languages like Java and C++, and the languages have a beauty that I appreciate as a designer and mathematician. Yet, there is still quite a place for C in this world. Last July, I wrote an essay, “Six Languages to Master”, in which I advised young programmers to learn the following languages:

  • Python, because one can get started quickly and Python is a good all-purpose language.
  • C, because there are large sections of computer science that are inaccessible if you don’t understand low-level details like memory management.
  • ML, to learn taste in a simple language often described as a “functional C” that also teaches how to use type systems to make powerful guarantees about programs.
  • Clojure, because learning about language (which is important if one wants to design good interfaces) is best done with a Lisp and because, for better or worse, the Java libraries are a part of our world.
  • Scala, because it’s badass if used by people with a deep understanding of type systems, functional programming, and the few (very rare) occasions where object-oriented programming is appropriate. (It can be, however, horrid if wielded by “Java-in-Scala” programmers.)
  • English (or the natural language of one’s environment) because if you can’t teach other people how to use the assets you create, you’re not doing a very good job.

Of these, C was my weakest at the time. It still is. Now, I’m taking some time to learn it. Why? There are two reasons for this.

  • Transferability. Scala’s great, but I have no idea if it will be around in 10 years. If the Java-in-Scala crowd adopts the language without upgrading its skills and the language becomes associated with Maven, XMHell, IDE culture, and commodity programmers, in the way that Java has, the result will be piles of terrible Scala code that will brand the language as “write-only” and damage its reputation for reasons that are not Scala’s fault. These sociological variables I cannot predict. I do, however, know that C will be in use in 10 years. I don’t mind learning new languages– it’s fun and I can do it quickly– but the upshot of C is that, if I know it, I will be able to make immediate technical contributions in almost any programming environment. I’m already fluent in about ten languages; might as well add C. 
  • Confidence. High-level languages are great, but if you develop the attitude that low-level languages are “unsafe”, ugly, and generally terrifying, then you’re hobbling yourself for no reason. C has its warts, and there are many applications where it’s not appropriate. It requires attention to details (array bounds, memory management) that high-level languages handle automatically. The issue is that, in engineering, anything can break down, and you may be required to solve problems in the depths of detail. Your beautiful Clojure program might have a performance problem in production because of an issue with the JVM. You might need to dig deep and figure it out. That doesn’t mean you shouldn’t use Clojure. However, if you’re scared of C, you can’t study the JVM internals or performance considerations, because a lot of the core concepts (e.g. memory allocation) become a “black box”. Nor will you be able to understand your operating system.

For me, personally, the confidence issue is the important one. In the functional programming community, we often develop an attitude that the imperative way of doing things is ugly, unsafe, wrong, and best left to “experts only” (which is ironic, because most of us are well into the top 5% of programmers, and more equipped to handle complexity than most; it’s this adeptness that makes us aware of our own limitations and prefer functional safeguards when possible). Or, I should not say that this is a prevailing attitude, so much as an artifact of communication. Fifty-year-old, brilliant functional programmers talk about how great it is to be liberated from evils like malloc and free. They’re right, for applications where high-level programming is appropriate. The context being missed is that they have already learned about memory management quite thoroughly, and now it’s an annoyance to them to keep having to do it. That’s why they love languages like OCaml and Python. It’s not that low-level languages are dirty or unsafe or even “un-fun”, but that high-level languages are just much better suited to certain classes of problems.
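
As a concrete (and entirely illustrative, not from the original essay) example of the bookkeeping in question, the sketch below does by hand what a garbage-collected language does for you: check the allocation, remember who owns the buffer, and free it exactly once. The join helper is hypothetical; the point is the discipline around malloc and free.

    /* Illustrative sketch of manual memory management in C. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: concatenate two strings into a freshly allocated
     * buffer. Returns NULL on allocation failure; the caller owns the result
     * and must free() it exactly once. */
    static char *join(const char *a, const char *b)
    {
        size_t la = strlen(a), lb = strlen(b);
        char *out = malloc(la + lb + 1);   /* +1 for the terminating '\0' */
        if (out == NULL)
            return NULL;                   /* the check a GC'd language never asks for */
        memcpy(out, a, la);
        memcpy(out + la, b, lb + 1);       /* copies b's '\0' too */
        return out;
    }

    int main(void)
    {
        char *s = join("malloc and ", "free");
        if (s == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        printf("%s\n", s);
        free(s);   /* forget this and you leak; do it twice and you corrupt the heap */
        return 0;
    }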

Becoming the mentor

I’m going to make an aside that has nothing to do with C. What is the best predictor of whether someone will remain at a company for more than 3 years? Mentorship. Everyone wants “a Mentor” who will take care of his career by providing interesting work, freedom from politics, necessary introductions, and well-designed learning exercises instead of just-get-it-done grunt work. That’s what we see in the movies: the plucky 25-year-old is picked up by the “star” trader, journalist, or executive and, over 97 long minutes, his or her career is made. Often this relationship goes horribly wrong in film, as in Wall Street, wherein the mentor and protege end up in a nasty conflict. I won’t go so far as to call this entirely fictional, but it’s very rare. You can find mentors (plural) who will help you along as much as they can, and should always be looking for people interested in sharing knowledge and help, but you shouldn’t look for “The Mentor”. He doesn’t exist. People want to help those who are already self-mentoring. This is even more true in a world where few people stay at a job for more than 4 years.

I’ll turn 30 this year, and in Silicon Valley that would entitle me to a lawn and the right to tell people to get off of it, but I live in Manhattan so I’ll have to keep using the Internet as my virtual lawn. (Well, people just keep fucking being wrong. There are too many for one man to handle!) One of the most important lessons to learn is the importance of self-mentoring. Once you get out of school, where people are paid to teach you stuff, people won’t help those who aren’t helping themselves. To a large degree, this means becoming the “Mentor” figure that one seeks. I think that’s what adulthood is. It’s when you realize that the age in which there were superior people at your beck and call to sort out your messes and tell you what to do is over. Children can be nasty to each other but there are always adults to make things right– to discipline those who break others’ toys, and replace what is broken. The terrifying thing about adulthood is the realization that there are no adults. This is a deep-seated need that the physical world won’t fill. There are at least 10,000 years of recorded history showing people gaining immense power by making “adults-over-adults” up, and using the purported existence of such creatures to arrogate political power, because most people are frankly terrified of the fact that, at least in the observable physical world and in this life, there is no such creature.

What could this have to do with C? Well, now I dive back into confessional mode. My longest job tenure (30 months!) was at a startup that seems to have disappeared after I left. I was working in Clojure, doing some beautiful technical work. This was in Clojure’s infancy, but the great thing about Lisps is that it’s easy to adapt the language to your needs. I wrote a multi-threaded debugger using dynamic binding (dangerous in production, but fine for debugging) that involved getting into the guts of Clojure, a test harness, an RPC client-server infrastructure, and a custom NoSQL graph-based database. The startup itself wasn’t well-managed, but the technical work itself was a lot of fun. Still, I remember a lot of conversations to the effect of, “When we get a real <X>”, where X might be “database guy” or “security expert” or “support team”. The attitude I allowed myself to fall into, when we were four people strong, was that a lot of the hard work would have to be done by someone more senior, someone better. We inaccurately believed that the scaling challenges would mandate this, when in fact, we didn’t scale at all because the startup didn’t launch.

Business idiots love real X’s. This is why startups frequently develop the social-climbing mentality (in the name of “scaling”) that makes internal promotion rare. The problem is that this “realness” is total fiction. People don’t graduate from Expert School and become experts. They build a knowledge base over time, often by going far outside of their comfort zones and trying things at which they might fail, and the only things that change are that the challenges get harder, or the failure rate goes down. As with the Mentor that many people wait for in vain, one doesn’t wait to “find a Real X” but becomes one. That’s the difference between a corporate developer and a real hacker. The former plays Minesweeper (or whatever Windows users do these days) and waits for an Expert to come from on high to fix his IDE when it breaks. The latter shows an actual interest in how computers really work, which requires diving into the netherworld of the command line interface.

That’s why I’m learning C. I’d prefer to spend much of my programming existence in high-level languages and not micromanaging details– although, thus far, C has proven surprisingly fun– but I realize that these low-level concerns are extremely important and that if I want to understand things truly, I need a basic fluency in them. If you fear details, you don’t understand “the big picture”. The big picture is made up of details, after all. This is a way to keep the senescence of business FUD at bay– to not become That Executive who mandates hideous “best practices” Java, because Python and Scala are “too risky”.

Fear of databases? Of operating systems? Of “weird” languages like C and Assembly? Y’all fears get Zero Fucks from me.

A humorous note about creationism and snakes.

This isn’t one of my deeper posts. It’s just something I find amusing regarding a cultural symbol, especially in the context of Biblical creationism. One of the core stories of the Bible is the temptation of Eve by a serpent who brought her to disobey God. In other words, sin came into the world because of a snake. The Garden of Eden wasn’t perfect, because one animal was bad and woman was weak. This myth’s origins go back to Sumer, but that’s irrelevant to this observation. The question is: why a snake? Why was this animal, out of all the dangerous creatures out there, chosen as the symbol of sin?

Snakes are carnivores, but so are most of the charismatic megafauna, such as tigers, eagles, and wolves. Yet few of those seem to inspire the reflexive fear that snakes do. Many of these animals are more dangerous to us than snakes, yet we view lions and hawks with awe, not disgust or dread.

The most likely answer is not what creationists would prefer: it’s evolution that leads us to view snakes in such a way. Most land mammals– even large ones, to whom most species of snake are harmless– seem to have some degree of fear of snakes, and humans are no exception. Most religions have a strong view of this animal– some positive and reverent, but many negative. Why? Hundreds of millions of years ago, when our mammalian ancestors were mostly rodent-like in size, snakes were their primary predators. A fear of swift, legless reptiles was an evolutionary advantage. Seeing one meant you were about to die.

We don’t have this fear of lions or tigers because such creatures aren’t that old. Large cats have only been with us for a few million years, during which time we were also large and predatory, so there’s a mutual respect between us. Snakes and mammals, on the other hand, go way back.

Related to this is the legend of the dragon. No one can prove this, obviously, but the concept of a dragon seems to have emerged out of our “collective unconscious” as mammals. We have to go back 65 million years to find creatures that were anything like dragons, but a large number of cultures have independently invented such a mythical creature: a cocktail of small mammalian terrors (reptiles, raptors, fire, venom) coming from a time when we were small and probably defenseless prey creatures.

The key to understanding long-standing myths and symbols such as Biblical creation turns out, ironically enough, to be evolution. Serpents ended up in our creation myths because, after all this time, we haven’t gotten over what they did to us 100 million years ago.

IDE Culture vs. Unix philosophy

Even more of a hot topic than programming languages is the integrated development environment, or IDE. Personally, I’m not a huge fan of IDEs. As tools, standing alone, I have no problem with them. I’m a software libertarian: do whatever you want, as long as you don’t interfere with my work. However, here are some of the negatives that I’ve observed when IDEs become commonplace or required in a development environment:

  • the “four-wheel drive problem”. This refers to the fact that an unskilled off-road driver, with four-wheel drive, will still get stuck. The more dexterous vehicle will simply have him fail in a more inaccessible place. IDEs pay off when you have to maintain an otherwise unmanageable ball of other people’s terrible code. They make unusable code merely miserable. I don’t think there’s any controversy about this. The problem is that, by providing this power, they enable an activity of dubious value: continual development despite abysmal code quality, when improving or killing the bad code should be a code-red priority. IDEs can delay code-quality problems and defer macroscopic business effects, which is good for manageosaurs who like tight deadlines, but only makes the problem worse at the end stage.
  • IDE-dependence. Coding practices that require developers to depend on a specific environment are unforgivable. This is true whether the environment is emacs, vi, or Eclipse. The problem with IDEs is that they’re more likely to push people toward doing things in a way that makes use of a different environment impossible. One pernicious example of this is in Java culture’s mutilation of the command-line way of doing things with singleton directories called “src” and “com”, but there are many that are deeper than that. Worse yet, IDEs enable the employment of programmers who don’t know what build systems or even version control are. Those are things “some smart guy” worries about so the commodity programmer can crank out classes at his boss’s request.
  • spaghettification. I am a major supporter of the read-only IDE, preferably served over the web. I think that code navigation is necessary for anyone who needs to read code, whether it’s crappy corporate code or the best-in-class stuff we actually enjoy reading. When you see a name, you should be able to click on it and see where that name is defined. However, I’m pretty sure that, on balance, automated refactorings are a bad thing. Over time, the abstractions which can easily be “injected” into code using an IDE turn it into “everything is everywhere” spaghetti code. Without an IDE, the only way to do such work is to write a script to do it. There are two effects this has on the development process. One is that it takes time to make the change: maybe 30 minutes. That’s fine, because the conversation that should happen before a change that will affect everyone’s work should take longer than that. The second is that only adept programmers (who understand concepts like scripts and the command line) will be able to do it. That’s a good thing.
  • time spent keeping up the environment. Once a company decides on “One Environment” for development, usually an IDE with various in-house customizations, that IDE begins to accumulate plugins of varying quality. That environment usually has to be kept up, and that generates a lot of crappy work that nobody wants to do.

This is just a start on what’s wrong with IDE culture, but the core point is that it creates some bad code. So, I think I should make it clear that I don’t dislike IDEs. They’re tools that are sometimes useful. If you use an IDE but write good code, I have no problem with you. I can’t stand IDE culture, though, because I hate hate hate hate hate hate hate hate the bad code that it generates.

In my experience, software environments that rely heavily on IDEs tend to be those that produce terrible spaghetti code, “everything is everywhere” object-oriented messes, and other monstrosities that simply could not be written by a sole idiot. He had help. Automated refactorings that injected pointless abstractions? Despondency infarction frameworks? Despise patterns? Those are likely culprits.

In other news, I’m taking some time to learn C at a deeper level, because as I get more into machine learning, I’m realizing the importance of being able to reason about performance, which requires a full-stack knowledge of computing. Basic fluency in C, at a minimum, is requisite. I’m working through Zed Shaw’s Learn C the Hard Way, and he’s got some brilliant insights not only about C (though I can’t yet evaluate whether his insights on C are brilliant) but about programming itself. In his preamble chapter, he makes a valid point in his warning not to use an IDE for the learning process:

An IDE, or “Integrated Development Environment” will turn you stupid. They are the worst tools if you want to be a good programmer because they hide what’s going on from you, and your job is to know what’s going on. They are useful if you’re trying to get something done and the platform is designed around a particular IDE, but for learning to code C (and many other languages) they are pointless. […]
Sure, you can code pretty quickly, but you can only code in that one language on that one platform. This is why companies love selling them to you. They know you’re lazy, and since it only works on their platform they’ve got you locked in because you are lazy. The way you break the cycle is you suck it up and finally learn to code without an IDE. A plain editor, or a programmer’s editor like Vim or Emacs, makes you work with the code. It’s a little harder, but the end result is you can work with any code, on any computer, in any language, and you know what’s going on. (Emphasis mine.)

I disagree with him that IDEs will “turn you stupid”. Reliance on one prevents a programmer from ever turning smart, but I don’t see how such a tool would cause a degradation of a software engineer’s ability. Corporate coding (lots of maintenance work, low productivity, half the day lost to meetings, difficulty getting permission to do anything interesting, bad source code) does erode a person’s skills over time, but that can’t be blamed on the IDE itself. However, I think he makes a strong point. Most of the ardent IDE users are the one-language, one-environment commodity programmers who never improve, because they never learn what’s actually going on. Such people are terrible for software, and they should all either improve, or be fired.

The problem with IDEs is that each corporate development culture customizes the environment, to the point that the cushy, easy coding environment can’t be replicated at home. For someone like me, who doesn’t even like that type of environment, that’s no problem because I don’t need that shit in order to program. But someone steeped in cargo cult programming because he started in the wrong place is going to falsely assume that programming requires an IDE, having seen little else, and such novice programmers generally lack the skills necessary to set one up to look like the familiar corporate environment. Instead, he needs to start where every great programmer must learn some basic skills: at the command-line. Otherwise, you get a “programmer” who can’t program outside of a specific corporate context– in other words, a “5:01 developer” not by choice, but by a false understanding of what programming really is.

The worst thing about these superficially enriched corporate environments is their lack of documentation. With Unix and the command-line tools, there are man pages and how-to guides all over the Internet. This creates a culture of solving one’s own problems. Given enough time, you can answer your own questions. That’s where most of the growth happens: you don’t know how something works, you Google an error message, and you get a result. Most of the information coming back is indecipherable to a novice programmer, but with enough searching, the problem is solved, and a few things are learned, including answers to some questions that the novice didn’t yet have the insight (“unknown unknowns”) to ask. That knowledge isn’t built in a day, but it’s deep. That process doesn’t exist in an over-complex corporate environment, where the only way to move forward is to go and bug someone, and the time cost of any real learning process is at a level that most managers would consider unacceptable.

On this, I’ll crib from Zed Shaw yet again, in Chapter 3 of Learn C the Hard Way:

In the Extra Credit section of each exercise I may have you go find information on your own and figure things out. This is an important part of being a self-sufficient programmer. If you constantly run to ask someone a question before trying to figure it out first then you never learn to solve problems independently. This leads to you never building confidence in your skills and always needing someone else around to do your work. The way you break this habit is to force yourself to try to answer your own questions first, and to confirm that your answer is right. You do this by trying to break things, experimenting with your possible answer, and doing your own research. (Emphasis mine.)

What Zed is describing here is the learning process that never occurs in the corporate environment, and the lack of it is one of the main reasons why corporate software engineers never improve. In the corporate world, you never find out why the build system is set up in the way that it is. You just go bug the person responsible for it. “My shit depends on your shit, so fix your shit so I can run my shit and my boss doesn’t give me shit over my shit not working for shit.” Corporate development often has to be this way, because learning a typical company’s incoherent in-house systems doesn’t provide a general education. When you’re studying the guts of Linux, you’re learning how a best-in-class product was built. There’s real learning in mucking about in small details. For a typically mediocre corporate environment that was built by engineers trying to appease their managers, one day at a time, the quality of the pieces is often so shoddy that not much is learned in truly comprehending them. It’s just a waste of time to deeply learn such systems. Instead, it’s best to get in, answer your question, and get out. Bugging someone is the most efficient way to solve the problem.

It should be clear that what I’m railing against is the commodity developer phenomenon. I wrote about “Java Shop Politics” last April, which covers a similar topic. I’m proud of that essay, but I was wrong to single out Java as opposed to, say, C#, VB, or even C++. Actually, I think any company that calls itself an “<X> Shop” for any language X is missing the point. The real evil isn’t Java the language, as limited as it may be, but Big Software and the culture thereof. The true enemy is the commodity developer culture, empowered by the modern bastardization of “object-oriented programming” that looks nothing like Alan Kay’s original vision.

In well-run software companies, programs are built to solve problems, and once the problem is solved, the program is Done. It might be adapted in the future, and may require maintenance, but that’s not an assumption. There aren’t discussions about how much “headcount” to dedicate to ongoing maintenance after completion, because that would make no sense: if people need to modify or fix the program, they’ll do it. Programs solve well-defined problems, and then their authors move on to other things– no God Programs that accumulate requirements, but simple programs designed to do one thing and do it well. The programmer-to-program relationship must be one-to-many. Programmers write programs that do well-defined, comprehensible things well. They solve problems. Then they move on. This is a great way to build software “as needed”, and the only problem with this style of development is that small programs are hard to micromanage, so managerial dinosaurs who want to track efforts and “headcount” don’t like it much, because they can never figure out who to scream at when things don’t go their way. It’s hard to commoditize programmers when their individual contributions can only be tracked by their direct clients, and when people can silently be doing work of high importance (such as making small improvements to the efficiency of core algorithms that reduce server costs). The alternative is to invert the programmer-to-program relationship and make it many-to-one: multiple programmers (now a commodity) working on Giant Programs that Do Everything. This is a terrible way to build software, but it’s also the one historically favored by IDE culture, because the sheer work of setting up a corporate development environment is enough that it can’t be done too often, and this leads managers to desire Giant Projects and a uniformity (such as a one-language policy– see again why “<X> Shops” suck) that managers like but that often makes no sense.

The right way of doing things– one programmer works on many small, self-contained programs– is the core of the so-called “Unix philosophy”. Big Programs, by contrast, invariably develop undocumented communication protocols and consistency requirements whose violation leads not only to bugs, but to pernicious misunderstandings that muddle the original conceptual integrity of the system, resulting in spaghetti code and “mudballs”. The antidote is to keep individual programs small, and to solve large problems with systems of such programs– systems that are given the respect (such as attention to fault tolerance) that, as systems, they deserve.
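
As an illustration (a toy example of my own, not any canonical tool), here is roughly what a program in that style looks like: it does one small thing, speaks plain text on stdin and stdout, and therefore composes with other small programs through pipes.

    /* count_blank.c -- a toy "do one thing well" filter.  It reads text on
     * stdin and prints the number of blank lines, nothing more.  Because it
     * speaks plain text over stdin/stdout, it composes with other programs:
     *
     *     cc -Wall -o count_blank count_blank.c
     *     ./count_blank < notes.txt
     *     grep -v '^#' config.txt | ./count_blank
     */
    #include <stdio.h>

    int main(void)
    {
        int c;
        int at_line_start = 1;      /* are we at the beginning of a line? */
        long blank_lines = 0;

        while ((c = getchar()) != EOF) {
            if (c == '\n') {
                if (at_line_start)
                    blank_lines++;  /* newline right at line start = blank line */
                at_line_start = 1;
            } else {
                at_line_start = 0;
            }
        }
        printf("%ld\n", blank_lines);
        return 0;
    }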

Are there successful exceptions to the Unix philosophy? Yes, there are, but they’re rare. One notable example is the database, because these systems have very strong requirements (transactions, performance, concurrency, durability, fault tolerance) that cannot be met with small programs and organic growth alone. Some degree of top-down orchestration is required if you’re going to have a viable database, because databases have a lot of requirements that aren’t typical business cruft but are actually critically important. Postgres, probably the best SQL database out there, is not a simple beast. Indeed, databases violate one of the core tenets of the Unix philosophy– store data in plain text– and they do so for good reasons (storage efficiency and performance). Databases also mandate that people be able to use them without keeping up with the evolution of such a system’s opaque and highly optimized internals, which makes the separation of interface from implementation (something that object-oriented programming got right) a necessary virtue. Database connections, like file handles, should be objects (where “object” means “something that can be used with incomplete knowledge of its internals”). So databases, in some ways, violate the Unix philosophy, and yet are still used by staunch adherents. (We admit that we’re wrong sometimes.) I will also remark that it has taken decades for some extremely intelligent (and very well-compensated) people to get databases right. Big Projects win when no small project or loose federation thereof will do the job.
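
To show what “usable with incomplete knowledge of its internals” means in practice, here is a hypothetical connection-handle interface (the names db_conn, db_open, and so on are my own, not from any real library). Callers see only this header; the struct’s definition, and any amount of internal cleverness, lives behind it in a separate db.c and can change without breaking them– the same property FILE* gives you for files.

    /* db.h -- a hypothetical sketch of an opaque handle, in the spirit of
     * FILE* or a real database client library.  Nothing here exposes how a
     * connection is represented; callers can use one without knowing. */
    #ifndef DB_H
    #define DB_H

    typedef struct db_conn db_conn;           /* opaque: defined only in db.c */

    db_conn *db_open(const char *conninfo);   /* returns NULL on failure */
    int      db_exec(db_conn *conn, const char *sql);   /* 0 on success  */
    void     db_close(db_conn *conn);

    #endif /* DB_H */

The implementation behind such a header can switch storage formats, add caching, or be rewritten entirely, and code written against the interface keeps working– which is exactly the property a database has to offer its users.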

My personal belief is that almost every software manager thinks he’s overseeing one of the exceptions: a Big System that (like Postgres) will grow to such importance that people will just swallow the complexity and use the thing, because it’s something that will one day be more important than Postgres or the Linux kernel. In almost all cases, they are wrong. Corporate software is an elephant graveyard of such over-ambitious systems. Exceptions to the Unix philosophy are extremely rare. Your ambitious corporate system is almost certainly not one of them. Furthermore, if most of your developers– or even a solid quarter of them– are commodity developers who can’t code outside of an IDE, you haven’t a chance.

We should pay people not to subordinate

In the very long term, technological society will need to implement a basic income, because full employment will eventually become untenable. Basic income (BI) is an income paid to all people, with no conditions; Alaska already has a small one, derived from its oil wealth. Full employment will eventually be impossible because of the need for ongoing, intensive, and traditionally unpaid training.

Today, I’m not going to talk about basic income, because we’re probably a couple of decades away from society absolutely needing one, and even farther from one being implemented, given the monumental political hurdles such an effort would encounter. Instead, I’m going to talk about right now– January 7, 2013– and something society ought to do in order to maintain our capacity to innovate and prevent a pointless, extreme destruction of human capital.

Peter Thiel has created a program (“20 Under 20”) that pays high-potential young people to skip college, but the entry-level grunt work most people spend the first few years of their careers on is, in my opinion, much more damaging than college, especially given its indefinite duration. (I don’t think undergraduate college is that damaging at all, but that’s another debate.) There is some busywork in college, and there are a few incompetent professors (though they’re very rare), but more creativity is lost during the typical workplace’s years-long dues-paying period, which habituates people to subordination, than in any educational program. I don’t mean to say that there aren’t problems with schools, but the institutions for which the schools prepare people are worse. At least grading in school is fair: a professor as corrupt and partial in grading as the typical corporate manager would be fired– and professors don’t get fired often.

In terms of expected value (that is, the average performance one would observe over an indefinite number of attempts), the market rewards creativity, which is insubordinate. However, when it comes to personal income, expectancy is completely meaningless, at least for us poors who need a month-to-month income to pay rent. Most people would rather have a guaranteed $100,000 per year than a 1-in-1000 shot (every year) at $500 million with a 99.9% chance of no income, even though the latter deal has more expectancy in it ($500,000 per year, five times the salary). Risk-adjusted, people of average means are rewarded for taking stable jobs, which often require subordination.
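
A back-of-the-envelope sketch of that comparison, in code. The concave sqrt() utility function is my illustrative assumption, not anything from the text; any risk-averse utility gives the same ordering.

    /* expectancy.c -- compare the two deals above: a guaranteed $100k/year
     * versus a 1-in-1000 annual shot at $500M.  Raw expectancy favors the
     * gamble; a concave (risk-averse) utility -- sqrt() here, purely for
     * illustration -- favors the salary.  Compile with: cc expectancy.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double salary  = 100000.0;   /* guaranteed income          */
        double jackpot = 500e6;      /* long-shot payout           */
        double p       = 0.001;      /* probability of the jackpot */

        printf("Expected value:   salary $%.0f vs. gamble $%.0f\n",
               salary, p * jackpot);               /* 100,000 vs. 500,000 */
        printf("Expected utility: salary %.1f vs. gamble %.1f\n",
               sqrt(salary), p * sqrt(jackpot));   /* ~316.2 vs. ~22.4    */
        return 0;
    }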

Technically speaking, people are paid for work, not subordination, but the process that exists to evaluate the work is so corrupt and rife with abuse that it devolves into a game that requires subordination. As a thought experiment, consider what would happen to a typical office worker who, without subversion or deception to hide her priorities, did the following:

  • worked on projects she considers most important, regardless of her manager’s priorities,
  • prioritized her long-term career growth over short-term assignments, and
  • expressed high-yield, creative ideas regardless of their political ramifications.

These activities are obviously good for her, and good for society, because she becomes better at her job. They’re even good for her company. However, this course of action is likely to get her fired, and there’s certainly enough risk of that to invalidate the major benefit of being an employee, which is stability.

So, in truth, society pays people to be subordinate, and that’s a real problem. In theory, capitalist society pays for valuable work, but the people trusted to evaluate the work inevitably become a source of corruption as they demand personal loyalty (which is rarely repaid in kind) rather than productivity itself. The long-term effect of subordination, meanwhile, is creative atrophy. To quote Paul Graham, in “You Weren’t Meant to Have a Boss”:

If you’re not allowed to implement new ideas, you stop having them. And vice versa: when you can do whatever you want, you have more ideas about what to do. So working for yourself makes your brain more powerful in the same way a low-restriction exhaust system makes an engine more powerful.

I would take this even farther. I believe that, after a certain age and constellation of conditions, creativity can be lost effectively forever. People who keep their creativity up don’t lose it– and lifelong creative people seem to peak in their 50s or later, which should kill the notion that it’s a property of the young only– but people who fall into the typical corporate slog develop a mindset and conditioning that render them irreversibly dismal. It only seems to take a few years for this to happen. Protecting one’s creativity practically demands insubordination, making it almost impossible to win the corporate ladder and remain creative. This should explain quite clearly the lack of decent leadership our society exhibits.

We should offset this by finding a way to reward people for not subordinating. To make it clear, I’m not saying we should pay people not to work. In fact, that’s a terrible idea. Instead, we should find a repeatable, robust, and eventually universal way to reward people who work in non-subordinate, creative ways, thereby rewarding the skills that our society actually needs, instead of the mindless subordination that complacent corporations have come to expect. By doing this, we can forestall the silent but catastrophic loss that is the wholesale destruction of human creative capital.

No, idiot. Discomfort Is Bad.

Most corporate organizations have failed to adapt to the convexity of creative and technological work: the difference between excellence and mediocrity now matters far more than the difference between mediocrity and zero. An excellent worker might produce 10 times as much value as a mediocre one, instead of 1.2 times as much, as in the previous industrial era. Companies, trapped in concave-era thinking, still obsess over “underperformers” (through annual witch hunts designed to root out the “slackers”) while ignoring the far deadlier danger: having no excellence at all. For example, try to build a team of 50th- to 75th-percentile software engineers to solve a hard problem, and the team will fail. You don’t have any slackers or useless people– all would be perfectly productive, given decent leadership– but you also don’t have anyone with the capability to lead or to make architectural decisions. You’re screwed.

The systematic search-and-destroy attitude that many companies take toward “underperformers” exists for a number of reasons, but one is to create pervasive discomfort. Performance evaluation is a subjective, noisy, information-impoverished process, which means that good employees can get screwed just by being unlucky. The idea behind these systems is to make sure that no one feels safe. One in 10 people gets put through the kangaroo court of a “performance improvement plan” (which exists to justify termination without severance) and is fired if he doesn’t get the hint. Four in 10 get below-average reviews that damage the relationship with the boss and make internal mobility next to impossible. Four more are tagged with the label of mediocrity, and the last one gets a good review and a “performance-based” bonus… which is probably less than he feels he deserved, because he had to play mad politics to get it. Everyone’s unhappy, and no one is comfortable. That is, in fact, the point of such systems: to keep people in a state of discomfort.

The root idea here is that Comfort Is Bad: if people feel comfortable at work, they’ll become complacent, but if they’re intimidated just enough, they’ll become hard workers. In the short term, there’s some evidence that this sort of motivation works. People will stay at work for an additional two hours in order to avoid missing a deadline and having an annoying conversation the next day. In the long term, it fails. For example, open-plan offices, designed to use social discomfort to enhance productivity, actually reduce it by 66 percent. Hammer on someone’s adrenal system and you get a response for a short while. After a certain point, you get a state of exhaustion and “flatness of affect”. The person doesn’t care anymore.

What’s the reason for this? I think the phenomenon of learned helplessness is at play. One reliable short-term way to get an animal such as a human to do something is to inflict discomfort, and to have the discomfort go away when the desired work is performed. This is known as negative reinforcement: the removal of unpleasant circumstances in exchange for desired behavior. An example known to all programmers is the dreaded impromptu status check: the pointless unscheduled meeting in which a manager drops in, unannounced, and asks for an update on work progress, usually in the absence of any immediate need. Often this isn’t malicious or intentionally annoying, but comes from a misunderstanding of how engineers work. Managers are used to email clients that can be checked 79 times per day with no degradation of performance, and tend to forget that humans are not this way. That said, the behavior is an extreme productivity-killer, costing about 90 minutes per status check; I’ve seen managers do this 2 to 4 times per day. The more shortfall in the schedule, the more grilling there is. The idea is to make the engineer work hard so there is progress to report and the manager goes away quickly: get something done in the next 24 hours, or else. This might have that effect– for a few weeks. At some point, though, people realize that the discomfort won’t go away in the long term. In fact, it gets worse, because performing well leads to higher expectations, while a decline in productivity (or even a perceived decline) brings on more micromanagement. Then learned helplessness sets in, and the attitude of not giving a shit takes hold. This is why, in the long run, micromanagers can’t motivate shit to stink.

Software engineers are increasingly inured to environments of discomfort and distraction. One of the worst trends in the software industry is the tendency toward cramped, open-plan offices where an engineer might have less than 50 square feet of personal space. This is sometimes attributed to cost savings, but I don’t buy it. Even in Midtown Manhattan, office space only costs about $100 per square foot per year; doubling or tripling that 50 square feet would add five or ten thousand dollars per engineer per year. That’s not cheap, but it’s nowhere near expensive enough (for software engineers) to justify the productivity-killing effect of the open-plan office.

Discomfort is a special problem for software engineers, because our job is to solve problems. That’s what we do: we solve other people’s problems, and we solve our own. Our job, in large part, is to become better at our job. If a task is menial, we don’t suffer through it, complain about it, or try to delegate it to someone else; we automate it away. We’re constantly trying to improve our productivity. Cramped workspaces, managerial status checks, and corrupt project-allocation machinery (as opposed to open allocation) all exist to lower the worker’s social status and create discomfort or, as douchebags prefer to call it, “hunger”. This effect is intended, and because it’s in place on purpose, it’s also defended by powerful people. When engineers learn this, they realize that they’re confronted with a situation they cannot improve, and it becomes a morale issue.

Transient discomfort motivates people to do things. If it’s cold, one puts on a coat. When discomfort recurs without fail, it stops having this effect. At some point, a person’s motivation collapses. What use is it to act to reduce discomfort if the people in charge of the environment will simply recalibrate it to make it uncomfortable again? None. So what motivates people in the long term? See: What Programmers Want. People need a genuine sense of accomplishment that comes from doing something well. That’s the durable, long-lasting motivation that keeps people working. Typically, the creative and technological accomplishments that revitalize a person and make long-term stamina possible will only occur in an environment of moderate comfort, in which ideas flow freely. I’m not saying that the office should become an opium den, and there are forms of comfort that are best left at home, but people need to feel secure and at ease in their environment– not like they’re in a war zone.

So why does the Discomfort Is Good regime live on? Much of it is just an antiquated managerial ideology that’s poorly suited to convex work. However, I think another contributing factor is “manager time”. One might think, based on my writing, that I dislike managers. As individuals, many of them are fine. It’s what they have to do that I tend to dislike, and it’s not an enviable job. Managing has higher status but, in reality, is no more fun than being managed. Managers are swamped: with 15 reports, schedules full of meetings, and their own bosses to “manage up”, they are typically overburdened, and a manager can’t afford to dedicate more than about 1/20 of his working time to any one report. The result of this extreme concurrency (out of accord with how humans think) is that each worker becomes a storyline that gets only 5% of the manager’s attention. So when a new hire, at six months, asks for more interesting work or a quieter location, the manager’s perspective is that she “just got here”: six months times 1/20 is 1.3 weeks. That’s manager time. This explains the insufferably slow progress most people experience in their corporate careers. Typical management expects 3 to 5 years of dues-paying (in manager time, about the length of a college semester) before a person is “proven” enough to start asking for things. Most people, of course, aren’t willing to wait 5 years to get a decent working space or autonomy over the projects they take on.

A typical company sees its job as creating a Prevailing Discomfort, so that a manager can play “Good Cop” and grant favors: projects with more career upside, work-from-home arrangements, and more productive working spaces. Immediate managers never fire people; the company does, “after careful review” of performance (in a “peer review” system wherein, for junior people, only managerial assessments are given credence). “Company policy” takes the Bad Cop role. Ten percent of employees must be fired each year because “it’s company policy”. No employee can transfer in the first 18 months because of “company policy”. (“No, your manager didn’t directly fuck you over. We have a policy of fucking over the least fortunate 10%, and your manager simply chose not to protect you.”) Removal of the discomfort is to be doled out (by managers) as a reward for high-quality work. However, fighting to get these favors for reports is exhausting, and managers understandably don’t want to do it for people “right away”. The result is that these favors are given out very slowly, and often taken back during “belt-tightening” episodes, which means the promised liberation from these annoying discomforts never really comes.

One of the more amusing things about the Discomfort Is Good regime is that it actually encourages the sorts of behaviors it’s supposed to curtail. Mean-spirited performance review systems don’t improve low performers; they create them by turning the unlucky into an immobile untouchable class with an axe to grind, and open-plan offices allow the morale toxicity of disengaged employees to spread at a rapid rate. Actually, my experience has been that workplace slacking is more common in open-plan offices. Why? After six months in open-plan office environments, people learn the tricks that allow them to appear productive while focusing on things other than work. Because such environments are exhausting, these are necessary survival adaptations, especially for people who want to be productive before or after work. In a decent office environment, a person who needed a 20-minute “power nap” could take one. In the open-plan regime, the alternative is a two-hour “zone out” that’s not half as effective.

The Discomfort Is Good regime is as entrenched in many technology startups as in large corporations, because it emerges out of a prevailing, but wrong, attitude among the managerial caste (from which most VC-istan startup founders, on account of the need for certain connections, have come). One of the first things that douchebags learn in Douchebag School is to make their subordinates “hungry”. It’s disgustingly humorous to watch them work to inflict discomfort on others– it’s transparent what they are trying to do, if one knows the signs– and be repaid by the delivery of substandard work product. Corporate America, at least in its current incarnation, is clearly in decline. While it sometimes raises a chuckle to see decay, I thought I would relish this more as I watched it happen. I expected pyrotechnics and theatrical collapses, and that’s clearly not the way this system is going to go. This one won’t go out with an explosive bang, but with the high-pitched hum of irritation and discomfort.