An ethical crisis in technology

Something I’ve noticed over the past few years is how outright unethical people are becoming in the technology business. I can imagine the reply. “Bad ethics in business; you mean that’s news?” Sure, people have done bad things for money for as long as there has been money. I get that. The difference that I sense is that there doesn’t seem to be much shame in being unethical. People are becoming proud of it, and a lot of our industry’s perceived leaders are openly scummy, and that’s dangerous.

An example is Mark Pincus, who prides himself on having done sleazy things to establish his career, and who moved to deprive his employees of equity by threatening to fire them if they didn’t give it back. When he was called on this, rather than admit to minority shareholder oppression, he went on a tirade about not wanting to have a “Google chef”, referring to the first cook at Google who earned over $20 million. In his mind, blue-collar workers don’t deserve to get rich when they take risks.

This is bad for startups. Equity is the only thing pre-funded startups have with which to attract talent. These shenanigans will create an environment where no one is willing to work for equity. That is often the externalized cost of unethical behavior: it doesn’t hurt only the “victim”, but harms all the honest players out there, who end up less trusted.

I will state that what appears in the news is only the tip of the iceberg. Here’s some shit I’ve either seen, or been credibly informed of, in the past 24 months, most of which was never in the news: no-poach agreements, attempted blacklisting of whistleblowers, a rescinded job offer based on a rumor that suggested PTSD, abuse of process within large companies, extortion of ex-employees, gross breaches of contract, frivolous lawsuits, threats of frivolous lawsuits, price fixing among venture capitalists, bait-and-switch hiring tactics, retaliatory termination, and fraudulent, no-product startups designed to embezzle angel investors. That took me about 60 seconds; two minutes more and the list would be three times as long. None of this was in finance: all tech, with most of these pertaining to reputable companies. I’m not an insider. I’m no one special. If I’m seeing these behaviors, then a lot of people are, and if a lot of people are seeing them, it means that a lot of unethical things are happening in a sector of the economy (technology) known for good behavior and a progressive mindset.

This is just the first act. This is what it looks like when the economy is doing well, as in technology, it is. The wronged move on. Their jobs may end and their equity may be stolen, but they move on to better opportunities. Those who’ve entered criminal patterns in order to keep up with expectations can still break out of them, if they do so now, without spiraling straight down. We’re not seeing the lawsuits, the disclosures of misconduct, the bitter fights and the epic crimes yet. At some point, presumably in a worse economic environment than what we have now, that will come. When it does, the results will be terrifying, because the reputation of who we are, as an industry, and what we do is at stake.

People, in the U.S., have developed a reflexive dislike for “finance” and “Wall Street”. The financial industry has certainly earned much of its negative reputation, but finance isn’t innately bad (unless one believes capitalism to be evil, which I don’t). Most of finance is just boring. I would also caution us against believing that “technology”– this brave new world of venture capital and startups and 25-year-old billionaires– is incapable of developing such a negative reputation. A few bad actors will give us all a bad name.

In finance, most of the unethical behaviors that occur have been tried so many times that laws exist to discourage them. There are problems of lax enforcement, and too often there is frank regulatory corruption, but at least there is clarity on a few basic things. One example: you don’t front-run your customers, and you will go to jail if you do. In addition to legal pressure from without, finance has imposed regulations on itself, in part, to regain its reputation. Self-regulatory organizations like the New York Stock Exchange have barred people for life over the worst offenses.

The ethical failures in technology have a different, and more intimate, character than those in finance. Financial crimes usually cause the loss of money. That’s bad. Sometimes it’s catastrophic. What makes these crashes especially newsworthy is the sheer number of people they affect. Nearly everyone was affected by the late-2000s property bubble, for example. The recent spate of ethical lapses in technology is of a more focused nature. These lapses don’t inflict losses on thousands of people, but they damage careers. The most common example that I’ve seen is bait-and-switch hiring, where a person is brought on board with the promise of one type of project and given another. There is no legal recourse in this case, and there are lots of other ethical lapses with similar effects. These activities waste the time of highly talented people in fruitless relationships, and often on pointless work.

In technology, we haven’t figured out how to regulate ourselves, and we’re risking the reputation of our industry. Too much depends on us to allow this. With the aging population, the depletion of fossil fuels, and the exigent need to move toward an environmentally sustainable economy, we’re just too important to the world for us to take a dive.

One might argue, in response to that claim, that most of what comes out of VC-istan isn’t “real technology”, and I’d agree. Venture capitalists may love “semantic sheep-throwing coupon social network” build-to-flip startups, but those don’t have much social or scientific value. To that point, most of the unethical activity I’ve seen comes from the “fake technology” companies, but not all of it. Either way, few people outside the industry make this distinction, and I wouldn’t count on them starting.

Who has the authority to address this problem? In my opinion, it’s an issue of leadership, and the leaders in technology are those who fund it: the venture capitalists. I’m not going to assert that they’re unethical, because I don’t know enough about them or their industry to make such a claim. I do, however, think they encourage a lot of unethical behavior.

What causes the ethical compromise that occurs so commonly in the financial industry? My opinion is that it’s proximity to money, especially unearned money. When working for clients with $250 million in net worth, many of whom inherited it, people begin to feel that they deserve to get rich as well. It’s human nature. The cooks feel entitled to some of the food. Some people in that industry just take that mentality too far and begin committing crimes. I don’t think the problem with finance is that it attracts scummy people. I think it tempts them to do scummy things.

The sociology of venture-funded startups is similar. The entire funding process– with its obscene duration, measured in months; with terms like multiple liquidation preferences and participating preferred; and with the entrepreneur expected to pay the VCs’ legal fees (I am not making that up)– is based on the premise that MBA-toting venture capitalists are simply Better Than You. Venture capitalists, in no uncertain terms, outrank entrepreneurs, even though the jobs are entirely different and I would argue that the entrepreneur’s job is a hundred times harder. Among entrepreneurs, there are Those Who Have Completed An Exit, and there are the rest. It’s not good to be among “the rest”; people can dismiss you as having “no track record”, which is a polite way to call someone a born loser. Within funded startups, there are Founders and “founder-track” employees– proteges invited into investor meetings so they might become “Founder material” in the future– and then there’s everyone else, the fools who keep the damn thing going. It seems like a meritocracy, but it’s the same social-climbing bullshit found in any other industry. The meritocratic part lies in what one does once one has resources, but to get the resources one usually needs a full-time devotion to social climbing. There are exceptions, and incubators are making this situation better, but the exceptions are not that many.

Venture capitalists may not all be unethical, but they’re not ethical leaders either. They establish this lack of leadership through onerous terms, malicious collusion, and the general attitude that the entrepreneur is a desperate huckster, not a real partner. The Better Than You attitude they cop is intended to make people feel hungry, to make them want to get to the point where they actually “deserve” the company of venture capitalists, but it actually makes them act desperate. Does this lead to unethical behavior? Sometimes yes, sometimes no. When not, it still produces ethical ruin in the form of inappropriate, hasty promotions, which lead to the same kinds of behavior in the long run.

In other words, this ethics problem is not just limited to “a few bad apples”. Culpability, in my mind, goes straight to the top.

HR’s broken: if Performance Improvement Plans don’t improve performance, what does?

I wrote a bit in a previous essay, on how and why companies fire people, about why “Performance Improvement Plans” (PIPs) don’t actually have the titular effect of improving performance. Their well-understood purpose is not that, but to create “documentation” before firing someone. Why do they exist? Because companies prefer them over severance payments. Severance isn’t a legal obligation, but it’s something companies do to eliminate risks associated with firing employees. Despite what is said about “at will” employment, termination law is so complex (as it needs to be) and has so many special cases that, except in ironclad cases, the employer is at some risk of either losing the case, or of winning but damaging its reputation in the process. Perhaps more importantly, because lawsuits are expensive and time-consuming but PR attacks are cheap, severance payments exist to prevent disparagement by the employee. (Warning: never explicitly threaten to disparage a company or reveal damaging information in a severance negotiation. That’s illegal. Don’t threaten legal action either, because doing so will summon the opponent’s attorneys, who are better-skilled negotiators than the people you’d otherwise be dealing with. Best, for a start, is to list what the company has done wrong without suggesting your course of action, whether it be a lawsuit, disparagement, or a talent raid. If you want to disparage your ex-employer should the negotiation fall through, that’s fine. Threatening to do so in the context of a financial negotiation is illegal. Don’t do it.) Predictably, companies would prefer not to cut severance checks for fired employees, while still mitigating the risk of being pursued afterward. That’s where the PIP comes in. It’s “documentation” that the employee was fired for performance reasons, intended to make him think he has no recourse.

If an employee is fired for objective, performance-based reasons, then he has no legal claim. He couldn’t do the job, which means he’s eligible for unemployment insurance but not a judgment against the employer. This is relatively easy to prove if the work is objectively measurable, as in many blue-collar jobs. However, most jurisdictions also enable an employee to seek recourse if he can establish that a lower performer was retained. If Bob is fired for producing only 135 widgets per hour (compared to a requirement of 150) while Alan, the boss’s son, keeps his job while delivering 130, then Bob can contest the termination and win. But if Bob was the only person below that standard, he can’t. Also, if Bob can establish that his low performance was caused by bad behavior from others, such as his manager, or that he was unfairly evaluated, he has a claim. This defense rarely works in objective, physical labor, but can be played pretty easily in a white-collar context (where work performance is more sensitive to emotional distress) and, even if the employer wins the lawsuit, it comes off looking bad enough that companies would prefer to settle. It is, of course, impossible to objectively define productivity or performance for white-collar work, especially because people are invariably working on totally different projects. What this means is that an “airtight” performance case for a termination is pretty much impossible to create in a white-collar environment. This is what severance contracts, which usually entail the right to represent oneself as employed, a positive reference, and enough money to cover the expected duration of the job search, are for: to give the person what’s necessary to transition to the next job, and to leave the person feeling treated well by the company. PIPs are seen as a “cheaper” way to get rid of the employee. Instead of cutting a 3-month severance check, keep him around on make-work for a month and then cold-fire him.

I’m not a lawyer, but I don’t think that a PIP does much to reduce lawsuit risk, because wrongful PIPs are just as easy to initiate as wrongful terminations. Most PIPs contain so many factual inaccuracies that I wouldn’t be surprised to learn that they weakened the employer’s case. So why do they really exist? Not to improve performance: by the time the employee and manager are documenting each other’s minor mistakes in anticipation of a lawsuit, the relationship is long over. Nor is the real purpose to strengthen the employer’s case in court. The actual purpose of the PIP is to reduce the likelihood of ever going to court by making the target feel really, really shitty.

People tend to pursue injustices when they perceive a moral asymmetry between themselves and an opponent: the pursuer feels he was right, and the opponent wrong. On an emotional level, the purpose of severance payments is to make people feel good about the company, so that they look back on their experience and think, “It’s unfortunate that it didn’t work out, but they treated me well up to the end, it didn’t hurt my savings, and I got a better job”. They won’t react, because they don’t feel wronged. The purpose of the PIP is to go the other way and make the employee feel bad about himself. That’s all. Most PIPs are so badly drawn as to be useless in court, but if the employee is made to feel like a genuine loser, he might slink away in shame without raising a challenge– either in court or in the court of public opinion. Regarding the latter, the PIP makes it seem as if the company “has something” on the employee that could be used against him in the future. “Don’t ask for severance and we won’t show this PIP to anyone.” That, by the way, is extortion on the part of the company, but that’s a discussion for another time.

Another function of the PIP is to obscure the real reason for a termination. Some percentage of terminations are for objective performance or ethical reasons, where it’s obvious that the person had to be fired. The employee has no legal case, and would embarrass himself even to bring the matter up. Those cases are uncommon in technology, where performance can be very context-sensitive but truly incompetent people are rare. Some other percentage of terminations occur for discriminatory or legally-protected retaliatory reasons. Those are also rare, noting that “retaliation” in the legal context has a specific, conservatively-interpreted meaning. The vast middle (between the “performance” and “retaliatory” extremes) is what we might call political terminations. There’s some disagreement about how people should be working or how priorities should be set, there’s possibly a personality conflict, and a person either with power or with access to those in power decides to remove a “troublemaker”. In this middling 80 to 90 percent, it’s impossible to pick apart who’s in the wrong. The employee? The manager? The team? The HR department? Possibly all of them, possibly none of them, usually some of them. Sometimes these disagreements are remediable, and sometimes they get to a point where (right or wrong) the employee must be let go. A severance payment allows the company to do this in a way that leaves most parties (except the finance department, annoyed at paying 3 months’ salary to fired employees) satisfied. The alternative is the PIP, which involves pretending the problem is an objective performance issue, and hoping the employee will believe it.

A PIP is unfair to pretty much everyone– except the finance department, which can claim it “saved money” on severance payments. As I’ve said, PIPs are pretty much final: they destroy the relationship. The PIP’d employee has to come to work for a manager who, in his mind, has just fired him. The manager has to go through the motions of this kangaroo court, and his ass is on the line if he makes any mistake that increases the firm’s legal risk (which is not hard to do), so he resents the employee in a major way. The rest of the team has to put up with a disgruntled employee who is now pretty much useless, splitting his effort between the PIP make-work and his job search. In short, someone in HR or finance gets to look good by “saving” a few thousand dollars while externalizing the costs to the target’s team.

A PIP is a threat to fire someone, and threats are almost always counterproductive on either side of a negotiation. By the time a PIP is even on the table, the employee should just be fired. Same day. Write a contract that gives him a severance check in exchange for an agreement not to sue or disparage the company, and let everyone move the fuck on. No CYA “documentation”. You’ve made your decision to separate. Now execute it, but do it well and do it fairly.

I’m going to step away from all this nastiness for a bit, because the vast majority of employees aren’t intentional low-performers, most managers aren’t jerks, and I’d like to believe that most companies aren’t driven by bean counters in HR suites. Let’s take a positive spin: what should a manager do if he genuinely wants to improve an employee’s performance, behavior, or impact? Although formal PIPs are toxic, the continual process of improving performance is one in which manager and employee should always have an interest, whether that employee is a 1st-, 7th-, or 10th-decile performer. Who doesn’t want to become better at his job? What manager doesn’t want his team to be better? Performance improvement is something people should be doing at all times, not just in times of crisis.

First, it’s important to get terminology right. Many technical organizations like to be “lean” or “flat”, which means that a manager has 10 to 50 reports instead of the traditional 3 to 5. A manager with more than five reports can’t possibly evaluate each of them for performance; what he observes isn’t a known performance problem but a known impact problem. It might be, and might not be, a problem of individual performance. If it’s not, and the problem is presented as a “performance” issue, the employee is going to hate the manager’s guts for (a) not seeing what is really going on, and (b) throwing him under the bus before understanding the issue.

Managers are typically afraid to investigate the real causes of impact problems, for two reasons. The first is that the cause, once discovered, is almost always unpleasant. The employee might simply be unable to do the job, which means he must be fired. Not pleasant. The cause might be something like a health problem. Even more unpleasant, and it legally complicates any termination process. Most often, the cause of the problem– the “blocker”– is either a mistake made by the manager or a problem caused by someone else on the team, usually a person of high influence whom the manager likes– a “young wolf”. That’s extremely unpleasant, because it requires the manager either to admit his own mistake, or to do a lot of work to rectify the problem. For this reason, managers typically don’t want to peer under this rock until their bosses, or the HR department, force their hand. Most managers aren’t malevolent, but they’re just as averse as anyone else to grueling, unpleasant work that consists largely of confronting and repairing failed relationships, and they’d rather the problem just go away.

The second reason why managers rarely investigate low-impact employees is the convexity of the impact curve; in a typical organization, a 10th-decile employee might be a “10x” performer– that is, as valuable as ten average employees– while 8th decile is 3x, 6th is 1.5x, 4th is 0.5x, and 2nd is 0.25x. A manager gains a lot more by encouraging a 7th or 8th-decile performer to move up one decile than by bringing someone from the bottom into the 3rd- or 4th-decile. Of course, it’s possible that uncovering the “blocker” might move someone from the bottom into the upper-middle or even the top, but people are naturally distrustful of sudden moves. Even if removing the blocker puts the employee in the 9th or 10th-decile for personal performance, he’s extremely unlikely to do better than even the 4th-decile for impact, because his sudden change will be distrusted by the rest of the organization. Managers can’t easily mentor or sponsor people in this position either, since the rest of the team will see it as “rewarding failure” for the low-impact employee to receive disproportionate managerial attention or support. Right or wrong, most managers aren’t willing to risk their credibility in order to move someone from low impact to high.
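To make the convexity point concrete, here is a tiny sketch using the decile multipliers quoted above. The figures are the essay’s illustrative numbers, not measured data, and the `gain` helper is mine:

```python
# Decile -> impact multiple, using the illustrative figures from the text.
# These are hypothetical numbers, not measurements of any real organization.
impact = {2: 0.25, 4: 0.5, 6: 1.5, 8: 3.0, 10: 10.0}

def gain(low, high):
    """Marginal impact of moving an employee from one decile to another."""
    return impact[high] - impact[low]

top_gain = gain(8, 10)     # moving a strong performer up: 10.0 - 3.0 = 7.0
bottom_gain = gain(2, 4)   # rescuing a struggling one:    0.5 - 0.25 = 0.25

# The same two-decile move is worth roughly 28x more at the top of the
# curve, which is why a manager's attention rationally flows there.
assert top_gain / bottom_gain > 25
```

Under these made-up numbers, the asymmetry is stark: the identical amount of improvement effort returns an order of magnitude more impact when applied near the top of the curve, which is exactly the incentive problem the paragraph describes.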

So what should a manager do if he genuinely wants to improve an employee’s impact or performance? Let’s first address what not to do, aside from what has already been covered. First, written communication about any performance or impact issue is an absolute no-no. It’s an aggressive move, and will be taken as such. Sure, keeping these conversations verbal reduces the employer’s leverage in a severance negotiation. Who cares? That leverage is good for the finance department, but bad for the relationship between the employee and his manager, his team, and the company, and those relationships are what actually need repair. If this improvement process is done properly, then the severance conversation might never happen, which is what everyone should be aiming for. HR wants to cut people loose and to do so cheaply, but the manager should, at this point, still be trying to keep that from happening in the first place.

Second, managers should never disparage their employees, and should defend them to others. Any concerns should be addressed one-on-one and verbally. Managers tend (surprisingly) to be insecure, because they can steer the team but don’t drive it, and because they need high credibility with their team in order to be effective. This is precarious and leaves them with less power than one might expect. On the other hand, most managers have this credibility. They should use it to keep the employee’s reputation intact so that, if they do successfully intervene with the troubled employee and bring his performance up to par, his impact can also rise to that level.

There are two kinds of low-impact employees: those whose approach is ineffective, and those who aren’t managing themselves properly (i.e. aren’t getting any work done). For the first, the best words to use are “I wish you were doing this differently.” It’s firm, but objective. The manager isn’t saying, “You’re a fuckup”, which is going to lead to “No, I’m not”. He’s saying, “I understand what is effective and what is not in this organization, and you’d be more effective if you did this”. Since he’s the manager, that statement should (and almost always will) be taken seriously. The second case is harder, because it’s impossible to state the problem without offending the employee, and it’s hard to uncover the cause of a motivational crisis when the Rules of Work require that the employee pretend there isn’t one. This calls for a “Tell me what you need” discussion. It feels difficult, because it seems like there’s a power inversion. There isn’t. There’s mutuality. The employee’s job is to figure out what he needs from the manager in order to succeed at the company, to deliver if these requests are honored, and to consider finding another job (internally or externally) if they can’t be met. Unlike the PIP and its ceremony, this actually works, but it’s not a rapid process. HR wants to turn “low performers” into mid-grade meat or ship them out within half a quarter. A “Tell me what you need” discussion occurs over months. Why? Because the real causes of an employee’s low impact are usually too complex to be remedied in 30 or even 90 days. For example, if the cause is a bad technical decision made from above that damages his ability to have an impact, the remedy is shifting the employee to a place where he’s less affected by it or can shine in spite of it. If it’s bad behavior from someone else on the team, I have to paraphrase Linus Torvalds: you now have two managerial problems. It’s fucking hard, but it’s what managers are supposed to do. It’s their job.

The goal of these discussions shouldn’t be to “improve performance” in some abstract, meaningless way. Turning an ineffective employee into a 3rd-decile nobody is wasted effort. You’ve turned someone you were about to fire into someone just effective enough to be hard to fire (without pissing others off). It’d make more sense to release him and hire someone new. So that goal makes no sense. The goal should be a process of discovery. Can this person become a major asset to this organization? If no, terminate employment, even if you really like him. Be nice about it, write a severance check, but fire and move on. If yes, proceed. How do we get there? What are the obstacles? If his reputation on his team is lost, consider a transfer. If he discovers he’d be more motivated doing a different kind of work, get him there.

This said, there are two “big picture” changes that are required to make managerial environments more stable and less prone to inadvertent toxicity and unexpected surprises. The first is that managers need to be given proper incentives. Rather than being rewarded for what they do for the company, managers are typically rewarded or punished based on the performance of their team, and their team alone. What this means is that managers have no incentive to allow outgoing transfers, which are good for the employee and the company but can be costly, in the short term, for the team. With these perverse incentives, it seems better for the manager to hit a high-potential, 3rd-decile performer with an intimidation PIP and capture the short-term “fear factor” bump (into the 4th- or 5th-decile) than it would be to let him find a role where he might hit the 8th decile or higher. Managers should receive bonuses based on the performance of outgoing transfers over the next 12 months, and these bonuses should be substantial, in order to offset the risk that managers take on when they allow reports to transfer.

The second problem is with the “lean” organizational model where a manager has 10 to 50 reports. It’s not that hierarchy is a good thing. It’s not, and a company that doesn’t have a large degree of extra-hierarchical collaboration (another process that most companies fail to reward) is doomed to failure. The problem is that conceptual hierarchy is a cognitive necessity, and a company that is going to attempt to assess individual performance must have processes that allow sane decision-making, which usually requires an organizational hierarchy. A manager with 25 reports can see who the high- and low-impact people are, but rarely has the time to assess causes on an individual basis. He has to delegate assessment of individual performance to people he trusts– his lieutenants who are usually, for lack of better terminology, brown-nosing shitheads. This is a classic “young wolves” problem.  These lieutenants rarely act in the interest of the manager or organization; on the contrary, they’ll often work aggressively to limit the impact of high-potential employees who might, in the future, become their competition. This is what makes “too nice” management fail, and it’s like the problem with right-libertarianism: a limited, hands-off government, managed poorly, allows an unchecked and more vicious corporate power to fill the vacuum. “Flat” organizations encourage unofficial hierarchies in which young wolves thrive. It’s better to have more hierarchy and get it right than to let thugs take over.

Another major problem is that the managerial role is overloaded. The manager is expected to be a mentor, a team-builder, and a boss, and those roles conflict. It would be hard to balance these obligations over a small number of reports; with a large number, it’s impossible. Paradoxically, managers have both too much power and too little. They can make it impossible for a report to transfer, or destroy his reputation (since they always have more credibility than their reports), so they have unlimited power to make their reports’ work lives hell– in technology, this pattern is called “manager-as-SPOF”, where SPOF means “single point of failure”: a potentially catastrophic weak point in a system– but they almost never have the power they actually need to get their jobs done. One example is in performance reviews. Managers are completely fucked when it comes to reviewing low-impact employees. Writing a negative review is just as bad as a PIP, and makes it incredibly difficult for that employee to transfer or advance, even several years after the review. Writing a good review for a low-impact employee sends the wrong message: that his low performance is the norm, or that the manager is inattentive. Since most employees don’t like being low-impact and would rather have management turn its attention toward resolving their blockers, this also breeds resentment, in the same way that incompetent teachers who compensate with grade inflation are disliked. Another example is in termination. Managers would rather terminate employees with a few months’ severance than go through the rigamarole of a PIP. It saves them the extra work and their teams the morale costs, and it has the same conclusion (the employee leaves) at, overall, less cost to the company. No sir, says HR: gotta write that PIP.

I haven’t done this matter justice, and I’ll probably have to come back to it at a later time. I hope, though, that I’ve established not only the mechanisms by which managers might actually improve the impact of their reports, but also the organizational problems that make low-impact people inevitable. The ultimate goal is not to “improve low performance” after there’s a problem, but to prevent low impact from happening in the first place.

Creativity and the nautilus

Creativity is one of those undeniably positive words that is reflexively associated with virtue, leadership, intelligence and industry (even though I’ve known creative people who lack those traits). Yet, 28 years of experience has taught me that people don’t have much patience for creative people. We are like the nautilus, a strange-looking and, some would say, unattractive creature that leaves behind a beautiful shell. Most people are happy to have what we create, but would prefer never to see the marine animal that once lived inside the art. All they want is to pick up the shell once it’s left on the shore.

Creativity isn’t inborn and immutable. It’s an attribute of which most people end up using (and, in the long term, retaining) less than 1 percent of their natural potential. It grows over time as a result of behaviors that most people find strange and potentially unpleasant. Creative people are curious, which means they often seek information that others consider it inappropriate for them to have. They’re passionate, which means they have strong opinions and will voice them. Neither of these is well-received in the typical conservative corporate environment. Worse yet, creative people are deeply anti-authoritarian, because it’s simply not possible to become and stay creative by just following orders– not when those orders impose compromise over a substantial fraction of one’s waking time. It never has been, and it never will be.

This doesn’t mean that creativity is only about free expression and flouting convention. Creativity has divergent and convergent components. An apprentice painter’s more abstract work may seem like “paint splatter”, but there’s a divergent value in this: she’s getting a sense of what randomness and play “look like”. She might do that in January. In February, she might do a still life, or an imitation of an existing classical piece. (Despite the word’s negative connotations, imitation was an essential part of an artist’s education for hundreds of years.) It’s in March that she produces art of the highest value: something original, new, and playful (divergence) that leverages truth and value discovered by her predecessors, trimmed by the rules (convergence) that they teach. Computationally, one can think of the divergent component as free expansion and the convergent component as pruning. The convergent aspect of creativity requires a lot of discipline and work. Novelists refer to this process as “killing your darlings”, because it involves removing from a narrative the characters or themes that an artist inserted for personal (often emotional) reasons but that add little merit to the completed work. For technology, Steve Jobs summarized it in three words: “Real artists ship”. It’s intellectually taxing, and many people can’t do it.
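The expand-then-prune framing above can be sketched in code. Here’s a minimal, purely illustrative Python sketch (the function names and the scoring rule are hypothetical, invented for this example): generate many candidate variations cheaply (divergence), then prune ruthlessly against a quality rule (convergence).

```python
import random

def diverge(seed, n=50):
    """Divergence: freely generate many variations of a starting idea."""
    rng = random.Random(0)  # seeded, so the example is deterministic
    return [seed + rng.uniform(-1.0, 1.0) for _ in range(n)]

def converge(candidates, score, keep=5):
    """Convergence: prune ruthlessly, keeping only the best few."""
    return sorted(candidates, key=score, reverse=True)[:keep]

# Hypothetical quality rule: candidates closest to 0.5 score highest.
def score(x):
    return -abs(x - 0.5)

ideas = diverge(seed=0.0)      # cheap, expansive, playful
best = converge(ideas, score)  # disciplined "killing your darlings"
print(len(ideas), len(best))   # prints "50 5"
```

The divergent step is cheap and unconstrained; in the essay’s framing, nearly all of the human effort corresponds to the convergent step, where most of what was generated is thrown away.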

Convergent creativity is what our culture’s archetype of “the artist”, for the most part, misses. Youthful “experience chasing”, social climbing, extremes either of fame and wealth or of miserable, impoverished obscurity, all have some divergent value. On the other hand, that type of stock-character artist (or real Williamsburg hipster) has almost no hope of producing anything of artistic value. Divergence alone leads to mutation and insanity, not creation. This also explains why the highest-priced “modern art” is so terrible. We have a culture that worships power, money, and social access for their own sake and so the “brand-name artists” whose connections enable them to be paid for divergence (and divergence only) are treated as high priests by our supposed cultural leaders. The result of this is that terrible art sells for astronomical sums, while real artists often work in obscurity.

Most people, when they think of creative workers, whether we’re talking about writers or game designers or computer programmers, only seem to understand the divergent part. That gives them the impression that we’re a coddled bunch with easy jobs. We “get paid to have fun”. To quote Mad Men from the perspective of an “account man”, we’re “creative crybabies”. Bullshit. Creativity is rewarding and it can be a lot of fun, but it also demands an incredible amount of work, often at inconvenient hours and at very high levels of sustained effort. Wake up at 5:00 am with a great idea? Get to work, now. If that idea comes at 10:30 pm instead, stay up. Most of us work a thousand times harder than “executives”, the private-sector bureaucrats whose real jobs are white-collar social climbing.

As with the adiabatic cooling of an expanding gas and the heating of a compressed one, divergence has a cooling effect while convergence is a hot process. On the cooling side, free writing tends to calm the nerves and diminish anger, and improvisational art is deeply anxiolytic and invigorating. But taken too far without its counterpart, divergence leads to softness and ennui. A metaphor that programmers will understand is writing a 100,000-line program on paper, or in a featureless text editor (e.g. Notepad) without a compiler or interpreter. In the absence of the exacting, brutal convergence imposed by the compiler (which rejects programs that don’t make sense), the programmer is deprived of the rapid and engaging divergence/convergence cycle on which programmers depend. Over time, the hand-written program will accumulate shortcuts and errors as the programmer loses touch with the substrate on which it will run. Convergence, conversely, is heating. It’s very taxing to hack branches off of one’s “search tree”. It’s painful. As humans, we’re collectors; we don’t like setting things we’ve found out of view, much less discarding them. Two or three hours of this work per day is more than most people can reliably perform. Although extremely important, the convergent aspect of creativity is exhausting and leads to frayed nerves and difficult personalities.

Business executives’ jobs are social climbing, internal and external. Their jobs are intentionally made extremely easy, to minimize the probability of social hiccups, for which tolerance is often zero. They’re paid five times as much as any person could possibly need, to eliminate financial worry and allow them to purchase ample domestic services. They’re given personal assistants to remove pollution and stress from their communication channels. This makes sense in the context of what companies need from their executives. An executive only needs average intelligence and social skill, but he needs to sustain that level reliably, 24/7, under a wide range of unpredictable circumstances. Rare moments (“outlier events”) are where creative people earn their keep, but where executives set themselves on fire. To be an executive, one needs to be reliable. Executives may have the authority to set working hours, but they have to be on time, all the time, every day.

By contrast, the creative person often “needs a walk” at 3:00 in the afternoon and has no qualms about leaving work for an hour to enjoy an autumn day. After all, he does his best work at 7:00 am (or 11:30 pm) anyway. Creative people earn their keep in their best hours, and are wasting time in the mediocre ones. From an executive perspective, however, this behavior is undesirable. Why’d he up and leave at 3:00? There could have been a crisis! What risk! In this view, creative people are full of risk, socially gauche on account of their tendency to exhaust themselves mentally in the desire to produce something great, and therefore not reliable. Clearly “not executive material”, right? I disagree, because I think the discrepancy is one of role rather than of constitution. Most creative people could become strait-laced executives if they needed to. Creatively inclined entrepreneurs often do find themselves in this role, and many play it well. There is, however, an either/or dynamic at work: it’s necessary to choose one role and play it well, rather than attempting both and very likely failing at each. Someone who is exhausted daily by the demands of high-level creativity (and I don’t mean that creative work isn’t rewarding; it is, but it’s hard) can’t be a reliable people-pleaser in the same week, any more than a clay jar fresh out of a 1000-degree kiln can be used, before it cools, to store ice cream.

For this reason, business executives tend to look at their most creative underlings and see them as “not executive material”. Well, I agree this far: not now. That perception alone wouldn’t be such a problem, except that people tend to judge others’ merit on a “just like me” basis, especially as they acquire power (ego confirmation). They tend to be blind to the possibility that people very different from them can be just as valuable. Executives tend, for example, to favor mediocre programmers who don’t exert themselves at work, and who therefore retain enough mental energy for social polish. The technical mediocrity that settles over most software companies is the long-term result of this. It’s not that companies go out of their way to “promote idiots”; it’s that they are designed to favor reliable career politicians over “risky” hard workers whose average performance is much higher, but whose lows are lower.

Would a creative person make a good business executive? I believe so, but with a specific variety of mentoring. Creative people given management positions generally find themselves still wanting to do the non-managerial work they did before the promotion. The problem here is that the manager can easily end up hogging the “fun work” and giving the slag to his subordinates. This will make him despised by those below him, and his team will be ineffective. That’s the major obstacle. Creative people tapped for management need to be given a specific ultimatum: you can go back to [software programming | game design | writing] full-time in 6 months and keep the pay increase and job title, but for now your only job is leadership and the success of your team. Good managers often find themselves taking on the least desired work in order to keep the underlings engaged in their work. It’s unpleasant, but leadership isn’t always fun.

I’ll go further and say that creative discipline is good practice for business leadership. Business leaders can’t afford to be creative at the fringe like artists or computer programmers, because the always-on, reliable social acumen their jobs require precludes that kind of intellectual exhaustion, but the processes they need to manage are an externalized version of the self-management that creative people must develop in order to succeed. They must encourage their subordinates to grow and test new ideas (divergence), but they also need to direct efforts toward useful deliverables (convergence). Many managers who fail do so because they only understand one half of this cycle. Those who focus only on convergence become the micromanagers who complain that their subordinates “can never do anything right” and who rule by intimidation. They fail because the only people who stay with them are those who actually can’t do anything right. The too-nice bosses who focus solely on divergence will either (a) fail to deliver and be moved aside, or (b) be unprepared when young-wolf conflicts tear their organizations apart. So, although the creative personality appears unsuited for managerial roles in large organizations, I’d argue that creative experience and discipline are highly valuable, so long as the manager is prepared to sacrifice, temporarily, the creative lifestyle for professional purposes.

The results of creativity are highly valued, although sometimes only after an exhausting campaign of persuasion. The creative process is deeply misunderstood, creative roles are targets of misplaced envy because of their perceived “easiness”, and the people who have the most creativity (the top 0.1%) are rarely well liked. People want our ideas, but not us and our “attitudes”. The shells, not the gnarly creatures. I invoke Paul Graham’s explanation for why “nerds” are rarely popular in high school: most high schoolers make a full-time job of being popular, while the nerds are too busy learning how to do hard things, such as programming computers or playing musical instruments. We see the same high-school social dynamic in the slow-motion cultural and moral terrorist attack that we call “Corporate America”. People can invest themselves in working really hard in pursuit of the chance (and it’s far from a certainty) of producing something truly great. Or they can throw themselves wholesale into office politics and grab the gold. The world is run by those who chose the latter, and the cultural, artistic and moral impoverishment of our society is a direct result of that.

Breaking Men: bookends of the American Era

WARNING: spoilers (Mad Men, Breaking Bad) ahead.

AMC has developed, over the past five years, two television shows of extremely high quality that seem unrelated: Mad Men and Breaking Bad. The first centers on Madison Avenue in the 1960s, and it depicts the ascendancy of the “WASP” upper-middle class into a makeshift upper class, following the power vacuum created by the evisceration of the old elite in the 1930s and ’40s, as its characters are chased upward by social upheaval and their own obsolescence. The lions of Mad Men become rich, aggressive businesspeople because they are getting older and there is nothing else for them to do. Although it can be cynical and morose, the show is eros-driven, focused on an elite sector of the economy responsible for building American middle-class culture, with still-new and exciting products like air travel, automobiles, and (well…) mass-produced, cheap tobacco cigarettes. What pervades the show is that these people should be happy: they’re rich and important in the most powerful country on earth, in what seems like the most exciting time in history, with passenger air travel a reality and space tourism “inevitable” within 25 years. (About that…)

An interesting exercise regarding Mad Men is to place its environment and characters in a modern context. Most young professionals (outside of technology, where their jobs hardly existed that long ago) agree that the “golden age” was decades ago, when hours were short and, though it was harder to get in the door of top firms, career advancement once inside was virtually guaranteed (for a man, at least). Peggy and Don, who are frequently in the office at 7 o’clock, seem remarkably dedicated in comparison to their drunken colleagues. Whether this depiction of young professional life in the 1960s as a “golden age” is accurate is a matter of much controversy, and Mad Men takes both sides. On one hand, Pete Campbell, under 30 and not especially well compensated at the time, is able to buy (!) a house in Manhattan for $32,000 ($230,000 in today’s dollars). On the other, the work environment is full of fear and authority (Pete is almost fired for pitching his own idea to a client), and social anxieties are so severe that self-medication with alcohol and cigarettes is a necessity. A bit of useful context: it wasn’t at all normal or socially acceptable then for professionals to leave jobs, as it is now, except for obvious promotions like first-day partnership at the new firm. So getting fired, or being demoted until one quit, wasn’t the annoyingly unexpected, unpaid, six-week vacation it is today: it could spell long-term ruin (cf. Duck Phillips). Every character in Mad Men lives on the brink of career disaster, and they know it.

If there is a modern comparison for the advertising world of the 1960s, it would be investment banking between 1980 and 2008: a lucrative, selective, cutthroat, and often morally compromised industry whose cachet entitled it to the middle third of Ivy League students: people who were clearly capable, but not so capable as to be walking authority conflicts. By the same token, advertising was a far more creative industry (at least at junior levels, and within the confines of the law) than investment banking could ever be. Also, in the 1960s Silicon Valley wasn’t on the map, and air travel was still prohibitively expensive for many people, so the creative and entrepreneurial East Coasters who might be drawn to California in 2009 would instead have been drawn into advertising in New York. This created a very different mix than the cultural homogeneity of modern investment banking. In 2012, these people wouldn’t have been anywhere near each other. Pushed forward fifty years, Peggy would be an elite graphic artist with some serious coding chops, but living on a very moderate income an hour away from Manhattan by train. Pete would be a pudgy private-equity associate, far sleazier and less charming than his 1960s counterpart. Roger Sterling probably would have gone into law and, despite being utterly mediocre, attended Harvard Law, entered a white-shoe (“biglaw”) firm, and made partner on his family connections. Harry Crane would be in Los Angeles, but wise enough to the times not to enter the entertainment industry proper; he’d be the third co-founder (5% equity holding) of a “disruptive” internet startup about to become a thorn in MTV’s side. These four would have nothing to do with each other and probably never meet at all. Joan? It’s hard to say where she’d be or what she’d be doing, but she’d be good at it. Don is even harder to place, being an embodiment of the zeitgeist in the truest sense of the word.
His character wouldn’t exist if shifted ten years in either direction. He’s a pure product of the Great Depression, the Korean War, and the 1950s-60s ascendancy of the U.S. middle class. Dick Whitman would have taken an entirely different path. My assessment of how he would rise (into venture capital rather than advertising, with the first half of his career spent in the developing world in the 1990s) I’ll save for another post; this one needs to give its remaining time to a seemingly opposite show: Breaking Bad.

Mad Men opens the American Era on the coast where the sun rises, and Breaking Bad closes it in the rocky, red desert where the sun almost sets. Walter White is, in many ways, the opposite of Don Draper: fifteen years older at the story’s onset, but three decades older in spirit, a failed chemist who squandered his genius, a schoolteacher eventually fired for misconduct, and a man who turns to vicious crime out of a deep hatred of society (rather than the manipulative and cynical dismissal that Draper harbors). “Draper” might be an allusion to the cloak its bearer has drawn over himself; White begins the show naked (almost literally). He’s a feeble, desperate man who has wasted his genius (for reasons left unclear, but that seem connected to his massive ego and a callously discarded relationship) to become a mediocre teacher in a mediocre school, working at a car wash on weekends to support his family. In the first episode, he is diagnosed with a cancer that his shitty health insurance won’t cover. Out of financial desperation, he teams up with one of his failed students to cook methamphetamine.

Draper seems to glide into advertising, almost by accident. Dick Whitman didn’t assume Draper’s identity because he wanted to become an ad magnate; he did it to escape a war whose effort he found pointless. He rose with the rising tide of the American middle class, and was wily enough to come up a bit faster than the rest. Walter White’s simultaneous ascendancy (into the drug underworld) and free fall (into moral depravity), on the other hand, occur by brute force, though only some of the force is his. The world is collapsing, and he becomes somewhat weightless by falling with it, but in full awareness of the thud at the bottom. That Walter will be dead, or ruined and humiliated, by the end of the fifth and final season seems obvious at this point; the open questions are how far down he will go (morally speaking) and whether he will gain a tragic recognition of the misery he has inflicted upon his family and the world.

The catalyst for Walter White’s turn to crime is a diagnosis of lung cancer, giving him about a year to live. It may be a stretch to connect this with Mad Men, but one of the Mad Men agency’s primary accounts is Lucky Strike cigarettes, featured prominently in the first episode (“Smoke Gets In Your Eyes”) and throughout the season. Mad Men features an optimistic, future-loving backdrop of industrial ascent and capitalistic triumph. Breaking Bad‘s backdrop is one of industrial waste, wreckage, pollution, and toxicity. Most obvious is Walter’s product, an artistically pure form of one of humanity’s most poisonous substances: crystal meth, nicknamed “Blue Sky” for its extremely high quality and blue color. Drug kingpin Gus Fring hides behind a fried-foods chain arguably responsible (in small part) for American obesity, while Walter’s megalab resides in a squalid industrial laundry. Industrial capitalism’s messes are everywhere. Walter’s cancer illustrates the invasiveness of toxicity: the damage lives within the protagonist’s own body, and threatens to kill him at any time.

Connecting Breaking Bad to the demise of the American middle class in general is relatively straightforward. Almost all of the major causes of American decline (and the collapse of its middle class) are featured, albeit mostly indirectly, in the show. The international, big-picture causes of American decline are (a) international violence, (b) the coddling of our parasitic upper class (resulting in, among other things, irresponsible deficit spending) and (c) our reliance on polluting, increasingly scarce, 20th-century energy sources. Breaking Bad features two of these three. The first is featured prominently, in the never-ending and international cycles of violence involving Gus Fring, the Mexican cartel, Tuco, “the Cousins”, Hector, and Hank. The second (the parasitic upper class) is shown indirectly: through Walter White’s turn to parasitism, the callous sleaze of Saul Goodman, the two-faced upper-middle-class respectability of Gus Fring, the illegal financing behind Skyler’s “American Dream” of owning a business, and the depressing failure of small business owner Ted Beneke and his industrial-era enterprise.

Those are the big-picture causes of American decline, which aren’t terribly relevant to Breaking Bad and its laser focus on one man. The finer-grained, more individual causes of the middle class’s destruction are the “Satanic trinity”: education costs, housing, and healthcare. Each makes an appearance in Breaking Bad, healthcare most prominently, with Walter forced into crime by the uncovered expenses of his cancer. Albuquerque did not experience the mostly coastal housing bubble, so housing makes a passive cameo, as a thing destroyed: Ted Beneke is killed by his house, while Walter and Jesse’s hideout (a home Jesse would inherit if he were better behaved) is ruined in a rather morbid way. A body is to be disposed of using acid, and Jesse uses the bathtub instead of acid-resistant plastic, destroying the tub, bathroom, floor and house in a blow-down of liquefied human remains. The third of the trinity, education, is featured only at the show’s outset: White is a brilliant but uninspired and miserable educator, and Jesse is a failed ex-student of his. Education cannot be featured further in Breaking Bad because education focuses on the future, and all its characters reliably have is the present. This extreme and desperate present-focus is also what makes Walter’s family (aside from his clear desire to protect them) deeply uninteresting to the viewer. Walter’s son is disabled but intelligent, and may have an interesting future, but for now he’s just an expensive and typically difficult high-school student. His daughter is an infant, and Walter will almost certainly not see her reach adulthood. Walter may wish to give them a future, but for now his focus is only on immediate survival.

If Breaking Bad is about the decline of the American middle class, it’s also about poison and the impossibility of containing it. Walter White begins the show with cancer: a parasitic clump of useless cells that kills the host by (among other things) releasing poison into the body. Realizing his life’s failure, and that he’ll be leaving his family with nothing, he begins his life of crime, which is also the only way he can afford treatment for his illness. Walter beats the cancer (for now, as of the end of Season 4) but in doing so he becomes the cancer. He’s fired from his job as a schoolteacher, signifying the end of his useful social function. He mutates into a useless “cell” and begins releasing poison into society, in enormous doses measured in hundreds of pounds of crystal meth per week. What Breaking Bad also shows us is that toxicity can never be contained, at least not permanently. The acid intended to dissolve a corpse destroys a home. Hamlet, the notorious Shakespearean antihero, killed only eight people. By the end of Season 4, Walter is indirectly responsible for a mid-air plane crash over a major city that killed 168 people (and would have been national news if real), he has cost his brother-in-law his career and much of his physical function, he has murdered several people, and he has placed his family directly in danger.

Mad Men and Breaking Bad may be entirely unconnected in their origins. These dramas exist in entirely different worlds, fifty years, half a continent, and at least one social class apart. The shows appear to have nothing to do with each other, aside from extrinsic aspects such as the cable channel (AMC) that distributes them. In fact, they’re connected only by the broad-but-thin story arc of American ascendancy, calamity, and decline.

There are two riddles posed in quick succession in Gödel, Escher, Bach. The first is to find an eight-letter word whose middle four letters are “ADAC”. The second is to find an eight-letter word whose first two letters, and its last two, are “HE”. Presented separately, both are very hard; I can think of only one word that solves each. Presented together, the riddle becomes much easier. I have similar thoughts about the unspoken (and quite possibly unrealized by the shows’ creators) connection between these two television shows. They are only companions if one knows the rest of the story, which is that a historical incarnation of the American nation (20th-century, middle-class) was born in the time of one and died in that of the other. Otherwise, they could have been set in entirely different worlds.
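Mechanically, the riddle reduces to filtering a word list against each constraint. Here’s an illustrative Python sketch; the tiny inline word list is a stand-in (in practice one would scan a full dictionary file such as /usr/share/dict/words):

```python
# Stand-in word list; a real search would use a full dictionary file.
words = ["headache", "abdicate", "heatwave", "sandwich", "adultery"]

# Riddle 1: eight letters whose middle four are "adac".
riddle1 = [w for w in words if len(w) == 8 and w[2:6] == "adac"]

# Riddle 2: eight letters starting and ending with "he".
riddle2 = [w for w in words
           if len(w) == 8 and w.startswith("he") and w.endswith("he")]

print(riddle1, riddle2)  # the same single word satisfies both constraints
```

Each riddle is hard for a human in isolation; intersecting the two constraints makes the shared answer obvious, which mirrors the point that the two shows identify each other only when read together.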

As Mad Men advances, the optimism of the era becomes clear. An enormous rocket ship is launching, and the characters’ anxiety comes from the fear that they might not be (or might not deserve to be) on it. What they foresee least of all is what will actually happen. Catastrophic urban decay in the 1970s and ’80s, especially in New York? Yeah right, that’s impossible. Too much money in these cities. Investment banking (a bland WASP safety career) eclipsing advertising as “the” coveted (and far slimier) “prestige” industry? Not possible; those guys are morons. The re-emergence of America’s defeated, obsolete upper class in the 1980s, symbolized by the election of a third-rate, once-leftist actor to the presidency? Impossible; all politics is center-left (cf. Eisenhower, Kennedy, Nixon) these days. The rapid transition of the authoritarian, leftist, communist assholes into authoritarian, right-wing “neoconservatives” who’d get us into unwinnable wars in the Middle East? Just ridiculous. An emergent and militaristic American upper class representing more of a threat to national and world security than the Soviet Union, which will implode in the late 1980s? Insane concept. The world in which the Mad Men live is shaking, and the perception that it’s about to be destroyed is very real, but no one can envision how it will be destroyed, or why. None can foresee the coming misery or spot its sources; all eyes are on that rocket ship. In Breaking Bad, we see the wreckage after that rocket comes back to earth five decades later, slamming into the cold New Mexico desert, and annihilating the middle (class, and geographical center) of an entire nation when it lands.

Breaking Bad is more personal and ahistoric than Mad Men. Walter White doesn’t care about “America” or the middle class or what is going to happen to either, because he’s not trying to exploit macroscopic social trends; he is just trying to survive, and he exploits human weakness (an ahistorical source) because it’s there. Bad‘s backdrop is the decay of “Middle America”, just as Requiem for a Dream is set against the calamitous end of the New York middle class, but it’s a story that could be told in any time. Its relevance is maximized by its currency, and the involvement of health insurance (instead of say, gambling debt) as a proximate cause of Walter White’s turn to crime suggests the current era and an external locus of original fault, but the personal and tragic elements of Walter’s story could be given another backdrop.

Breaking Bad does, however, belong in a technological society such as ours just as Mad Men belongs in a time of colored-pencil creativity and innovation. Both Mad Men and Breaking Bad grapple with morality, but the latter does so more fiercely. Mad Men illustrates the application of mediocre artistic talent to a legitimate purpose of low social value, in the pursuit of vain narcissism and reckless ambition. Breaking Bad features the application of extreme scientific talent to outright evil, in the pursuit of mere survival amid a world that is collapsing both on a personal (Walter’s cancer) and national (health “insurance”) scale.

What do these television shows tell us about ourselves and our history? First of all, Mad Men is set amid the birth of an American society, but not the first one. The American nation (as a people, united by ideas, though those ideas have evolved immensely over time) has existed for over three hundred years, and has been remarkably flexible and mostly progressive (the conservative movement’s recent turn toward political corruption, corporate coddling, millenarian religious superstition, xenophobia, and even misogyny is a blip, from a historical perspective). Just as “America” existed long before there was a Madison Avenue, it will persist long after the flaming wreckage of “health insurance” (i.e. the business model of taking money from the well and robbing them when sick, because sick people don’t fight thieves but the well have the money) and methamphetamine and the storm of underemployed talent (Walter White, the failed chemistry genius; unemployed Ph.D.s in #Occupy) pass, as these calamities inevitably will. Mad Men is anxiously erotic while Breaking Bad is fatally thanatoptic, representing the birth and death of just one society that will exist on this soil; these are bookends merely of a society, not the first or last one that this nation will produce. Walter White will probably die in his 50s, but his daughter may see a better world.

An interesting question is, “What’s between these shows?” Fifty years pass between two television dramas that, aside from historical alignment, have nothing to do with each other and are, in many ways, opposites. What would a 1980s counterpart feature? The best answer that I can come up with is that none should exist. Spring and autumn are the beautiful, unpredictable, gone-too-fast seasons that remind us of our mortality. Mad Men is set in May, and there are a few great days in it, but the May it gets is mostly the sickly humid kind where each day is either damp and rainy or oppressively hot, and in which February’s chill (the Great Depression, the wars, Dick Whitman’s miserable childhood) remains in the elders’ bones and will be there forever. Breaking Bad occurs in November, but not the beautiful, temperate red-leaf kind associated with northeastern forests; this autumn is set in a featureless desert where it’s cold and overcast but will never rain. With that said, to feature this American era’s dog days (the 1980s) seems artistically irresponsible, largely because it was a time of very little historical value. That era, that feeble summer in which America’s old parasitic elite (whose slumber made the middle-class good times of Mad Men possible, who used lingering racist sentiment to re-establish themselves in the 1970s and ’80s, and whose metastasis damaged that middle class and made the misery of Breaking Bad inevitable) re-asserted itself, doesn’t deserve the honor. Just my opinion, of course, and I was born in that time so I cannot say nothing of value came from it. Still, to put a forty-year asterisk between these two eras seems highly appropriate.

If nothing else, no one wants to see the point where the rocket, badly launched and at a speed below escape velocity, reaches its zenith and begins careening toward Earth.

Talent has no manager

For a store clerk, his “manager” is a boss: someone who can fire the clerk. By contrast, an actor’s “manager” works for the actor and can be fired by him. In one context, the manager holds power; in the other, he’s a subordinate. The word isn’t being misused in either case; the overloading comes from the shifting object of what is being managed. In both cases the person is accurately described as a “manager”, but the relationships are utterly different: opposite, in fact. Why is this? It requires an analysis of what it means to be a manager.

A manager is a person entrusted with decisions about a resource that its owner cannot as easily handle. A hotel’s manager operates the hotel day-to-day; if the owner is a different person, he passively reaps the benefits. For an entertainment personality’s manager, the asset being managed is the person’s reputation and career. In both cases, if the owner of the asset decides that the manager is doing a poor job, the manager is replaced. Managers work for owners. That much is clear. In the context of a low-level employee like a store clerk, the clerk doesn’t really own anything of value; his labor is replaceable. Although his supervisor is introduced as “his manager”, this person is really a manager for the store: the store’s manager, and the clerk’s boss.

The word “boss” (from the Dutch) replaced “master” in the early stages of the Industrial Revolution, because of the latter’s association with chattel slavery. In accord with the euphemism treadmill, “boss” eventually went out of favor as well, replaced by “supervisor”, which was replaced in turn by “manager”. Having a personal manager sounds a lot better than having a boss, but “boss” is the more accurate term. It may be a blunt word, but it works well for the purpose.

The job of the boss is to represent the interests of the company, not the employee. He or she cannot be expected to serve two masters, just as it would be inappropriate for an attorney to represent both sides of a lawsuit. The result is that employees often feel shafted when their “managers” fail to act as their advocates, instead preferring the company’s interests (or, at least, what the manager represents as the corporate interest) over the employee’s own. They shouldn’t feel this way. The boss is doing his job: the one he gets paid for. What’s unfair to the employee is not that his boss prefers the company’s interests over his, but that the employee has no advocate (except himself) with any power. Full responsibility for managing his talent falls on the employee.

Who is talent’s advocate? Generally, there’s none. Talent alone, one might argue, is not very valuable: experience, reputation, and relationships are usually required to unlock it. Because of this disadvantaged initial position, the person with the talent is expected to advocate for himself. Just as it’s dangerous to represent oneself in a court of law, it can be hard to negotiate on one’s own behalf when it comes to career matters. It helps to have an advocate who isn’t risking his personal relationships and reputation in the career process. So a lot of people don’t bother. Most people are underpaid by 10 to 50 percent because they are uncomfortable negotiating better compensation. Their bosses aren’t being evil; these people simply have no advocate and fail to represent themselves. For all that, I think compensation is an arena in which employees are actually more fairly treated than in intangibles. Companies can’t legally renege on promised compensation, and basic negotiating skills are often all it takes to get a fair shake there, but they can (and frequently do) use bait-and-switch tactics to lure the best people with promises of more interesting projects than what those people actually end up working on. This is a common way for companies to mislead employees into working for them, protected by the fact that no one wants a 5-month job on his CV.

In the workplace, talent is of high long-term importance. A company that can’t retain talent will face a creeping devaluation of its prestige, mission, and ultimately, its ability to succeed as a business. For this reason, there are a few progressive managers who advocate on the behalf of talent, at least in the abstract, because they know it to be important to the general interest of the company as much as it is for talented subordinates. This is admirable, but it should be considered an “extracurricular” effort, as it’s one that these managers take on at their own risk. When these efforts fail to show short-term (one quarter) results, the jobs of those who pushed for them end up on the line.

The reality is that this progressive attitude is quite rare. Most managers (who themselves lack advocates) are just as worried about keeping their jobs as the people they manage, and aren’t comfortable advocating for interests other than those that they’re required to represent. Companies give lip service to “mentorship” and career development, but often these are just ad copy, not real commitments. What looks like a progressive company is usually an adept marketing department. Moreover, most workplace perks are pure vanity. “Catered lunches” are a nice benefit worth a few thousand dollars per year, largely provided to reduce lunch times and portions (people who eat out are served large portions and become measurably less productive for two hours). That’s not a bad thing, but it’s not given out of altruism. And perks like an in-office Xbox or foosball table are just clumsily applied band-aids. Real professionals go to work for the work, not the diversions.

As I said, the boss cannot (even if he’d so desire) advocate for subordinate talent, because this would cause a conflict of interest between his professional duty to the company’s owners (or their proxies, who are his managers) and this ancillary role. It is also difficult, in a “lean” (euphemism for “we overwork managers”) environment where it’s typical for a manager to have 15 to 20 reports, for the manager to represent the interests of all the people under him. In practice, these “flat” organizations lead to necessary favoritism imposed by the clogged communication channels, while bosses who take “proteges” usually find that their disfavored subordinates decline in productivity and loyalty, which reduces the team’s performance on the whole. The result is that the manager must be disinterested and impersonal with all reports, so career advancement through typical channels is difficult if not impossible. “Extra-hierarchical” work (collaboration with people outside of one’s reporting structure) can be far more effective, because people tend to favor those who help them out but aren’t required to do so, but this effort also makes many managers feel threatened (it seems disloyal, it creates the appearance of someone attempting to engineer a transfer, and managers whose best reports are transferring lose face with their bosses).

If talent has no advocate, does this mean that the interests of talent are ignored? No, but they’re addressed in an often ineffective, far-too-late way. A talented person’s best move, in 90 percent of organizations, is to find another job at another company. Of course, people are free to do this, and often should, but constant churn is bad for the organization, and leads to a long-term arrangement in which the needs and desires of talent are ignored: if employees are going to leave after 6 months, why invest in them? Alternatively, a talent revolt often manifests in reduced productivity, which reduces talent’s leverage in negotiation and leads an organization to conclude that talented people are “troublemakers” and that hiring the best people isn’t worth it in the first place.

The position of talent is especially tenuous because it’s a dangerous asset to hold. If every thousand dollars in cash increased a person’s risk of mental illness and interpersonal failure by 0.01 percent just by virtue of existing, those who might be billionaires would either give the shit away or burn it. Of course, this isn’t the case. Tangible financial assets– real estate, wealth, ownership in productive enterprises– are largely inert in terms of “mana burn” (the tendency to inflict harm if unused). They are at constant risk of being diminished on the market, and this may be a source of anxiety for some people, but the only thing they can lose is their own value. Talent, on the other hand, becomes extremely detrimental if unused. A millionaire “trust fund kid” working jobs below his means (as an underpaid arts worker in Williamsburg, when his father could easily get him a “boring” but cushy and lucrative position as a junior executive) is not going to be especially unhappy, because the situation can be improved at any time. On the other hand, a person of high talent trapped in a mediocre career will only fall farther. Perversely, although it’s easier to find an advocate or manager for a building or a business, talent needs one more.

The role of “talent advocate”, I believe, is unfulfilled. A boss cannot fulfill this role without entering into a conflict of interest that endangers his career. Companies’ HR departments are useless toward this purpose as well. HR has an “eros” (hiring and advancement) and a “thanatos” (firing and risk mitigation) component. The first of these sub-departments works for the company’s management: often they mislead people into joining teams or companies with undeliverable promises of career advancement and work quality, not because they are malicious, but because they do not have the resources (or the duty) to investigate promises made by the managers for whom they work. An in-house recruiter can’t be expected to know that a position being advertised as “a Clojure job” is 90% Java. The second half of HR works for the firm’s attorneys, finance department, and public relations office, and its purpose is (a) to encourage failing employees to leave the company before formal termination, and (b) to prevent disgruntled or terminated employees from suing or disparaging the company in the future. As for the advancement of talented people already in the company, managers are trusted (not always wisely) to handle this on their own. This leaves nothing in a company’s HR department that can advocate for talent. It would, arguably, go against their professional duty to do so.

Talent needs an advocate independent of any specific company, since its best move is often to leave a disloyal or detrimental company outright. I believe that requirement of independence is quite clear, since companies’ obligations are to shareholders only and managers’ obligations are solely to their companies. (That most middle managers, in practice, place their career interests above both those of their subordinates and of their companies is an issue I will not address for now.) Independent recruiters, one thinks, might fulfill this role. Do they? My experience has been that I do better as my own advocate than when using a recruiter. As recruiters collect a percentage of a first-year salary, they aren’t incented to act in the employee’s or even the employer’s long term interests. They are paid for putting people in roles that last at least 12 months, but not for looking out for the employee’s career interests (which may involve a 10-year career at one company, or it might involve jumping ship almost immediately). Of course, there are good recruiters out there who truly value the long-term interests of the people they place; it’s just that my memory (and, to be fair, I haven’t used one since I was a 23-year-old nobody) is that there are far more ineffective or just plain bad ones, focused on quantity in job placement rather than quality. It’s not surprising for it to be this way, since job quality (holding a person’s level of skill constant) is only loosely correlated to compensation, based on which recruiters are paid. Since it’s companies that write recruiters’ checks, it shouldn’t be surprising where their alliances lie.

Talent may be more valuable than financial resources, but it’s harder to discover and it’s far more illiquid. A company can write a $25,000 check to a recruiter, while a talented person can’t easily pay the recruiter with “$25,000 worth” of talent. Financial assets can be sliced into pieces of any desired size that are useful to anyone, so recruiters can be paid with those. Talent can’t. A recruiter cannot feed his family with 100 hours’ worth of server software. (“Tonight, we’re having fried Scala with NoSQL for dessert.”)

A possible improvement would be for recruiters to be compensated based on the “delta”, or the amount by which they improve their clients’ salaries. This would be like the pay-for-performance model by which hedge fund managers are compensated: a small percentage of assets (usually 2%) and a larger percentage of profits (often 20%). In other words, instead of collecting a flat percentage (15%) of first-year salary, the recruiter could be compensated based on the hire’s long-term performance. This might give recruiters an incentive to place people in positions where they are likely to succeed for the long term. Would it encourage recruiters to fill the badly-needed role of talent advocate? I’m not sure. It might just incent recruiters to find high-paying but awful jobs for their clients.
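To make the two fee structures concrete, here is a minimal sketch in Python. The function names, rates, and salary figures are illustrative assumptions of mine, not an existing industry scheme:

```python
def flat_fee(first_year_salary, rate=0.15):
    """Status quo: the recruiter takes a flat cut of first-year salary."""
    return rate * first_year_salary

def delta_fee(old_salary, new_salary, base_rate=0.02, delta_rate=0.20):
    """Hypothetical hedge-fund-style fee: a small cut of the new salary
    plus a larger cut of the improvement the recruiter delivered."""
    improvement = max(0.0, new_salary - old_salary)
    return base_rate * new_salary + delta_rate * improvement

# A $100k engineer placed into a $120k role:
print(round(flat_fee(120_000)))            # 18000
print(round(delta_fee(100_000, 120_000)))  # 6400
```

Note that under these toy numbers the delta model pays the recruiter far less for a marginal placement, which is exactly the point: the fee grows only when the recruiter materially improves the client’s position.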

One of the difficulties associated with the talent-advocate role is that it requires the ability to assess talent. Having a talent is generally a necessary, but not sufficient, condition for being able to detect it in others. What this means is that the best talent advocates are going to be people who, themselves, have those skills and abilities. Since currency of technical skills is highly relevant, it’s best that they keep their skills up-to-date as well. Talent advocates, in other words, need to have the talent they intend to represent in order to understand what people with that kind of creativity (a) are and are not capable of, and (b) need from an employer to be motivated and successful. This requirement that the talent advocate be involved in the work for which he advocates makes a full-time recruiting effort unlikely, but without a full-time effort, it’s unlikely that the talent advocate can acquire the connections (to employers) that are necessary to place people in the best positions. In short, this is a very hard role to fill. I can’t see an easy solution.

For the time being, talent must be its own advocate and its own “manager”. This leaves us with what we already know.

Emerging from the silence

I’ve been on Hacker News a lot in the past few weeks, and I’ve shared some personal details about companies at which I’ve worked and what I’ve seen them do. I don’t want to talk any more about that. Turning over the past is, in many cases, nothing but an excuse for neglecting the future. It can very easily become a trap. The desire to “get a firm handle” on the past, to assess the causes of social harms and unexpected evils, can be very strong, but it’s ultimately useless. Those people mean nothing now, so discard them from your mind.

What I find more interesting is the degree to which people will protect unethical behavior in the workplace, as if it were a common affair for people to damage careers, other teams, and even entire companies for their own personal benefit. As if it were somehow OK. As if it weren’t even worth talking about when people do obnoxious, harmful, and extremely costly things for their own short-term, personal benefit. As if “office politics” were an invariable fact of life. Perhaps that last claim is true, but there are major differences in degree. A debate over whether a program’s interface should represent IDs as Longs (64-bit integers) or Strings can be “politics”, but such debates usually come from well-intentioned disagreements in which both sides are pursuing what they believe to be the project’s (or company’s) benefit. This is orders of magnitude less severe than the backstabbing and outright criminality that people, thinking they’re doing nothing wrong, will engage in when they’re at work.

I try to be a Buddhist. I say “try to be” because I’ve been failing for thousands of years and the process of getting better at it is slow. That’s why I’m still here, dealing with all sorts of weird karmic echoes. I also say “try to be” because I find religious identification to be quite shallow, anyway. In many past lives, I’ve had no religious beliefs at all, expecting death to be the end of me. It wasn’t. I’m here. This is getting divergent, so let me put it in two words: karma’s real. Ethics matter. There is no separation between “work” and “life”. People who would not even think of stealing a steak from the grocery store have no problem with actions that can cause unjust terminations, team-wide or even whole-company underperformance, and general unnecessary strife. How can this be accepted as consistent? It can’t be. It’s not. Wrong is wrong, regardless of how much a person is being paid.

I was brought up with a great work ethic, in all senses of the word. The consequence of this is that I’m an extremely hard working person, and I don’t have much patience for the white-collar social climbing we call “office politics”. It’s useless and evil. I find it disgusting to the point of mysophobic reaction. It has nothing to do with work or the creation of value. Why does it persist? Because the good don’t fight. The good just work. Heads down, they create value, while the bad people get into positions of allocative power in order to capture and distribute it. Most people are fatalistic about this being the “natural order” of the world. Bullshit. Throughout most of human history, murdering one’s brother to improve one’s social or economic status was the “natural order” of the world. Now it often leads to lifetime imprisonment. It may not seem so, but the ethics of our species is improving, although progress feels unbearably slow.

Recently, there’s been a spate of people exposing unethical behaviors at their companies. Greg Smith outed the ethical bankruptcy he observed at Goldman Sachs, while Reginald Braithwaite (Raganwald) issued a fictional resignation letter on the onerous and increasingly common practice of employers requiring Facebook walk-throughs as a condition of employment– a gross and possibly illegal invasion of privacy. Raganwald’s letter wasn’t just about privacy issues, but a commentary on the tendency of executives to wait too long to call attention to unethical behaviors. His protagonist doesn’t leave the company until it hurts him personally and, as Raganwald said of the piece, “this was one of the ideas I was trying to get across, that by shrugging and going along with stuff like this, we’re condoning and supporting it.”

Why is there so much silence and fear? Why are people afraid to call attention to bad actors until after they’ve been burned, can be discredited with the “disgruntled” label, and it’s far too late? Is Goldman going to put a stop to bad practices because a resigning employee wrote an essay about bad behavior? Probably not. Are people going to quit their jobs at the bank when they realize that unethical behavior happens within it? Doubtful. Very few people leave jobs for ethical reasons until the misbehaviors affect them personally, at which point they are prone to ad hominem attacks: why did he wait to leave until it hurt him?

Human resources (HR) is supposed to handle everything from benign conflict to outright crimes, but the reality is that they work for a company’s lawyers, and that they’ll almost always side with management. HR, in most companies, is the slutty cheerleader who could upset the male dominance hierarchy if she wanted to, simply by refusing to date brutish men who harm others, but who would rather side with power. One should not count on them to right ethical wrongs in the workplace. They’re far too cozy with management.

Half the solution is obvious. As work becomes more technological, companies need technical talent. Slimy people (office politicians) end up steering most corporations, but we pedal. We can vote with our feet, if we have the information we need.

There’s been an American Spring over the past couple of months in which a number of people, largely in the financial and technological sectors where demand for technical talent is sufficiently high to make it less dangerous than it would usually be, have come out to expose injustices they’ve observed in the workplace. We need more of that. We need bad actors to be named and shamed. We need these sorts of things to happen before people get fired or quit in a huff. We need a corporate-independent HR process that actually works, instead of serving the interests of power and corporate risk-mitigation.

Here’s a concept to consider. We talk about disruption in technology, and this surely qualifies. This country needs a Work Court– an “unofficial” court in which employees can sue unethical managers and companies that defend or promote them. This Court won’t have the authority to collect judgments. It doesn’t need it. Awards will be paid out of advertising revenue, bad actors’ exposure (assuming they are found guilty) will be considered punishment enough, and it will be up to plaintiffs whether they judge it better for their reputations to be identified with the suit, or kept anonymous. What this provides is a niche for people to gain retribution for the low and mid-grade ethical violations that aren’t worth a full-fledged lawsuit (which can take years, be extremely costly, and ruin the plaintiff’s career). It also will remove some of the incentives that currently reward bad actors and keep them in power.

That, unlike another goofy “semantic coupon social-gaming” startup, would be an innovation I’d be interested in seeing happen.

Non-solicitation clause considered immoral

I hope I am not unusual in that, whenever I join a company, I actually read the Employee Agreement. One term that is becoming increasingly popular is the (generally overbroad) non-solicitation clause, designed to prevent employees from “solicitation” of colleagues after they depart from the company. Its purpose is to prevent “talent raids” whereby well-liked people encourage their colleagues to move along with them to their next company. In doing so, these clauses create a culture in which presenting an employee with a better opportunity is treated as comparable to stealing corporate property.

First, let me state that the fear of “talent raids” is vastly overstated. Recruiting’s hard, yo. No one is “raided” from a company. A person does not get “stolen” from a $140,000-per-year software job and forced to throw 75 hours per week into a risky startup. People move from one job to another when they perceive a better opportunity at the new job. That’s it. As long as the latter opportunity is represented honestly and properly, nothing wrong is happening.

I’m a veteran talent raider. I’ve helped startups hire brilliant people I met in middle school through national math contests. I’m also extremely ethical about it. If I don’t respect someone, I don’t want him or her in an organization that I respect. If I do respect someone, I’m going to do everything I can do to provide all information (positive and negative) about an opportunity. This isn’t a bullshit “I’m not selling” defense. I consider it a success when I sit down with a friend, tell him about an opportunity that I think is great, give him all the information he’ll need to make his decision, and he appreciates the information but rejects it. It means that I’ve done my job well. When I’m talent raiding, I’m usually trying to convince someone to make a risky move, and most people don’t like risk. So a conversion rate of 60 percent (which would be very high) would suggest that I’m doing something seriously wrong. Overpromising to a person whose talents and character I respect is the last thing I want to do. Careers last a lot longer than jobs, and companies can decay so rapidly (management changes) that sacrificing a relationship to improve a job is just a terrible idea.

Unethical solicitation I despise. It’s unethical when the company’s prospects, the role into which the person will be hired, or the type of work the person will be allocated is overstated. This is, of course, not a behavior limited to small companies or intentional talent raids. It’s very common for companies of all kinds and sizes (as well as managers within generally ethical large companies, as I found out recently) to pull these bait-and-switch antics. There’s no legal recourse against companies that do this– and there shouldn’t be, as companies have the right to end projects and change priorities. If contract law doesn’t cover bait-and-switch, then what is the socially useful purpose of non-solicitation clauses? Absolutely none.

Of course, companies don’t have non-solicitation clauses to prevent unethical talent raids, and those clauses aren’t in place to protect employees (ha!). In fact, these companies would rather see their people lured away on false pretenses: they can hire them back. They’re much more afraid of ethical talent raids in which the employee is presented with a genuinely better opportunity. This represents an attitude in which employees are considered to be property, and it constitutes restriction of trade.

Worse yet, non-solicitation contracts discourage startup formation. If the “talent raider” moves to a large company like Google, he can reconstruct his team behind the scenes, but if he moves to a startup, he faces the risk that the non-solicit will actually be enforced. History does not know how many great startups have never formed because people were scared off by non-solicitation clauses.

These need to be ended. Now.

Lottery design: how I’d do it

Here are some thoughts I’ve had on how I would design a lottery if it were my job. I find computer programming and real game design to be more fulfilling than the design of these sorts of mind-hacking gambling games, which is why I would consort with a cactus before I’d ever work for Zynga. That said, someone has to do that job. Here’s how I’d do it if it were mine.

A toy example is the mathematically interesting, but impractical, “no-balls lottery” driven by “strategic luck” (to use a game-design term). It works like this: at each drawing, players choose a number between 1 and N (say N = 60). It costs $1 to play, and the payout for choosing the winning number is $N. Where’s the house edge? The winning number is not chosen at random; it’s the one chosen by the fewest players (with ties either split or resolved randomly). What’s cool about this lottery is that it has the appearance of being “fair” (zero expectancy, no house edge) but it produces risk-free profit for the house no matter what, because the winning number will always be chosen by at most 1/N of the players. The more uneven the distribution of choices is, the better the house does. Game theory (Nash equilibrium) predicts that we’d see a uniform distribution of choices over time; I have no idea how it would actually play out. The house edge would likely not be enough to cover administrative costs, but it’d be an interesting social experiment.
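The house’s guaranteed profit is easy to check by simulation. A minimal sketch, under the assumption that players pick uniformly at random (real players would presumably cluster on “lucky” numbers, which only helps the house):

```python
import random
from collections import Counter

def no_balls_round(num_players=10_000, n=60, rng=None):
    """One drawing of the 'no-balls lottery': each player pays $1 and
    picks a number in 1..n. The winning number is the one chosen by the
    fewest players; each winner is paid $n. Returns the house's profit."""
    rng = rng or random.Random()
    counts = Counter({k: 0 for k in range(1, n + 1)})  # include unpicked numbers
    counts.update(rng.randint(1, n) for _ in range(num_players))
    winners = min(counts.values())     # how many picked the least-picked number
    return num_players - winners * n   # revenue minus payout

rng = random.Random(0)
profits = [no_balls_round(rng=rng) for _ in range(100)]
print(min(profits) >= 0)  # True: the least-picked number gets at most 1/n of picks
```

The profit is non-negative by construction: the minimum count over n numbers can’t exceed the average, so the payout never exceeds revenue. Splitting ties among co-minimal numbers (rather than paying one of them) would only shrink the payout further.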

At any rate, the press loves lotteries. Not the small, reliable, boring kind, but the big ones. The $640 million jackpot for the MegaMillions has been a major news item of late– it’s the largest U.S. jackpot lottery in history. I even chose to play, mainly for epistemological reasons related to recent and extremely unusual events in my life that led me to suspect supernatural trolling. Let me explain. There’s somewhere above a 99.999% chance (call this “prior” p) that the universe acts the way we think it does, and that there’s no correlation between balls drawn at a lottery and personal events in my life. It’s actually very likely that the correct value of p is much higher than 99.999%. I can’t put a true probability on it for the same reason I can’t put one on religious questions: probability enables us to reason about uncertainty of known structure, and this is about uncertainty of unknown structure. That said, there’s a 1 – p chance that the universe is deeply weird, that it has the tendency to troll the fuck out of people, and that playing the lottery right now (only right now, because this is a singular moment) might lead to profit. I don’t know what this “1 – p” is, but I’m willing to pretend, for the moment, that it’s high enough to buy a few lottery tickets.

(Technically speaking, the MegaMillions already has positive expectancy. This is practically irrelevant, as it is for the notorious St. Petersburg lottery. Almost all of that positive expectancy is concentrated in the extremely-low-probability jackpot and, between taxes, split jackpots, discount rates applied because lottery payouts occur over time and, far more importantly, the extreme concavity of the utility curve for money, I don’t know that it has a meaningful expectancy. I’m buying because, despite my scientific training, weird events cause lapses into superstition.)

A lot of people think the way I do, and the vast majority of them never win lotteries. People are very bad at managing low-probability events. Cognitive biases dominate. What actually ruined my interest in the lottery, as a child– my dad played about once a year, and let me pick numbers– was realizing that a Super-7 outcome of {1, 2, 3, 4, 5, 6, 7}, which “would obviously never happen”, was precisely as likely as any set of numbers I might pick. (Actually, {1, 2, 3, 4, 5, 6, 7} is a bad play because of split jackpots. Dates are common fodder, so pick numbers 32 and higher if you want to minimize that risk.) Moreover, this type of magical thinking tends to surround the largest lottery jackpots, which appear “historic”. Of course, I know how silly it is to think this way, because large jackpots are nothing more than an artifact of Poisson processes and very long odds: it would be possible to build up a $5 billion jackpot (assuming people would play) just by designing the lottery so that the odds of winning are very small.

So if I were designing a lottery, how would I do it? Let me say that I’m ignoring the ethical question of whether I think gambling and the lottery are good things. I’m assuming the position of a person who thinks lottery gambling provides a social value. (My actual position is more uneasy.) I would not be happy to design some small scratch-off game. I’d want to build the lottery responsible for the first billion-dollar jackpot.

First, 7 balls is too many; the ideal number seems to be 5 or 6 (short-term memory). Two-digit ball ranges are also desirable, with 50 to 60 being typical. Numbers in the 50s fit on account of the “inverse Benford effect”, whereby numbers with leading digits of 5 and 6 seem “moderate”. (Falsified financial figures tend to lead with ‘5’ and ‘6’ digits, although log-normally distributed real-world variables should lead with ‘1’ over 30 percent of the time.) A typical 6-ball lottery, with 50 balls, gives odds of 15.9 million to 1. That’s clearly not enough. It might produce a piddling $30 or $40 million jackpot on occasion. Congratulations: you’ve earned ten months’ salary for an upper-echelon corporate scumbag (and by waiting in line for 3 minutes for that ticket, you’ve had to do more work than that well-connected blue-blooded shit has done in his whole life). Since it’s long odds that produce large jackpots, how do we push those odds into the billions?

The idea’s already there. Consider the 6-ball lottery that I described. The odds can be made 720 times longer by distinguishing or ordering the 6 balls. This is how the Powerball and MegaMillions work. One ball is distinguished as “special”. This makes the lottery more “fun” and engaging, and it makes the odds of choosing a perfect ticket longer. My target, in designing this lottery, is odds in the 1- to 2-billion-to-1 range.
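The counting behind these figures is easy to verify; a quick sketch using Python’s `math.comb`:

```python
import math

plain = math.comb(50, 6)              # unordered 6-from-50: possible tickets
ordered = plain * math.factorial(6)   # fully ordering the 6 balls: 720x longer
print(f"{plain:,}")    # 15,890,700 -- the "15.9 million to 1" figure
print(f"{ordered:,}")  # 11,441,304,000
```

Note that fully ordering the balls overshoots the 1- to 2-billion target by a wide margin, which is one reason to distinguish only some of the balls rather than all of them.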

First, I think we’re ready for two distinguished balls, one red and one green. In fact, we’ll need that to get the kinds of long odds we want. The range for each is going to be 1 to 31. Why 31? Because, from a design perspective, it fits. One of the most common sources of lottery numbers is dates, so why are we cluttering up the card with these higher, less useful, ungainly numbers? For the other four balls, however, we need a wider range: 1 to 80. Yes, 80 puts us afoul of the “inverse Benford effect”, but we’re selling a premium product, so 80 is appropriate. How many possible tickets are there? 80!/(76! × 4!) × 31 × 31 = 1,519,898,380. With 80% of ticket revenues going into the non-discounted dollar amount of the jackpot (which means we’re actually only putting about 50% in, because we’re paying an annuity over about 25 years) we’ll be seeing billion-dollar jackpots on a regular basis. Not only that, but we’ll be seeing $1-billion non-split pots on a regular basis. For the first time ever, we’ll be minting billionaires from a lottery.
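The ticket count works out as claimed; a short check in Python:

```python
import math

white = math.comb(80, 4)   # 4 unordered white balls from 1..80: 1,581,580
tickets = white * 31 * 31  # times a red and a green ball, each from 1..31
print(f"{tickets:,}")      # 1,519,898,380 -- squarely in the target odds range
```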

Of course, it’s the jackpots that bring ticket-buyers in, but it’s the small prizes that keep them coming back. The prize is a free ticket if you hit either the red or green ball (16:1), and $100 if you hit them both (961:1). (A $100 payout on roughly 1 in 1,000 tickets is unheard-of for a jackpot lottery.) We’re paying about 16 cents on each ticket there, but I think it’s worth it to keep players engaged. We also want to make the second-to-top small prize large: $1 million for a ticket that matches the 4 white balls and one of the colored balls. The odds of that are 25 million to one, so we’re paying 4 cents per ticket there. We can shave expectancy on the middling prizes, whose payouts will be low relative to the odds against them. No one really looks at those, anyway. It’s the frequently-won small prizes, the second-best prize, and the jackpot that actually matter.
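Those per-ticket costs can be sanity-checked. This sketch makes two assumptions the post doesn’t state: a $1 ticket price, and that “hitting either colored ball” means matching exactly one of the two (so the prize tiers don’t overlap):

```python
from math import comb

TICKET_PRICE = 1.00  # assumption: $1 per ticket (not stated above)

p_one_color   = 2 * (1/31) * (30/31)       # exactly one colored ball: ~16:1
p_both_colors = (1/31) ** 2                # both colored balls: 961:1
p_second      = p_one_color / comb(80, 4)  # 4 whites + 1 color: ~25.3M:1

cost = (TICKET_PRICE * p_one_color     # free replacement ticket
        + 100 * p_both_colors          # $100 prize
        + 1_000_000 * p_second)        # $1 million second-best prize
print(f"{cost * 100:.1f} cents per ticket")  # about 20.6 cents in total
```

That’s roughly the 16 cents for the colored-ball prizes plus the 4 cents for the million-dollar prize claimed above.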

Here’s why I think we should do this. Here’s the real ideology behind what I’m suggesting. I don’t care much either way about lotteries. Nor do I have a need to make that kind of money off people who, in general, need it more than I do. I do think Instant Games are a bit unethical (pathological gambling, and the fact that winnings almost always go into buying more scratch-off tickets, often on the same day) but also I think that, compared to alcohol, tobacco, and trans fats, lottery tickets are one of the less harmful things sold in most convenience stores. This said, the lottery I described is a starter in giga-lotteries: 1.5-billion-to-1 odds, three-digit millionaires and billionaires being made out of random people on a regular basis. Sure, the odds are very long, but most lottery players don’t give a damn about the odds: they’ll play as soon as they see $500-million jackpots, for the novelty. I do it, just to see the huge numbers. It’s gossip. But to paraphrase Justin Timberlake, a billion dollars isn’t cool (or won’t be, after it becomes commonplace). You know what’s cool? A trillion dollars. Or, at least, $85 billion or so. U.S. lottery revenues are about half that, but I think we can do more. Way more, once we establish a lottery where billion-dollar jackpots are the norm.

I don’t care here, as I said, about revenues. My goal isn’t to make money off of peoples’ cognitive biases with regard to low probabilities. It’s not to get rich. (Since I can’t legally implement this idea– only governments can– I never would.) Making money isn’t the goal. We should shove as much of our ticket revenue into the jackpot as possible. Rather, the goal is to make huge fucking jackpots. Two distinguished balls (one red, one green) are just the start. For the real act, we can have six different colors (white, red, gold, blue, green, and silver). This colorful lottery will be so engaging and so well-hyped (once $10-billion jackpots are old hat) that we can charge $5 per ticket. The ranges on each ball will be 1 to 84. We’ll need a lot of high-profile small prizes, and a two- or even three-tier jackpot system will be in order, as the odds of a perfect ticket will be 351 billion to 1. Top-tier jackpots will swell and swell and swell, building for months and growing exponentially. We might have to make this a world lottery to get a winner a couple times per year or so. At some point, though, someone out there will win a huge amount of money. It will take a long time to get there, but a trillion-dollar jackpot will, at some point, happen.
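The 351-billion-to-1 figure follows directly from six color-distinguished (hence independent) balls, each ranging from 1 to 84:

```python
# Six color-distinguished balls, each drawn independently from 1..84:
tickets = 84 ** 6
print(f"{tickets:,}")  # 351,298,031,616 -- about 351 billion to 1
```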

What’s the redeeming social value of a trillion-dollar jackpot? Complete and total humiliation of the world upper class. Mockery of the world’s most destructive dick-measuring contest. A person, chosen completely-the-fuck at random, being catapulted to the top of the Forbes 400 for no fucking reason whatsoever. So fucking awesome. I would buy $1000 of these fucking lottery tickets every month and hand them out to the most undeserving randoms, solely in the quixotic, long-shot pursuit of the noble goal of humiliating every single private-equity asshole on Park Avenue at once by having a lottery player out-win all of them by orders of magnitude.

Is this evil? I’m not sure. I don’t actually want to see this experiment happen. I fully support humiliation of the existing upper class, but this sort of extreme lottery would just create a new upper class. The only difference is that this would be an elite made for no reason, as opposed to our current elite, which exists for mostly bad reasons. Moreover, I just don’t think it’s the best use of my time and talent to encourage mathematically naive people to pump trillions of dollars into a process of no social value.

The objection I have to it, again, isn’t gambling: we gamble all the time. Every blog post I write has an effect on my career– mostly positive, in that I can establish myself as knowledgeable about technology, progressive management, mathematics, software engineering, and computer science– but potentially negative, as well. This post, in which I describe the application of game design talent to an extremely perverted social project, is probably riskier to me than buying a few hundred lottery tickets. And this blog post, unlike the MegaMillions, has no chance of ever earning me $640 million.

There is another problem with this trillion-dollar lottery (TeraTrillions, and I am fucking trademarking that name). I am afraid that someone would fucking Occupy that shit.

Radical Transparency #1: who gets fired, how, and why.

Personal note: I wrote the bulk of this material in February 2011. I recently left a job (from which I was not fired) and admit the timing is less than perfect. There’s no connection between this essay and my personal situation, although its insights are derived from dozens of observations over the past 6 years.

I’m writing this post for a political purpose, not a personal one. Namely, I wish to provoke an epic showdown between the cognitive “1 percent” and the socioeconomic “1 percent”– a pleasant euphemism in the latter case, since it’s really 0.1% of Americans who are in a position to make major decisions. I wish to force this conflict as fast as I can, because I believe technological trends are in our favor and we will win. It is not enough to defeat the socioeconomic “(0.)1 percent” as a class or as individuals. It is far more important that we dismantle the ugly world that they have created. If we destroy them personally but not the execrable processes they’ve set in motion, others will step up to replace each one we remove from power. We need to change the processes, which involves changing the discussion. Radical transparency is the first step toward doing this.

I’m going to shed some light on the processes by which people are separated from companies. Involuntary termination. Layoffs. Getting fired. Shit-canned. Pink-slipped. There’s obviously a lot of nastiness surrounding this function, despite its absolute necessity to the health of an organization. I don’t intend to say that firing people is mean, or wrong, or unethical. Far from it, it’s something that all organizations will need to do from time to time. Some people are frankly toxic or unethical, and others are just not able to perform well within the organization, and must be let go.

In fact, I’d argue that the results of rarely firing incompetent people can be seen in U.S. politics, where we face one of the most disastrously ineffective legislatures in human history because voters are so bad at firing idiots. Political officials are subjected to periodic elections (360-degree performance reviews) which are supposed to be intensely competitive. Yet the incumbent-victory rate is astonishingly high: over 95 percent in American political elections. The job-loss rate of political officials is less than 2 percent per year, despite their poor performance. In finance and technology, even people who are extremely good at their jobs face a higher risk of involuntary termination than that. I consider that contrast illustrative. For reasons like this, I will never say it is wrong to fire people, although it’s important to be decent about it. More on that later.

Two years ago, one of my friends was served with a Performance Improvement Plan (PIP) issued to him by a large company. If there is a TPS Report Museum, there must be an entire wing dedicated to PIPs. I’ll say one thing about these: they should never be used, and they don’t work. The first way in which PIPs fail is that they don’t improve performance. A manager who genuinely wishes to improve an employee’s performance will address the matter, one-on-one, with the employee in a verbal meeting (or series of meetings) where the intention is discovering the cause (“blocker”) of low performance, and decide either (a) to resolve this problem, if it can be done, (b) to accept transient low performance if the cause is a temporary one such as a health problem, or (c) to determine that the problem is irresolvable and terminate the employee (preferably in a decent way). On the other hand, written negative communication about performance (formalized most finally in a PIP) is universally interpreted as an aggressive move and will lead to distrust of the manager, if not outright contempt toward him. As soon as a manager is “documenting” negative performance, the relationship between him and his report has failed and he should progress immediately to a decent termination. Never PIP. Fire, but do it right.

What’s a decent termination? It’s one in which the employee is allowed to resign, is provided with a positive reference (both in writing, to guarantee it, and verbally when needed) and receives a severance package equal to between 1 and 1.5 times the average duration of a typical search for jobs of that kind (2 to 3 months for junior-level positions, 6 to 9 months for executives, with a further multiplier of 1.5 to 2 for those who are “overexperienced”) to compensate the employee for the unexpected job search. Of course, companies have no legal obligation to do any of this, but it’s the right way of doing things. In fact, because financial pressures can result in a suboptimal job search, I’d argue that severance (when the issue is fit or performance rather than unethical behavior) is an ethical obligation.

Speaking of this, a lot of extremely unethical things happen in American workplaces, and that is the result not of “bad people” but of morally average people who are scared. One of the things people fear most at work is a sudden, unjust, badly-structured termination that leads to a long-term career problem. This fear motivates much of the bad activity in workplaces that leads to unsafe products and defrauded customers. The best thing a company can do for its culture, and for its macroscopically visible good citizenship, is to establish a policy of managing terminations in a proper way– to say that, yes, we’ll fire people whose poor performance constitutes an unacceptable cost to us, but we’ll always do so in a way that ensures that good-faith low performers (i.e. decent people who are a bad fit for their role) move on to more appropriate jobs.

How does a PIP actually affect performance? First, it destroys the relationship between the manager and the employee, who now feels “sold out”. If claims in the PIP suggest that others contributed to it, it may destroy the working relationship between the employee and his colleagues, causing isolation. PIPs usually carry a biased or even inaccurate summary of the employee’s work as the motivation for the Plan. Second, PIPs often generate a lot of additional work for the employee, making it harder to perform. A PIP usually contains deadlines equivalent to a 40-hour per week work schedule. This seems reasonable, except for the fact that many work demands are unplanned. An employee who faces responsibility for an emergent production crisis during a PIP will be forced to choose between the PIP work and the emergent crisis. A PIP’d employee actually ends up with four conflicting jobs. The first is the work outlined in the PIP, which is already a 40-hour obligation. The second is any emergent work that occurs during the PIP period (which is usually unspecified in the PIP, but claimed to be covered by a vague “catch-all” clause). The third is the legalistic fighting of the PIP– the employee must contest false claims about his performance or the company will represent him as having agreed with them, which damages his position if he ends up in severance negotiation. The fourth is the job search process, which must be commenced right away because the PIP usually ends in termination without severance.

This brings us to the real purpose of PIPs, which is to legally justify cold-firing employees. The PIP is the severance package. Defenders of PIPs claim it’s cheaper (it’s not, because of contagious morale effects) to keep a burned-out, miserable employee in the office for a month or two than to fire him and pay a 3-month severance package. Employees who are terminated under a PIP are usually granted no severance, because their termination is “for cause”. There’s a lot of intimidation that usually goes on here. Because the PIP is confidential, employees are usually told they will be breaking the law if they show the PIP to anyone, even an attorney. (Not true.) They’re also told that if they don’t sign the PIP, they can be terminated “for cause” (read: without severance) and that it will damage their ability to get a reference– even though signing damages their ability to contest the PIP later. This one is true (employment is at will, and severance is not a legal obligation) but a company that gives a bad reference, especially for retaliatory reasons, will not fare well in court.

I was once privy to managerial statistics at a large company. Over a 12-month window during a time of average job fluidity, 82 of 100 PIP’d employees left their jobs within the duration (30-60 days) of the PIP. These might be judged “success” cases (they left) but if that was the desired outcome, it could have been achieved with an in-person discussion, without involving nasty paperwork. In another 10 cases, the employee failed the PIP and was fired. In 6 more cases, the PIP was ruled “inconclusive” because of factual inaccuracies, or because the PIP’d employee was able to garner support from elsewhere in the organization and transfer to a more sympathetic manager, who killed the PIP. Only 2 actually “passed” the PIP. Also, a PIP’d employee is likely to face another PIP within six months if he stays with the same manager, while most managers don’t want to take a chance on someone who faced a PIP (even successfully) because, statistically speaking, an unblemished new hire is a better bet.

My advice to PIP’d employees is as follows. As I said, you’ve got 4 jobs now: (a) the PIP work, (b) emergent work, (c) legalistic fighting, and (d) finding another job in the event of failing the PIP. Regarding the first, don’t blow off the PIP work entirely, or at least not openly. It may seem pointless to do this work (and it is, because your future at that company has ended) but one needs to show a good-faith effort in any case. PIPs are not guarantees of employment for the specified duration; they can be “closed” early if it is judged obvious that the employee will not meet the PIP’s deadlines. However, in the rare case that the PIP is actually fairly structured and you can complete it in time, do not use the “surplus” to get ahead on the PIP work. Instead, use this time to learn new technologies (for the job search) and contribute to open-source projects (only on your own resources). Even if you pass the PIP, your days are numbered. Regarding the second, request in writing that the manager place any emergent responsibilities on the PIP calendar. Now is the time to document everything and be a pain in the ass. If he ignores these requests, you can make a case to HR to rule the PIP inconclusive, buying you time until the manager’s next opportunity to PIP you (usually a quarter or two later) comes along. You should have a new job before that happens. On the third of your four jobs– the legalistic wrangling– document everything, but keep communication terse, objective, and professional. Don’t spend more than an hour per day on it, but if you can waste your manager’s time with HR paperwork in the hope of getting the PIP ruled inconclusive, do it. The fourth of these– the job search– is where your highest priority should be. Represent interviews (in email; again, document everything) as doctors’ appointments. This is also a good opportunity to disclose health issues, which reduces the probability of being fired without severance.

My advice for companies or managers considering the use of PIPs. Don’t. Sure, you’ve got version control and backups, but do you really want an essentially fired employee in the office for a month? It’s amazing to me that companies have security escort employees out of the building when they are officially terminated (at which point the shock has passed and emotions are likely to be resolved) but keep them in the office for months on PIPs, when they are effectively fired. PIP’d walking dead are far more damaging to morale than terminations.

Finally, my advice for employees who actually get fired is simple. Don’t be a dick. Disparaging firm-wide emails will ruin your reputation, and property damage is illegal and fucked-up and wrong, and therefore not in your interest at all. Depart cordially. On references, there are two kinds: unofficial (which can be a peer you select) and official (manager or HR). Usually official references say very little, but have your reference from that company checked by a professional service and get an attorney involved if anything negative is being said about you. Cultivate positive relationships with colleagues after your exit. Most importantly, move on quickly. When asked about the previous job, stay professional and represent the termination as one that you initiated. If your ex-employer ever contradicts you and says you were fired, then congratulations: you get a huge settlement for a negative reference.

Also, don’t let it hurt your self-esteem. There are two rites of passage in the modern corporate world. The first is getting fired. The second is learning that getting fired is not usually a reflection on the employee (more on this later, on who gets fired). Your self-esteem will benefit if you experience the second before the first.

Ok, so that’s a bit about the process. It’s important for people to know this stuff, so they know how to box if the fight comes to them. Now, I’d like to shed some light on who gets fired. Low performers, right? Well, not always. Companies initiating impersonal layoffs for financial reasons will usually drop low-performing teams and businesses, but not all those people are individually low performers. Personal firings are a different story. These don’t occur because “companies” choose to fire these people; people do. So what are some of the motivations that lead to this? First, let me say that political (non-necessary) firings usually occur in the middle-bottom and middle-top of an organization’s performance spectrum. The bottom 10% are usually so used to lifelong, serial incompetence that they’ve mastered office politics as a survival skill. The top 10% are objective, visible high-performers who tend to have political clout on account of the value they add to the organization. The middle 60 percent rarely stick out enough to get into trouble. This leaves two deciles– the 2nd and the 9th– that are most exposed to personal terminations.

Second-decile firings tend to be “trophy firings”, initiated by managers who wish to impress upon higher-ups that they “take action” on low performers. They aren’t the worst people in the organization, and whether it’s of economic benefit to retain them is not strongly resolved either way, but they’re weak enough that if they leave the organization, they won’t be missed.

Ninth-decile firings are “shot-from-the-bridge” firings, initiated against a high performer by a jealous peer before she develops the credibility and capability to become a threat (i.e. graduate into the objectively and visibly high-performing 10th decile). These people are usually fired because they’re creative, passionate, and opinionated. Now, managers don’t initiate ninth-decile firings, at least not willingly. Managers want more people to be like their 9th- and 10th-decile reports. Rather, those firings tend to be initiated by “young wolves”, who are typically same-rank co-workers, but sometimes insecure middle managers, who sabotage the target’s ability to perform. The textbook young wolf is Pete Campbell in the early seasons of Mad Men, the slimy and Machiavellian junior account executive who lies and threatens his way to the top. (Disclaimer: I wish the term for such people wasn’t young wolves. I like wolves. I like them better than most people.)

A direct confrontation with a manager about a 9th-decile report is not a politically wise idea, because managers know that 9th-decile employees are highly valuable. So young wolves focus on giving the target some disadvantage in project allocation. Most technical teams have a “people manager” (who is savvy to political machinations) and a technical lead (who is not usually politically savvy, but has authority to allocate projects, make technical decisions that affect the team, and delegate work). Tech leadership is a hard role because (a) it involves a lot of responsibility with minimal glory, and often no additional compensation, and (b) tech leads are responsible as individual contributors in addition to their leadership role. Thus, the tech lead is usually overburdened, and young wolves usually cozy up to him, with the intention of “helping out” on work allocation. Then they start delegating impossible, discouraging, or unsupported work to the target, the 9th-decile employee they want to shoot from the bridge before she becomes 10th-decile. When the target’s performance drops (transiently) to about the 7th or 8th decile, they begin to speak about how she “does good work, but isn’t as good as we thought she’d be”. As she sees leadership opportunities and the most interesting projects receding from view, her motivation declines further. By the time she’s at the 5th or 6th decile (because she’s now beginning to look for other opportunities, and prioritizing work that benefits her external career) the party line is that she’s “not giving her best”, which is, by this point, entirely true and equally irrelevant to whether it’s economically beneficial to her employer to retain her. Social rejection of her is encouraged through gossip about her being “not a team player” and whispered suspicions that she’s looking for employment elsewhere.
When she enters the 3rd or 4th decile, the young wolves engage her manager and– if the manager is unaware of what’s happened (a young-wolf situation)– she receives the first official warning signs, such as negative feedback delivered over email instead of in spoken form. This is an aggressive move that confirms this now-declining employee’s suspicions that her future there is over. If she’s still working there (unlikely, in a decent economy) by the time she falls into the 2nd decile, she’s now a trophy-fire.

This process of targeted, intentional demotivation, prosecuted not by managers (whose interests are aligned with the success of the whole team) but by young wolves, can occur over years or over days, but it’s a discernible pattern that I’ve seen play out dozens of times. The problem is that “flat” managerial arrangements don’t work. I don’t like the alternative (official hierarchy) much, so I wish flat arrangements did work, but they almost always fail. A “flat” organization means “we’re large enough that we need hierarchy but we’re too socially inept to have hard conversations”. In most cases, I actually think that large companies (for post-industrial knowledge work) are inherently broken, because neither the hierarchical model nor the flat model results in good products. Why is “flat” so bad? It means that a manager has a ridiculous number of reports (sometimes over 50) and that it’s impossible for him to be on the lookout for young-wolf behavior, which is immensely damaging to the company because it tends to drive out the most creative individuals. Young wolves thrive in environments of managerial inattention.

How does one combat a young wolf within an organization? As with any progressive, but curable, parasitic infection, early detection is the key. A decent manager who identifies a young wolf will send it to the pound and have it put down, but effective young wolves evade managerial detection for a long time. Defining the term helps, but there’s a danger here. If “young wolf” enters the business lexicon, it will just become a term of disparagement that young wolves use for their targets. The most effective young wolves, instead of declaring their targets “not a team player”, will instead attempt to paint their targets as young wolves. So here, I don’t have the answer, but I have a thought.

I’ve been coding for five years, and I’ve made a serious mistake. At this point, I have no open-source contributions. I put two and a half years into a failed startup that should have allowed me to open-source my work (30,000 lines of Clojure code to build a custom graph-based database, and some of the best technical work of my life) but the CEO of that failed startup is a vindictive asshole who wishes to punish me for “leaving him”, so that work is gone. For this entire five years, I was a “company man”, pumping 50 to 60 hours per week of software effort into companies with which I no longer have a relationship. I have a “great resume”, but nothing in the open-source world to show that I can actually code. I’ll be just fine, and I’m going to take some time to write some open-source work, but that’s a serious regret I have over how I’ve spent the last half-decade. Why’s open-source software so important? Radical transparency. It provides a socially beneficial mechanism for programmers to demonstrate, objectively, high levels of skill. It provides sufficient job security to the best programmers that they can fearlessly avoid most office politics and focus on work, which generally makes the quality of that work a lot higher.

I think radical transparency should apply within large workplaces as well. Many middle managers discourage line (i.e. non-managerial) employees from taking initiatives that would make them visible to other departments, for the fear that it would make them a “flight risk”. Managers of undesirable projects are aware that external visibility for their reports will lead to transfers, and managers of average projects are often insecure that their projects might be undesirable. Moreover, many companies discourage unmetered work, such as education, career development and extra-hierarchical collaboration (i.e. helping out other teams) which further isolates line employees. The result of this is that, for many employees, the manager is literally the only person who knows what the employee is doing. This gives the manager free rein to represent his opinion of that employee (which, if the manager has 25 reports, is very probably unreliable) as objective fact. Human Resources (HR) is supposed to ameliorate conflicts that occur when this leverage is abused, but the truth about HR representatives involved in manager-employee disputes (and terminations) is that they work for the company’s lawyers (and that’s why they think PIPs are a good idea) and no one else. At any rate, this pattern of dysfunction and outsized managerial power is sometimes called “manager-as-SPOF”, where SPOF is a technical abbreviation for “single point of failure” (the weak point of a system).

Manager-as-SPOF, isolation, and a lack of transparency about who is doing what allow socially manipulative twerps (young wolves) to succeed. In this kind of environment, 9th- and 8th- and 7th-decile performers get almost no visibility and are defenseless against a political campaign against them. The solution is to make the contributions of these employees visible. It’s to make everyone’s contributions visible and to foster an environment that makes many of the worst manifestations of office politics untenable. It’s to encourage employees to share their contributions with the whole company, and to work on things that benefit all of their colleagues– not just their managers.

A future RT post will focus on this: how to construct a radically transparent workplace, and what it would look like.

Why you can’t hire good Java developers.

Before I begin, the title of my essay deserves explanation. I am not saying, “There are no good Java developers.” That would be inflammatory and false. Nor am I saying it’s impossible to hire one, or even three, strong Java developers for an especially compelling project. What I will say is this: in the long or even medium term, if the house language is Java, it becomes nearly impossible to establish a hiring process that reliably pulls in strong developers with the very-low false-positive rates (ideally, below 5%) that technology companies require.

What I won’t discuss (at least, not at length) are the difficulties in attracting good developers for Java positions, although those are significant, because most skilled software developers have been exposed to a number of programming languages and Java rarely emerges as the favorite. That’s an issue for another post. In spite of that problem, Google and the leading investment banks have the resources necessary to bring top talent to work with uninspiring tools, and anyone willing to compete with them on compensation will find this difficulty surmountable. Nor will I discuss why top developers find Java uninspiring and tedious; that also deserves its own post (or five). So I’ll assume, for simplicity, that attracting top developers is not a problem for the reader, and focus on the difficulties Java creates in selecting them.

In building a technology team, false positives (in hiring) are considered almost intolerable. If 1 unit represents the contribution of a median engineer, the productivity of the best engineers is 5 to 20 units, and that of the worst can be -10 to -50 (in part, because the frankly incompetent absorb the time and morale of the best developers). In computer programming, making a bad hire (and I mean a merely incompetent one, not even a malicious or unethical person) isn’t a minor mistake as it is in most fields. Rather, a bad hire can derail a project and, for small businesses, sink a company. For this reason, technical interviews at leading companies tend to be very intensive. A typical technology company will use a phone screen as a filter (a surprising percentage of people with impressive CVs can’t think mathematically or solve problems in code, and phone screens shut them out) followed by a code sample, and, after this, an all-day in-office interview involving design questions, assessment of “fit” and personality, and quick problem-solving questions. “White board” coding questions may be used, but those are generally less intensive (due to time constraints) than even the smallest “real world” coding tasks. Those tend to fall closer to the general-intelligence/”on-your-feet” problem-solving questions than to coding challenges.

For this reason, a code sample is essential in a software company’s hiring process. It can come from an open-source effort, a personal “side project”, or even a (contrived) engineering challenge. It will generally be between 100 and 500 lines of code (much more than 500 can’t be read in one sitting by most people). The code’s greater purpose is irrelevant– but the scope of the sample must be sufficient to determine whether the person writes quality code “in the large” as well as for small projects. Does the person have architectural sense, or use brute-force inelegant solutions that will be impossible for others to maintain? Without the code sample, a non-negligible false-positive rate (somewhere around 5 to 10%, in my experience) is inevitable.

This is where Java fails: the code sample. With 200 lines of Python or Scala code, it's generally quite easy to tell how skilled a developer is and to get a general sense of his architectural ability, because 200 lines of code in these languages can express substantial functionality. With Java, that's not the case: a 200-line code sample (barely enough to solve a "toy" problem) provides absolutely no information about whether a job candidate will solve problems in an infrastructurally sound way, or will instead create the next generation's legacy horrors. The reasons for this are as follows. First, Java is tediously verbose, which means that 200 lines of Java contain as much information as 20 to 50 lines of a more expressive language. There just isn't much there there. Second, in Java, bad and good code look pretty much the same: one actually has to read an implementation of the Visitor pattern in detail to know whether it was used correctly and soundly. Third, Java's "everything is a class" ideology means that people write not programs but classes, and that even mid-sized Java programs are, in fact, domain-specific languages (DSLs), usually promiscuously strewn about the file system because Java requires file and directory names to mirror class and package names. Most Java developers solve larger problems by creating utterly terrible DSLs, but this breakdown behavior simply doesn't show up at the scale of a typical code sample (at most, 500 lines of code).
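To make the density gap concrete, here is a rough sketch (an illustration, not a rigorous benchmark of the ratio claimed above): a complete word-frequency count in Python, a task whose classic Java rendition needs a class, a `main` method, and explicit `Map<String, Integer>` bookkeeping before it reveals anything about the author's judgment.

```python
from collections import Counter

def word_counts(text):
    # The entire task: normalize case, split on whitespace, tally.
    # The pre-Java-8 equivalent spends dozens of lines on type
    # declarations and map plumbing to say the same thing.
    return Counter(text.lower().split())
```

The point is not that shorter is always better, but that in an expressive language, 200 lines leave room for enough real functionality that a reviewer can judge design, not just syntax.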

The result of all this is that it's economically infeasible to separate good from bad Java developers based on their code. White-board problems? Code samples? Not enough signal, if the language is Java. CVs? Even less signal there. Consequently, any Java shop has to filter on something other than coding ability (usually, the learned skill of passing interviews). In finance, that filter is general intelligence as measured by "brainteaser" interviews. The problem is that general intelligence, although important, does not guarantee that someone can write decent software. That approach works for financial employers, because they have uses (trading and "quant" research) for high-IQ people who can't code, but not for typical technology companies, which rely on uniformly high quality in the software they create.

Java’s verbosity makes the most critical aspect of software hiring– reading the candidates’ code not only for correctness (which can be checked automatically) but architectural quality– impossible unless one is willing to dedicate immense and precious resources (the time of the best engineers) to the problem, and to request very large (1000+ lines of code) code samples. So for Java positions, this just isn’t done– it can’t be done. This is to the advantage of incompetent Java developers, who with practice at “white-boarding” can sneak into elite software companies, but to the deep disadvantage of companies that use the language.

Of course, strong Java engineers exist, and it's possible to hire a few. One might even get lucky and hire seven or eight great Java engineers before bringing on the first dud. Stranger things have happened. But establishing a robust and reliable hiring process requires that candidates' code be read for quality before a decision is made. In a verbose language like Java, that's not economical (few companies can afford to dedicate 25-plus percent of engineering time to reading job candidates' code samples), and so it rarely happens. In the long term, this makes an uncomfortably high false-positive rate inevitable when hiring for Java positions.