We’re almost finished. There’s a lot that I might add to this series in the future, and I’m seriously considering the book idea, but this series has taken a lot of time and work, and there are some technical projects I want to start in the spring. Here’s the roadmap I see for the next three posts:
- Part 20: This one. Flesh out what simple trust is, why it isn’t so simple, and how to defend it.
- Part 21: Review the economics of the problem (convexity, technological-era needs) so we can define what “Solving It” looks like.
- Part 22: The final Solve It post, where I discuss what I think is the missing piece in all of this.
To recap where we’ve been: I’ve spent a lot of time on trust. Here’s what I’ve covered on the topic thus far:
- Part 17: Financial trust. Do investors and employees feel that the compensation structure is fair to everyone? I advocated a structure that encourages transparency, that does not disallow but limits “HR expedient” overcompensation, and avoids a lot of the problems with traditional equity compensation (e.g. liquidation preferences, misalignment of incentives).
- Part 18: Industrial trust. Are people trusted in their use of time? Since the productive value of time can scale either progressively or haphazardly with planning, the idea is to create a culture of intelligent (self-executive) time management and self-improvement. That delivers more yield (and is more scalable) than traditional micromanagement.
- Part 19: Why dishonesty is so common in organizations. As trust sparsity sets in, people need to make unrealistic or highly contingent promises in order to get anything accomplished, but this “convex” deception is simply taken to prove the need for further distrust of employees. Also, it creeps; what starts as “stone soup” (mutually beneficial convex deception) evolves into fraud.
These issues are critically important. The MacLeod degeneracy is an artifact of trust sparsity and dishonest risk transfers. If we want to avoid it, we have to learn how to build trust-dense organizations. That’s an incredibly hard thing to do.
There seem to be three ways humans have been able to get large projects done; call them “sources of power”. The first is coercion: someone with a weapon or military backing decides that work will be done and forces people to do it. That’s mostly illegal, and good riddance to it. The second is divination. You don’t have to believe in supernatural beings for it to work; random chance is just fine. Aaron Brown, in The Poker Face of Wall Street, makes a good case for gambling as having had a necessary function on the American frontier. It allowed poor but ambitious people to form pools of capital that could be used to finance large projects. It didn’t matter who the leader was, for most of these, but it needed someone. Capital inequality, for all its obvious flaws and injustices, fulfills that role. Self-favoring divination is a subcase called arrogation. The third is aggregation, which is how we choose political leaders (voting) and valuate companies (markets), and it is also supposed to determine how companies run themselves, at least at the upper levels. Aggregation involves voluntary action (unlike coercion) and decisions made with defensible reasons (unlike divination), and that requires trust, both in the processes and in the people.
When there isn’t trust and aggregation fails, people tend to default to one of the other two sources of power. Ineffective managers fall back on coercion, while convex fraud can be viewed as an attempt to create a divination process that favors oneself (arrogation). No one would disagree that this stuff is socially and emotionally toxic. Is it, however, unprofitable? In the technological economy, wherein the relationship between input (morale, effort, skill, and talent) and output (economic value rendered) is convex, the answer’s a resounding “yes”. One of the traits of convexity is that small differences in conditions produce large variations in output, and distrusting employees doesn’t just shave a little off the top, as it would in a concave world. It hamstrings them, and everyone loses.
What is simple trust?
Trust is a multifaceted and complicated trait of a relationship. There are degrees of it. There’s a lower bar to trust someone with $10 than with $10,000,000. Likewise, there are people whom I’d trust to do the right thing on big issues, but wouldn’t leave small stuff (that they might ignore) to them. I’d rather do it myself. Additionally, there’s the matter of domain-specific competence. I’m not a doctor, so if you trust me to perform plastic surgery on you, then you’re making a mistake. This is pretty complex and specialized. I have good news: we don’t have to peer into that stuff. Simple trust is, well, much simpler. Is this person trusted to be decent and intelligent or, to use the business lingo, “credible”? Simple trust doesn’t require a foolhardy faith that someone will get everything right, but it assumes that he will try to do his best, communicate shortfalls, and accept feedback.
Simple trust is binary
What’s important about simple trust is that it’s binary. For the larger topic of trust, there are degrees and nuances. Simple trust either exists or it doesn’t. Here’s a barometer for simple trust: would you give this person an introduction? Would you trust him to give you a fair recommendation? There’s a quick, emotional “yes” or “no” that comes forth in, at most, a couple hundred milliseconds. “No, not that idiot!” “Yes, of course! Great guy!” That’s where simple trust lives. More complex nuances of audit structure, project/person fit, and contractual provisions– all of those being more ratiocinative and deliberate– come later.
In software, we refer to simple trust as the “bozo bit”. If the bozo bit is off, a person is presumed to have valuable input. That doesn’t mean that his ideas will never be rejected, but they will be heard and considered. He’ll have influence, if not power. When the bozo bit’s on, that person’s input is ignored and he’ll be viewed as a source of trouble and potential failure. In management culture, the exact same dynamic has been given the name “flipping the switch”. That’s when a report is prematurely judged to be incompetent and therefore ignored wholesale. When a manager flips the switch, the employee becomes A Problem, even if there’s no objective reason to view him that way.
Simple trust is symmetric
In general, simple trust is symmetric. People size each other up and generally figure out, on a subconscious level, where they stand on this matter. There are some cases of cluelessness that enable asymmetry, but they’re rare. That’s why a bad reference is so damaging to a person’s job search. It means that a candidate put simple trust in someone who thought she was an idiot. Either no one likes her and she’s backed into a corner, or she’s a terrible judge of character. This is different from other emotional affinities or revulsions. For example, there are probably people I like who don’t like me, and vice versa. Admiration driven by social status is asymmetric, almost by definition. More nuanced varieties of trust (driven by information specific to the parties) can be asymmetric. Simple trust rarely is. It’s reciprocal.
When I was involved in recruiting, I noticed that even though most candidates had nothing to hide, people hated giving direct managers as references. Actually, I think this fear is often misplaced. Managers are trained in not getting sued, so bad references from them are rare– they tend to fade to neutrality from both sides– and variance-reduction in the reference-checking process is generally desirable, since it only occurs when the decision to hire is nearly made (and will only be unmade by a surprisingly bad finding). Still, it’s understandable that people wouldn’t want ex-bosses in their continuing careers. Managerial authority tends to make simple trust impossible. That’s why the relationship is so awkward. The manager’s job is to look for causes of potential failure, especially in people. Fault-finding is her responsibility, as most companies define it. That, I would say, is the antithesis of simple trust. When there’s simple trust, you size people up for upside capability because you assume they’re not going to fail you in bad faith. When there isn’t, you start looking for breaking points: how far would this person need to be pushed before he became useless?
Why is simple trust symmetric? Because the lack of it is a negation of the person. Most people understand and respect that no one is going to enter a $20-million deal with them on their word alone, but when there’s a lack of simple trust, the relationship becomes adversarial. The absence of simple trust becomes fight or flight. Most of the corporate pyrotechnics we know and love come from the cases where “flight” is rendered impossible.
Simple trust is systemic
In the general and more complex sense, trust is not (nor should it be) transitive. Trusting someone does not mean trusting everyone she trusts. An ungodly number of parties have been ruined by invited people inviting people… who invite other people. Simple trust generally tries to be transitive. If Alice trusts Bob, and Bob trusts Eve, it’s unlikely that Alice will view Eve as a complete waste of her time. Alice’s bozo bit will probably be 90% of the way to “off” based on Bob’s recommendation alone. That’s why introductions are so important as a social currency; they are the process of simple trust transitivity.
If a relationship is (mostly) symmetric and transitive, then it will tend to coalesce into graph-theoretic cliques (clusters of nodes where each pair is directly connected) and those are often realized in human social cliques. Within the clique, there’s simple trust between all the members. They may not care for each other personally, nor agree very often, but they respect each other enough to work together. Outside of such cliques, simple trust is uncommon. This encourages the formation of an “us” and a “them”. How big can these trust-dense cliques get? No one knows, for sure. They can scale to hundreds of people, but that’s rare. Typically, simple trust among an unstructured set of people will congeal into a state where there are many small, trust-dense cliques amid an overwhelmingly sparse general graph. That’s what we see in high schools and prisons, and it seems to be a fairly natural human state in the absence of effective leadership.
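If simple trust really were perfectly symmetric and transitive, it would be an equivalence relation, and the trust graph would partition cleanly into cliques. Here’s a union-find sketch of that idealized limit (the names and edges are hypothetical; in reality, as noted above, transitivity is only approximate):

```python
# Idealized model: treat pairwise simple trust as symmetric and transitive,
# so trusted groups are equivalence classes (graph-theoretic cliques).
from collections import defaultdict

def trust_cliques(people, trust_pairs):
    """Partition `people` into trust cliques using union-find."""
    parent = {p: p for p in people}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in trust_pairs:  # one undirected edge joins both sides
        parent[find(a)] = find(b)

    groups = defaultdict(set)
    for p in people:
        groups[find(p)].add(p)
    return sorted(groups.values(), key=len, reverse=True)

people = ["Alice", "Bob", "Eve", "Mallory", "Trent"]
edges = [("Alice", "Bob"), ("Bob", "Eve")]
print(trust_cliques(people, edges))
# Alice, Bob, and Eve congeal into one clique; Mallory and Trent
# remain isolated points in a sparse graph.
```

This is the “many small, trust-dense cliques amid an overwhelmingly sparse general graph” picture in miniature: two edges are enough to form one dense cluster, while everyone outside it stays a single-node clique.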
Within a group, there will be a general character of trust sparsity or density, and it’s a binary systemic property. That is, a trust-sparse company will be sparse almost everywhere, while a trust-dense company will be uniformly dense, which is experienced in a sense that “everything just works”. Trust-dense environments may have an occasional “broken link”, but they’re often self-repairing and can adapt: those two people just won’t be made to work together if it can be avoided. Trust-sparse environments tend to generate small cliques that view each other with suspicion and, sometimes, outright adversity. Managers in trust-sparse companies tend to prize the “unicorn” who can become trusted in multiple cliques (because that’s so rare) who might be able to merge them into a larger coalition, but usually such a person is thrown out by one of the two.
This can often be understood as a sort of organizational self-efficacy. When a company is in a trust-dense mode, there’s an understanding that work done within those walls will, most often, be done by capable people who know what they’re doing. People, then, can specialize in what they do best and hand over work that is better done somewhere else. In trust sparsity, companies get a “none of our shit is any good” attitude that leads to duplicated effort, communication failures, and waste. That’s the point at which companies find themselves buying modestly talented software engineers at $5-million-per-head panic-priced “acqui-hires”; they have much cheaper talent in-house, but can’t find it because of the sense that the company is full of bozos and good people are so rare that it isn’t worth the effort to discover them.
Trust, teamwork, and teamism
Theories X and Y represent the extremes of the trust graphs. Theory X has everyone as an isolated point in a Hobbesian wilderness. Theory Y attempts to realize the complete graph: that is, the organization is a single, trust-dense, clique with everyone in it. That works extremely well, until a few “bad apples” take too much advantage. Theory X was the old, original-sin management style inspired by millennia of labor reliant on coercion rather than voluntary aggregation. Theory Y emerged around 1920 with Henry Ford’s model of paternalistic capitalism, and was destroyed in the “greed is good” 1980s when cocaine-fueled yuppies sold their employers’ secrets to get private equity jobs. Theory Y trusted employees too far, because it didn’t account for the return of pre-1945 economic inequality and the higher stakes of the post-Reagan world. What we have now, for the most part, is Theory Z teamist management that acknowledges, but attempts to control, the natural tendency of humans to form trust-dense cliques amid prevailing trust sparsity. It’s closer to Theory X than Y, but it acknowledges the usefulness of team cohesion.
In the Theory Z world, everything is a team. The rent-seekers call themselves “the management team” or (get your vomit bag out) “the leadership team” in official documentation. There’s even a “Termination Approval Committee” in many companies– a firing team. There are project teams and inter-team “special teams” and within-team subteams. Finally, if you paid attention in your HR-mandated management classes and know you can’t call someone a “retard” or “fag” in his or her performance review, you say “not a team player” instead. They both mean the exact same thing: I don’t like that person and will say the nastiest thing I can get away with. It’s just that “not a team player” has a more conservative and HR-appropriate sense of what one can get away with.
There are two problems with Theory Z. One is that teamism often tends toward mediocrity. I might seem “pro-convexity” with my technological-era cheerleading, but concavity has a major virtue, which is that it favors equality. Let’s say that you have six “points” of resources to allocate among three employees, and that an employee’s payoff for a given input depends on whether the task is concave (A) or convex (B), as follows:
+-------+----------+----------+
| Input | A Payoff | B Payoff |
+-------+----------+----------+
| 4 | 125 | 600 |
| 3 | 120 | 250 |
| 2 | 100 | 100 |
| 1 | 60 | 25 |
| 0 | 0 | 0 |
+-------+----------+----------+
For A, the concave task, the optimal allocation of the 6 points is equality: give 2 points to each, producing 300 points. That’s the “team player” world in which it’s better to get mediocre exertion from all team members. However, for B, the optimal arrangement is to give one employee (the “favorite”) 4 points, with the second (the “backup”) getting 2 points, and a third (the “loser”) getting none, which produces the maximum payoff of 700. It doesn’t even matter, in this example, who’s more talented. Convexity just naturally favors inequality in investment. That’s why the second and often dishonest source of power (divination) worked for the convex process of business formation before there was modern finance (aggregation). It just needed to pick a winner.
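To make the arithmetic concrete, here’s a minimal Python sketch, using the hypothetical payoff tables above, that brute-forces the best allocation of the six points:

```python
from itertools import product

# Hypothetical payoff tables from the text: input points -> output,
# for the concave task (A) and the convex task (B).
A = {0: 0, 1: 60, 2: 100, 3: 120, 4: 125}
B = {0: 0, 1: 25, 2: 100, 3: 250, 4: 600}

def best_allocation(payoff, points=6, workers=3):
    """Enumerate every split of `points` among `workers` and return
    the (allocation, total) pair with the maximum total output."""
    best = None
    for alloc in product(range(5), repeat=workers):
        if sum(alloc) != points:
            continue
        total = sum(payoff[x] for x in alloc)
        if best is None or total > best[1]:
            best = (alloc, total)
    return best

print(best_allocation(A))  # the equal split (2, 2, 2) wins, totaling 300
print(best_allocation(B))  # a 4/2/0 concentration wins, totaling 700
```

Concavity rewards spreading the resource evenly; convexity rewards picking a winner, exactly as the table suggests.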
Does all of this mean that teamwork is completely antiquated in the convex world, and we should just find ways to delegate convex work to highly-promising individuals, and starve (i.e., fire) the rest? Absolutely not. The assumption in the above model is of an additive relationship where the value produced by the business is the sum of individual productivities, and that’s not a good assumption. Work is only of value if it provides benefit to other people, and that generally means that employees who only value their individual productivity become detrimental. The best programmers aren’t the ones who commit the most lines of code, as if it were some commodity product, but the multipliers who make the whole team better by writing good code, teaching people how to use the assets they create, and generally removing obstacles. Thus, teamwork is not antiquated. Far from it.
The input/output relationship for well-structured team endeavors is also convex, so let’s return to the “B Payoff” function for convenience, and assume we have 4 players who can each contribute 1 point of effort to a project. If they each build their own thing in isolation, each puts forth a yield of 25 points, for a total of 100. None of them accomplished much, working alone. On the other hand, if they work together and put in a solid, focused 4-point effort, the yield is 600. This sort of team synergy is real and, when it happens, it’s powerful.
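The synergy arithmetic, using the same hypothetical B table:

```python
# Hypothetical convex payoff table from the text.
B = {0: 0, 1: 25, 2: 100, 3: 250, 4: 600}

isolated = 4 * B[1]  # four people working alone, 1 point of effort each
pooled = B[4]        # the same four pooling a focused 4-point effort
print(isolated, pooled)  # 100 vs. 600: a 6x gain from genuine teamwork
```

The convexity of B is doing all the work here: pooling effort moves the team up the steep part of the curve instead of leaving each member stuck at the flat bottom.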
What’s wrong with Theory-Z, closed-allocation, teamism? It’s not well-structured. It’s not self-organizing. It almost never produces team convexity (synergy). Why? The fundamental flaw of closed allocation is the conflict of interest between project leadership and “people management”. The closed-allocation middle manager is held responsible for the success of a project, while also auditing the work and career progress of a heterogeneous set of people, whose best move might be to another project. When there are conflicts, which side wins? The project has buy-in from upper management, or there wouldn’t be someone assigned to run it. The people have no vote, given the manager’s ability to unilaterally zero their credibility.
In a closed-allocation environment, the only thing that people on a “team” share is that they report to the same manager. They’ve been dropped by fate in the same place. Nothing more. They didn’t choose each other, they probably didn’t choose the work, and they have very different career goals. Part of the corporate Lie is that they’ll subordinate their agendas to the organizational mission, with the defectors either being vacuumed up into upper management (MacLeod Sociopath) or flushed out of the company, but only a few (MacLeod Clueless) really buy into that. Most just fake it to get by. What does all of this mean? It means that they’re not really a team.
Genuine team synergies can usually only be discovered at the grass-roots level. Corralling a set of people together and saying “Be a team, now!” doesn’t often work. Moreover, what is the only thing this “team” has in common? Their manager, who is trying to build a team while ensuring that he remains its leader. That’s a real problem. The most valuable group members are people who will lead if needed, but don’t mandate that they fill the top position– people who care more about executing the right ideas than executing their ideas. When someone tries to form a social group but keep its existence contingent on his inflexible superiority, the others don’t like it. The only thing that can become common between them, in this case, is their dislike for him.
One trait of Theory-Z teamism is that managers don’t have perfect unilateral credibility. Under Theory X, labor is material that can be flushed out if judged defective for any reason, so managers do have total credibility. Theory Y softens managerial power considerably by granting implicit credibility to all employees. Theory Z is closer to X than Y, but it has introduced “360-degree reviews”, meaning that while a single individual cannot overthrow a manager– HR will side with the boss, and she’ll get fired for even trying– a whole team, if they work together and tell the exact same story, can. Knowing this, the last thing a typical corporate manager wants is for his reports to form a genuine team. Isolation and division are necessary to keep the manager’s job secure.
The need for trust density
Convexity requires attention to variables such as project/person fit and team synergy that simply did not matter in the industrial era of individually concave labor. Morale, team synergy, and motivation used to be “nice to haves” that often weren’t especially necessary. They were rarely judged to be worth their cost. With concave labor, productivity can’t be improved very much over the typically achieved state, so management’s goal becomes to get things done cheaper, not better.
Concave human labor is going extinct. Model a task’s payoff as M * p(q), where M represents the maximum potential yield, p is a logistic “performance function” between 0 and 1, and q is a measure of inputs (skill, effort, talent, motivation) that we’ll leave abstract but assume to be measurable. With concave work, p(q) is greater than 0.5 for typical values of q. This means that we know what M is and, for predictable economic reasons, it’s usually low. We can define “perfect work” and management’s job is to track and reduce error. If the average error rate is 5% (that is, p(q) = 0.95) then management might warn the people above 10% and fire the ones above 15%. Concavity usually means that we can define perfect work and specify it, and thus we can usually have machines do it with p(q) very close to 1 and costs extremely low. If the technology to do so doesn’t exist now, some machine learning researcher is out there working on it. Driving is just one example of concave labor that will almost certainly be automated in the next couple of decades. Much of medicine will be automated in the same way. There will, probably as long as there are humans, be people called “doctors” whose job it is to understand, monitor, and communicate the purposes of the machines– that work will always be convex– but I doubt that we’ll have physical surgeons in 2250.
There is, however, a different class of labor where M is unknown, because almost no one has achieved a level near it, and “low” values of p (below 0.5, and possibly well below it) are acceptable. For example, getting a 1% share of a $10 billion market is no small achievement. Generally, for hard jobs where p(q) is low for observed values of q, M is going to be unknown, but very high. Since the logistic function p is approximately exponential where p(q) << 0.5, the result is an input/output curve that is, over all relevant values of q, an exponential growth function: highly convex. There is a “levelling off” point somewhere, where the exponential behavior stops and saturation sets in, but that’s tomorrow’s problem if p(q) << 0.5. The essence of convex labor is that no one can specify what “perfect completion” is, because the maximum potential hasn’t been found yet.
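A small sketch of this model, with a logistic p and an arbitrary illustrative M, shows both regimes: in the concave region (p > 0.5) an extra unit of input barely moves output, while deep in the convex region (p << 0.5) the same unit multiplies output by roughly e:

```python
import math

def payoff(q, M=1000.0):
    """Task payoff M * p(q), where p is the logistic function.
    M = 1000 is an arbitrary ceiling chosen for illustration."""
    return M / (1.0 + math.exp(-q))

# Concave regime: p(q) > 0.5, diminishing returns.
print(payoff(3) / payoff(2))    # ~1.08: one more unit of q gains ~8%

# Convex regime: p(q) << 0.5, near-exponential growth.
print(payoff(-7) / payoff(-8))  # ~2.72: the same unit nearly triples output
```

This is the sense in which “small margins in input produce large variances in output”: far below saturation, every marginal unit of q compounds multiplicatively rather than additively.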
Cost- and risk-reducing management strategies generally fail over convex labor, because of their effects on morale and communication, to say nothing of the attrition of talent as the most capable leave for other jobs. In the individually concave, industrial-era, world those aspects of work were treated as “marginal” factors that have small effects on q. For concave work, these “soft factors” were subordinate to the hard concerns of cost minimization. However, for convex labor, small margins in q matter. That is the nature of exponential functions: small margins in input produce large variances in output.
Trust sparsity is devastating when an organization is engaged in convex work, because people who do not feel trusted by management, and who do not believe their bosses will be fair to them, are never going to perform at the highest levels. That’s why trust-sparsity fails amid convexity in individual work. You don’t lose 5% (as you would, for concave work) when you treat employees like pure subordinates; you’re lucky if you get 5%. Moreover, it also falls down when it comes to synergistic team-building. With trust-sparsity, communication problems are commonplace and formation of appropriate teams to solve complex projects is not possible. That’s the moral of the “stone soup” parable discussed in Part 19: the trust-sparse and famished village is saved by outsiders who use a nonexistent delicacy to create the trust density that enables them to make a genuine meal. In a trust-sparse environment, the only way to get team convexity is to use deception.
The original formulation of the MacLeod process is that it’s a financial risk transfer between entrepreneurial, hard-working Sociopaths and risk-averse Losers, with the Clueless brought in to intercede and hold up a veil (Effort Thermocline) once the Sociopaths become complacent. I’m not sure that that’s quite right. If nothing else, an honest risk transfer (in which the party exposed to less risk enjoys less reward) is certainly not sociopathic. It’s how finance works. Rather, I think the infusion of bad Sociopaths (true psychopaths, as a psychologist would define it) comes later. Trust sparsity and clique formation mean that people need “protection” because isolated points (single-node cliques) are going to be flushed away. Personal trust is hard for a dysfunctional corporation to encourage, so they replace it with an impersonal, official organizational trust– credibility– in the form of job titles, project allocations, and special awards. Even though credibility’s supposed to be strictly a reward for performance, an internal market for that forms. The vulnerable start panic trading and become MacLeod Losers, and the ones who put themselves on the other side of those trades become an increasingly powerful (and credible) Sociopath tier. Useful idiots who still believe in credibility and Santa Claus become the Clueless. That is exactly how an organization is handed over from “good Sociopaths” (Technocrats) to the bad ones. It all starts when simple trust disappears and, as I said, it’s a systemic and binary property.
Your organization is probably failing
Simple trust does not mean “trust everyone, all the time, on everything”. Database servers should be backed up. It also doesn’t mean one should put oneself in a situation where a single “bad apple” can ruin the company. That’s where Theory Y went wrong. It was too liberal with trust to be practical, and the uptick in economic inequality that began in the 1980s exposed organizations to an onslaught of bad-faith actors. Trust density is extremely important, but it needs to be defended with enough sobriety that it still makes sense.
Simple trust also doesn’t mean that you trust people stupidly. People who prove themselves undeserving of trust must be fired, lest they threaten the prevailing trust density. This is why so-called Performance Improvement Plans are such a terrible idea. Once an employee has been deemed a bozo, separate. Carrying out a kangaroo court for two months while a “walking dead” employee poisons morale, just to save chump change on severance payments, is idiotic. You’re not rewarding failure when you write a severance check; you’re reaching an agreement that is mutually risk-reductive.
Rather, what simple trust means is that people who are hired are held to be competent and capable of doing good work, and generally permitted to work for the company directly. The issue with management in a trust-sparse environment is that it quickly realizes that it has the power to reduce a person’s credibility to zero. Given the inherent conflict between project and people management, this typically results in middle-management extortions that leave the company unable to evaluate projects properly and, therefore, without a strategic rudder, they generate lots of fourth-quadrant work. Technically speaking, it’s rare for a manager to be able to explicitly fire someone (too much legal risk) but a mean-spirited performance review system and competitive internal market for transfers have, in essence, the same effect.
Until about 20 years ago, it was very rare that performance reviews were part of an employee’s transfer packet. If they were, managers would give a glowing review on paper and a realistic review, verbally. Taking care of the employee’s long-term career interests showed a commitment to mutual loyalty. What changed? Enron. No, I’m not talking about the 2001 revelation of accounting fraud that made it one of the most spectacular corporate failures in history. Rather, I mean to focus on its sterling reputation in the 1990s as one of the most innovative companies in existence. Enron made tough culture an art form, and was lauded for doing so, because this was a time when even many liberal Americans thought that “lazy people” were getting too many breaks. Some of the innovations that came out of the Enron-style performance review systems were:
- Performance reviews are numerical and forced to comply with a pre-defined distribution. It’s a zero-sum game. For one person to get a good review, another must get a bad one. Performance reviews also contain written feedback and direction, but the only thing that actually counts is “the number”. It determines compensation, promotion opportunities, and internal mobility. Because review points are a scarce resource, in-fighting and horse-trading among managers is an invisible but potent force directing one’s career.
- A full review history is part of the employee’s transfer packet. This is the distilled essence of toxic trust sparsity. It means that employees are no longer trusted to represent their contribution to the company when they seek internal mobility. Their personal credibility is reduced to zero; all that matters is the “objective permanent record” they carry with them.
- “Passive firing”. Instead of an employee being fired by a direct manager (legal risk) his reputation is demolished in the review system, and he may not even be aware of it. This is how passive firing works: when the position on his project is cut, he’s technically permitted to interview for transfer, but immobile because of the smear. I don’t think that passive-firing systems reduce true legal risk, because damaging an employee’s reputation over months is much more damning than simply terminating him. I think the purpose is for the series of rejections (which are never explicitly connected to the bad reviews) to break the employee’s will and discourage the lawsuit from happening in the first place.
- Invisible career tracking. In tough culture, it’s not enough to do well by one’s boss. One also has to have a powerful manager. What that means is that there are well-known “good” and “bad” teams to be on, and everyone important knows which are which, but an outsider considering a job can’t tell based on the work alone. No matter how hard an employee works on one of the “bad” teams, it won’t matter because he’ll never be able to move to a “good” one. He’ll be competing against people who are moving from one good team to another and whose managers had more power.
- Welch Effect. The Welch Effect refers to the idea that the people most likely to lose their jobs are junior members of macroscopically underperforming teams (who had the least to do with said underperformance). In tough culture, that’s true, although I’d replace “underperforming teams” with “teams with politically unsuccessful managers”. Often, managers who have fewer review points to give out will tend to sacrifice a team member or two in order to make decent scores available to motivate the more senior members, who become increasingly frustrated with their lack of advancement. (This loyalist effect is a major part of what turns tough cultures back into rank cultures.) The high turnover on the “bad” team, however, exacerbates its negative reputation and performance problems.
We know what happened to Enron, and there’s little doubt in my mind that the internal horse-trading surrounding credibility and performance review scores is intimately connected with its externally visible ethical problems. Tough-culture review systems generate a “truth” about personnel that is, in fact, subject to so much power-playing and dishonesty as to make a mockery of the concept. This kills off any respect for truth as an idea, and leads to a corrosion of ethics. It doesn’t always result in defrauded investors, but it’s always corrosive. Microsoft’s similar system is blamed for more than a decade of mediocrity. Google has the same system, but with a brilliant twist: the “calibration scores” are secret! Google is the only company that I’ve encountered where a manager can spontaneously give positive verbal feedback and secretly blacklist a report. Google has a silent problem of managers doing exactly that to keep people captive on undesirable projects.
So, look around your company. Are performance reviews part of the transfer packet? If so, you live amid trust sparsity. The company has decided that most of its employees are bozos, and it has decided to label them as such, as a favor to management. It doesn’t trust you to represent your contribution and abilities.
Solving It
Sparsity of simple trust is devastating. Because the “bozo bit” becomes a global artifact, companies can be observed to “flip the switch” as a collective. Lose it, and you’ll almost certainly lose your corporate culture. At that point, it doesn’t matter if you’re a bank or a tech company or an advertising firm. You no longer have a good place to work. Innovation and genuine teamwork will end, except in hidden corners of the company, and all you have to look forward to is a twilight of zero-sum squabbling over credibility, a behavior that will linger on until the damn thing finally dies. So trust density needs to be a pillar of the organization if you want it, ten years later, to be something worth caring about.
What usually goes wrong? In startups, it seems to be rapid growth, but I don’t think there’s an intrinsic growth limit– some magic number like “25% per year”. I think what kills the bulk of these companies is that they hire before they trust, failing to comprehend the permanent loss that this inflicts upon them. That is true sociopathy, because it means that people are brought into the venture with the express purpose of putting them in a miserable, subordinate position. It’s also bad for almost everyone, because it creates insecurity in the rest of the labor force: being there no longer makes a person credible, so there’s a race for elevation as the company prepares for its split into two classes: the real members, and the bozos.
To hire before it trusts is not a short-term expediency. It’s the one thing that a company really can’t afford.