Venture capital is a very risky business. By my math, the chances that an unfunded startup ends up being worth at least $1 billion are one in five million. And the typical venture capitalist expects to get one big hit from every 10 investments. In short, given the risk, investors are highly unlikely to write checks to a company that proposes to limit its upside on one big hit.

But this week, Silicon Valley was shocked to learn that Sam Altman, who ran Y Combinator -- one of Silicon Valley's most successful startup incubators -- was leaving to take over as CEO of a for-profit company that will give all the profit it generates above a certain amount to a nonprofit that governs it, according to Recode.

The nonprofit, which researches artificial intelligence, is called OpenAI, and it's being effectively replaced by the new "capped-profit" company, OpenAI LP.

If investors put money into OpenAI LP, they can receive at most $100 back for every $1 they put in. Any profit above that amount will go to the nonprofit, which will use it in accordance with its charter. The charter is based on four principles: "broadly distributed benefits, long-term safety, technical leadership, and cooperative orientation."
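The cap mechanics can be sketched in a few lines of code. The 100x multiple is the only figure here taken from OpenAI's announcement; the function name, the investment amount, and the exit value below are hypothetical, chosen only to illustrate how proceeds would split:

```python
def split_proceeds(invested: float, proceeds: float, cap_multiple: float = 100.0):
    """Split an investor's share of proceeds under a capped-profit structure.

    Returns (investor_payout, nonprofit_payout). The investor keeps at most
    cap_multiple times the amount invested; any excess flows to the nonprofit.
    """
    cap = invested * cap_multiple
    investor_payout = min(proceeds, cap)
    nonprofit_payout = proceeds - investor_payout
    return investor_payout, nonprofit_payout

# Hypothetical: a $10M investment whose share of profits grows to $1.5B.
investor, nonprofit = split_proceeds(10e6, 1.5e9)
print(investor)   # 1000000000.0 -- capped at 100 x $10M
print(nonprofit)  # 500000000.0  -- the excess goes to the nonprofit
```

Note that below the cap the structure behaves like an ordinary investment: `split_proceeds(10e6, 5e8)` sends nothing to the nonprofit, which is why the cap only matters for the rare outsized hit.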

This idea strikes me as terrible for three reasons:

  • It's superficial. It is a wrapper that makes potential investors think that they will be able to make 100 times their money and then brag that everything above that amount is being donated to a philanthropic cause. However, given the nature of venture capital investing, the odds of making a return that high are very small.
  • It's poorly thought through. It sets up an inherent conflict between the for-profit and nonprofit organizations and does not offer well-tested evidence that this conflict can be resolved effectively. Simply put, even if OpenAI generates a return over 100x, it is not clear how the extra money will be invested. Why not just give the extra money to the investors and let them decide which philanthropic causes to fund?
  • It's hypocritical. Rather than creating two competing organizations, the values of the for-profit company should incorporate social benefits. The idea behind OpenAI seems to be that caring about social benefits is inherently in conflict with making money, but I believe the opposite is true.

That is, it's true if you take a long-term view of the role of a company in society. If a company believes and acts according to the notion that it ought to make employees, customers, and communities better off in the long run, then making money and making others better off are mutually reinforcing.

The idea behind this belief was explored by Fred Reichheld, an emeritus partner at the consulting firm Bain & Co., in his book, The Loyalty Effect. Reichheld showed that happy employees lead to happy customers, which in turn results in higher profitability -- which pays off for shareholders.

The link between loyalty and profitability becomes clear when you consider its opposite. If you treat employees badly, they will take out their unhappiness on your customers. And those customers may try your product once, but after getting a dose of your unhappy employees, they'll find another vendor.

To meet its sales goals, that disloyal company will need to spend more in marketing to bring in new customers. Ultimately, that transactional, short-term view of relationships between employees and customers ends up wasting money due to high turnover of employees and customers.

If a company treats its employees well, they will devise new ways to keep customers happy, so they'll stick around and keep buying -- thus boosting the lifetime value of the customers.

Were OpenAI to use this approach to manage itself, it would never be tempted to pursue short-term, anti-social business strategies, because they would damage its relationships with employees, customers, and communities.

To put teeth into this idea, OpenAI would appoint an independent board of directors, which would hire a CEO with a proven track record of leading organizations that grow profitably by applying these principles.

What's more, the board would evaluate proposals to use capital for growth based on specific criteria -- e.g., approving proposals that would most benefit employees, customers, and communities -- which are consistent with these principles. 

Moreover, were the CEO to act in ways that were inconsistent with these principles, the board would have the power to replace the CEO.

Capping investors' returns will never get OpenAI where it wants to be, since it actively separates the profit-seeking and nonprofit aims. Companies that want to promote the social good should try the Loyalty Effect instead.