A March banking crisis that touched every venture-backed company we work with, and a five-day governance crisis at OpenAI in November — November 17th through 21st — that may prove the most consequential stretch the technology industry has experienced since the iPhone launch. This letter is about both, and about what it means to build through them.

On the Silicon Valley Bank Weekend

Silicon Valley Bank failed on Friday, March 10th. Over the following sixty hours, almost every venture-backed company we work with confronted the operational possibility that its primary deposit account would not be accessible on Monday morning. We spent most of the weekend on the phone, mostly listening. By Sunday night, federal regulators had announced that depositors would be made whole; by Monday morning, the immediate crisis was resolved.

The substantive lesson is not about banking. The lesson is about how thin the operational redundancy of the venture-backed economy actually is. Almost every company we work with had concentrated its operational cash at a single counterparty because that counterparty had built its product around exactly that need. The convenience of a single counterparty is the convenience of a single point of failure. We are encouraging diversification across our portfolio. The recommendations are not novel; they are the kind of operational hygiene that any mature treasury function would have implemented at scale. They were not implemented at scale because the convenience lost by doing so had been priced higher than the tail risk of not doing so. The weekend reset the pricing.

We will note, for the record, that the regulators' decision on Sunday night to guarantee uninsured deposits was the correct decision and that we are grateful for it. We will also note that the precedent it set will need to be addressed in the next administration's banking-policy framework, and that the eventual resolution will likely be less generous than the March 12th announcement implied. Companies should plan accordingly.

On the Five Days That Reset AI Governance

On Friday, November 17th, the board of OpenAI removed its chief executive. By Monday, Microsoft had announced it was hiring him, and the overwhelming majority of OpenAI's staff had threatened to follow. By late Tuesday, the board had reversed itself. The arrangement that emerged is materially different from the one that existed before the weekend, and the implications for how the most important AI companies will be governed over the next decade are not yet fully digested.

We do not have exposure to OpenAI directly. The implications, however, run through every AI company we have funded. The lesson, in our reading, is that the institutional structures we have inherited from the previous decade of internet companies are inadequate for the governance pressures that frontier AI development will create. New structures will need to be invented. The governance of OpenAI before that weekend — a non-profit board governing a for-profit subsidiary, with mission-aligned controls that were tested for the first time and found wanting under the pressure of staff loyalty and capital deployment — was an unusual structure even by AI-industry standards. The structure that has replaced it is closer to a conventional commercial company than to the experimental non-profit it descended from. The shift is, we believe, almost certainly permanent.

The harder question, which our framework will need to address over the next several years, is whether comparable governance pressures will arrive at the other frontier-AI companies, and whether their existing structures will hold under similar tests. We are watching closely.

On What ChatGPT, One Year On, Made Real

ChatGPT was launched at the end of November 2022. One year on, it has produced more measurable behavior change among knowledge workers than any other software product in our memory. The acceleration in productivity for certain categories of work — drafting, summarization, code generation — is the largest discontinuity in white-collar work that any of us have witnessed.

The companies that will compound from this discontinuity are not, in most cases, the companies building foundation models. They are the companies building the workflows that the foundation models make possible. Our 2023-2024 vintages will be heavily weighted toward this layer. We have made nine commitments this year to companies whose products integrate language models into existing professional workflows; in seven of the nine cases, the companies were founded after January 2023, and the founders are operating with intuitions about model capabilities that founders from earlier cohorts have not yet developed.

The deeper observation, drawn from the year's deployments, is that the productivity gains from current-generation models are concentrated among the workers who were already most productive in their categories. The least productive workers, in our experience, do not benefit from current-generation models because they cannot evaluate the models' outputs. This has implications for which categories of work the models will most disrupt: the most-impacted categories will be those whose value-add was concentrated in the most productive workers, and the least-impacted will be those whose value-add was distributed evenly across worker quality.

On the Companies That Got Faster Because of It

Several of our portfolio companies shipped product this year at velocities that would have required twice the headcount in 2022. The productivity differential is durable. The companies that internalize it earliest will be the ones that compound. We have seen the differential most clearly in companies whose engineering organizations are most fluent in the new tools — not the companies with the most aggressive AI strategies in their executive communications, but the companies whose individual contributors have integrated the tools into their daily work without ceremony.

This matches a pattern we have seen in prior technology adoptions: the companies that benefit most are the ones whose adoption is bottom-up rather than top-down. Top-down AI adoption strategies tend to produce slide decks; bottom-up adoption tends to produce shipped product. The difference is consequential, and we are weighting it in our diligence.

A Closing Note

Building through a year like this one is not the same as building through an ordinary one. It is the test for which our discipline was constructed. Next year's letter will cover the year that follows.

The Partners
Winzheng Family Investment Fund · December 2023