Quarter-Inch Holes

AI Disruption
By Chris Brown, Principal, The Inspired Team
February 1, 2024

In 2005, Clay Christensen and Scott Cook wrote an article called “Marketing Malpractice: The Cause and the Cure.” They argued that companies tend to focus too much on creating narrow variations of existing products rather than listening to what customers actually need. The rallying cry for the article was a famed quote by Professor Theodore Levitt: “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!”

At one point, the authors expand on Levitt’s quote:

“Every marketer we know agrees with Levitt’s insight. Yet these same people segment their markets by type of drill and by price point; they measure market share of drills, not holes; and they benchmark the features and functions of their drill, not their hole, against those of rivals. They then set to work offering more features and functions in the belief that these will translate into better pricing and market share. When marketers do this, they often solve the wrong problems, improving their products in ways that are irrelevant to their customers’ needs.”

Christensen and Cook were talking about traditional consumer goods offered by brands like Starbucks and Procter & Gamble. But I’ve been thinking about how well their writing describes what I’m seeing in my corner of the internet.

Over the last decade, we’ve churned out what seem like endless permutations of application layer software tooling, typically seat-based, ultimately getting us to a point where meme-able market maps like this exist. This circumstance was created by decreased barriers to entry (limited technology risk), coupled with a dramatic increase in investor appetite for the as-a-Service delivery mechanism. Annually-contracted, seat-based growth was predictable, and these businesses tended to have excellent financial profiles when run well. At one point, the Bessemer Emerging Cloud Index had a market capitalization of $3T, which is equivalent to the gross domestic product of the world’s fifth largest economy.

But supply and demand dynamics rule our universe, and two of them are starting to seem obvious:

  1. The supply of SaaS offerings is outstripping buyer demand, resulting in a market structure that is eroding theoretical business quality in zero-marginal-cost-of-distribution land. More products can be great for buyers. But not for founders or investors. Competition reduces pricing power and increases customer acquisition costs (obviously). We can look to public markets as an example, where few companies are hitting the benchmark of sales & marketing at 20% of revenue. For most, meeting growth expectations just means scaling marketing spend or sales headcount somewhat linearly alongside revenue.

The influx of dollars into venture, coupled with the perceived fundability of as-a-Service tooling, has our current capital cycle unnaturally stuck in the right-hand quadrant of this chart.

  2. Selling tools worked best in the past. Going forward, there will be more demand — and more budget — for a completed job. I don’t think we’re in a “SaaSpocalypse” as some have suggested. That seems hyperbolic. But by and large, venture dollars invested into B2B software over the last decade were wired to the bank accounts of companies building tools that charged customers per seat and described their value prop using some combination of the following phrases: efficiency, collaboration, productivity. When it comes to these types of products sold in this particular way, the list of venture-scale tools that should exist but do not is getting shorter. This is abstract, so let’s look at an example in the software QA (quality assurance) market. Generally speaking, it’s important to monitor the quality of code as we hand the world’s most critical industries over to software. So why isn’t there a giant SaaS business supporting the QA function? In a seat-based pricing world, the market for test outlining software just isn’t very large (at least in venture speak). Perhaps low single-digit billions depending on your assumptions (a rough version of that math is sketched below). However, the market for QA services is ~$40B. Companies pay agencies, consultants, and freelancers tens of billions of dollars per year to write and maintain tests for them. That’s an enormous spread between SaaS TAM and services TAM in the same market. What does this tell us? Companies don’t want to pay $500/month/QA engineer for a tool to help them write tests more efficiently. They want coverage. They want to maintain their product velocity without worrying about quality. Adding another tool to the mix doesn’t achieve that. But the implied job-to-be-done is only addressed by professional services companies that don’t fit the desired specs of a venture-backed business.
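
As referenced above, here is a hypothetical back-of-envelope version of that seat-based math in Python. The addressable headcount is an assumption of mine chosen only to illustrate how a “low single-digit billions” tool TAM can sit next to a ~$40B services pool; nothing here comes from the post beyond the $500/seat and ~$40B anchors.

```python
# Back-of-envelope comparison of a seat-based tool TAM vs. the QA services pool.
# The addressable headcount is a hypothetical assumption; the seat price and the
# services market size are the figures cited in the paragraph above.

QA_ENGINEERS_ADDRESSABLE = 400_000   # assumed global seats a QA tool could sell into
SEAT_PRICE_PER_MONTH = 500.0         # $500/month/QA engineer, from the example above
QA_SERVICES_MARKET = 40e9            # ~$40B paid to agencies, consultants, freelancers

seat_based_tam = QA_ENGINEERS_ADDRESSABLE * SEAT_PRICE_PER_MONTH * 12
print(f"Seat-based tool TAM: ${seat_based_tam / 1e9:.1f}B")      # ~$2.4B
print(f"QA services market:  ${QA_SERVICES_MARKET / 1e9:.0f}B")  # ~$40B
print(f"Spread: roughly {QA_SERVICES_MARKET / seat_based_tam:.0f}x")
```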

“Outcome Products” Enter the Canon

Somewhat fortuitously, as the above dynamics reached their apogee, large language models (“LLMs”) became available to the mass market via ChatGPT in November 2022. The tech world spent the ensuing 14 months deliberating on how generative AI will impact competitive dynamics in software. Most of these questions are still outstanding.

But one thing has become clear to me after a year and a half of meeting teams tinkering with new ideas.

We are now at a point where the technology at our disposal allows us to build products that do things for people. It’s easy to write this off as a silly or simple statement. But it would represent a distinct departure from the “sell-a-tool-per-seat” era. If LLMs enable services-oriented businesses historically reliant on human labor to operate with greater efficiency by trading labor COGS for inference COGS, it may open up gross profit pools for venture-backed entrepreneurs that were not previously accessible. Put more simply, it could unlock a new kind of venture-backable startup.
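
To make that swap concrete, here is a minimal back-of-envelope sketch. Every number in it (price per unit of work, labor cost, inference cost) is a hypothetical assumption of mine, not a figure from the post; the only point is that replacing a large per-unit labor cost with a much smaller per-unit inference cost moves gross margins from services-like to software-like.

```python
# Hypothetical comparison of gross margins when a unit of work is delivered by
# human labor vs. by LLM inference. Every number below is an illustrative
# assumption, not data from the post.

def gross_margin(price_per_unit: float, cost_per_unit: float) -> float:
    """Gross margin as a fraction of revenue for one delivered unit of work."""
    return (price_per_unit - cost_per_unit) / price_per_unit

PRICE_PER_UNIT = 100.0          # what the customer pays for one completed unit of work
LABOR_COST_PER_UNIT = 60.0      # assumed fully loaded human cost to deliver that unit
INFERENCE_COST_PER_UNIT = 5.0   # assumed model and infrastructure cost for the same unit

if __name__ == "__main__":
    services_margin = gross_margin(PRICE_PER_UNIT, LABOR_COST_PER_UNIT)
    outcome_margin = gross_margin(PRICE_PER_UNIT, INFERENCE_COST_PER_UNIT)
    print(f"Labor-delivered margin:     {services_margin:.0%}")   # ~40%, services-like
    print(f"Inference-delivered margin: {outcome_margin:.0%}")    # ~95%, software-like
```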

Some folks I respect have started to write about this, too. Sarah Tavel published a great piece called “Sell work, not software.” More recently, Roddy Lindsay wrote an op-ed in The Information titled “How to Build an AI-Enabled Services Company.” I expect more to do so, because founders are just starting to test this “Outcome Product” business model across industries and use cases.

At the risk of oversimplifying, we’ve seen three archetypes of business that fit this description.

Agentic Products (“Find the right info, interpret, take action, repeat”)

These are software systems that complete their own OODA loops (observe, orient, decide, act), culminating with some type of action taken at the end of a flow before beginning again. (Credit to my friend Ben Cmejla for this terminology.)
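
As a rough illustration of what completing such a loop can look like in code, here is a minimal, hypothetical agent skeleton in Python. The structure and the function names (observe, orient, decide, act) are placeholders of my own and do not reflect QA Wolf’s or any other company’s actual architecture.

```python
# A minimal, hypothetical skeleton of an agentic product's core loop:
# find the right info, interpret it, take an action, then repeat.
# None of this reflects any specific company's implementation.

from dataclasses import dataclass

@dataclass
class Observation:
    raw_context: str          # e.g., logs, test results, a page of documentation

@dataclass
class Decision:
    action_name: str
    arguments: dict

def observe(environment: dict) -> Observation:
    """Gather the latest state from whatever systems the agent watches."""
    return Observation(raw_context=environment.get("latest_state", ""))

def orient(obs: Observation) -> str:
    """Interpret the observation, e.g., by prompting an LLM to summarize it."""
    return f"Interpreted: {obs.raw_context}"

def decide(interpretation: str) -> Decision:
    """Choose the next action; in practice this is usually another model call."""
    return Decision(action_name="noop", arguments={"reason": interpretation})

def act(decision: Decision, environment: dict) -> None:
    """Execute the chosen action against the outside world."""
    environment["last_action"] = decision.action_name

def run_agent(environment: dict, max_cycles: int = 3) -> None:
    """Complete the observe -> orient -> decide -> act loop, then start again."""
    for _ in range(max_cycles):
        decision = decide(orient(observe(environment)))
        act(decision, environment)

if __name__ == "__main__":
    run_agent({"latest_state": "checkout flow test failed on step 3"})
```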

In 2022, Inspired led an early round for a business called QA Wolf attacking the market opportunity in QA that I described above. QA Wolf is building an agentic product that sells a result — they get customers to 80% test coverage in weeks by automating the creation and go-forward maintenance of testing for web and mobile apps. To the customer, QA Wolf is a service provider with deeper product integration than one would usually expect in a services relationship. QA Wolf is able to provide this outcome at software-like margins because of the automation they’ve built to assist with test creation, maintenance, bug identification, and subsequent fixes.

The margin structure for agentic products will depend on how complex a job they are performing for a customer. You can imagine that some problems out there are “GPT-4 grade.” They are addressable right now, require limited to no humans-in-the-loop, and can leverage existing LLMs at reasonable cost. Other problems will require more model calls or larger prompts. It’s going to be interesting to see which products benefit from the release of GPT-4+ level models. There are varying degrees of confidence on the speed at which models will advance, but regardless of the pace, each inch in that direction is “TAM expanding” for agentic products.
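
To see why more model calls and larger prompts erode the margin, here is a tiny hypothetical cost model. The per-token price, call counts, token volumes, and job price are all illustrative assumptions of mine, not actual provider pricing or figures from the post.

```python
# Hypothetical cost model for one completed "job" delivered by an agentic product.
# Prices and volumes below are illustrative assumptions, not real provider rates.

def inference_cost_per_job(model_calls: int,
                           tokens_per_call: int,
                           price_per_1k_tokens: float) -> float:
    """Total inference spend to complete one job for the customer."""
    return model_calls * (tokens_per_call / 1_000) * price_per_1k_tokens

PRICE_PER_JOB = 50.0  # assumed price the customer pays for one completed job

# A "GPT-4 grade" problem: a few calls with modest prompts.
simple = inference_cost_per_job(model_calls=5, tokens_per_call=2_000, price_per_1k_tokens=0.03)

# A harder problem: many more calls with much larger prompts.
harder = inference_cost_per_job(model_calls=200, tokens_per_call=8_000, price_per_1k_tokens=0.03)

print(f"Simple job: cost ${simple:.2f}, gross margin {(PRICE_PER_JOB - simple) / PRICE_PER_JOB:.0%}")
print(f"Harder job: cost ${harder:.2f}, gross margin {(PRICE_PER_JOB - harder) / PRICE_PER_JOB:.0%}")
```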

The businesses best hedged against the timeframe of model advancement are those where humans-in-the-loop are inherent to the customer interaction. I think QA Wolf is a great example here. Pilot, the bookkeeping service, would be another. Their workforce of accountants could in theory leverage whatever degree of LLM usage is affordable for the company at a given point, and then increasingly rely on models as it makes economic sense to do so. Eventually, these businesses may watch their employees’ responsibilities move away from the work they were hired for as they functionally morph into prompt engineers. Perhaps the day-to-day of an accountant at Pilot in 2026 looks unrecognizable from that of an independent accountant in the market. (We are not investors in Pilot.)

Search Only Products (“Find the right info, enable me to do the rest”)

These businesses retrieve and synthesize information for a user but tend not to take action. They find a way to build a dataset on a particular topic, make it easy to query immediately, and then enable a user to perform a task. These products perform best in areas where mission-critical information (1) exists in a volume that would be impossible to keep in the human mind or (2) lives in disparate places.
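
As a rough sketch of that pattern (assemble a dataset, make it immediately queryable, hand the answer to the user to act on), here is a minimal, hypothetical example. The toy keyword scorer stands in for whatever embedding search and LLM synthesis a real product would use, and nothing here reflects InPharmD’s or Revv’s implementation.

```python
# Minimal, hypothetical sketch of a "search only" product: assemble a corpus of
# mission-critical documents, then answer a user's question by retrieving the
# most relevant passages for the user to act on. A real product would use
# embeddings and an LLM for synthesis; this keyword scorer is just a stand-in.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str

CORPUS = [
    Document("Windshield calibration", "After windshield replacement, recalibrate the forward camera."),
    Document("Fender repair", "Check the side radar bracket when repairing a bent fender."),
]

def score(query: str, doc: Document) -> int:
    """Crude relevance score: count of query words appearing in the document."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.body.lower() or w in doc.title.lower())

def answer(query: str, top_k: int = 1) -> str:
    """Retrieve the most relevant passages and return them for the user to act on."""
    ranked = sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:top_k]
    return "\n".join(f"{d.title}: {d.body}" for d in ranked)

if __name__ == "__main__":
    print(answer("shattered windshield on an Audi"))
```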

Nikhil Krishnan just wrote a great post on InPharmD, which is an interesting example of this type of product in healthcare.

Another example is a company called Revv in the auto industry. As cars continue to evolve from “dumb” machines into computers on wheels, the amount of information required to service them is overwhelming. There are hundreds of thousands of combinations of make, model, and trim on the road today, all with different sensor SKUs and locations. A 2024 model car may have over 100 sensors! This makes it challenging for mechanics and technicians, who as a labor force are still getting familiar with advanced driver-assistance systems (“ADAS”), to know exactly what to do when they see an Audi with a shattered windshield, a Volvo with a bent fender, or a Ford with a dented roof. You don’t have to squint to see how querying a database of aggregated repair manuals could be quite valuable to a repair shop, especially if they want to take on more of this work — which tends to be higher margin than mechanical repairs — but feel incapable of doing so today.

We’re not investors in either company, I’m just a fan of both.

Vertical datasets like this — hard to build yet finite in scale with moderate go-forward maintenance — play beautifully into the current pricing structure of LLMs. Reasonable inference costs for the win! I expect business models for “search only” products to be largely dependent on what specific action they enable a customer to perform, as selling information alone is challenging. In both examples above, it’s not unreasonable to believe that these products could argue for a take rate on incremental business driven. That would be distinct from traditional vertical software monetization.

Hardware-Enabled Products

One of the byproducts of collective groupthink over the past few years was an opinion that hardware products were, at least compared to software, bad venture investments. But if we are indeed entering an Outcome Products era, then hardware will undoubtedly play a role. Some jobs-to-be-done require atoms as much as bits.

Inspired recently invested in a robotics business that assists institutional land owners with the collection of forest inventory data. Today, $10 billion is spent annually to do this with tape measures and clipboards! If someone entered this market trying to sell SaaS — despite the fact that customers currently store their information in 1990s-era software — it would be like running up against a brick wall. The market is simply not a motivated buyer of software, mostly because software alone doesn’t solve their problem. But by selling a result — in this instance, robotic-assisted asset inventories that can be used to manage annual audits — the market (and all subsequent software opportunities within it) can be cracked open.

Cost reduction in off-the-shelf sensors enables this business. There have been a handful of venture-backed businesses that have used an off-the-shelf hardware wedge to break into markets — Flock Safety and Verkada come to mind. What I’m excited to see is the LLM “expansion into hardware.” Specifically, how LLMs can play a role in reducing the time and cost of bringing more sophisticated hardware products to market, especially vertical autonomy that works well in unstructured environments. Five years elapsed between the 2017 paper that introduced us to the transformer architecture (“Attention Is All You Need”) and the release of ChatGPT in 2022. The authors of this paper went on to found some of the big-name startups in AI today. Over the past twelve months, there has been a flurry of interesting papers exploring the application of LLMs in robotics. Perhaps one of these teams becomes the “Attention Is All You Need” mafia for hardware. Cheaper, quicker paths to market for robotics would be a big enabler of this business model.

Wrapping it Up

If you think through the list of business interactions that would be a good fit for Outcome Products, it's expansive. Roughly 13% of the U.S. GDP is driven by companies selling a service or expertise rather than a manufactured product. That doesn’t even include the global BPO or IT outsourcing industry, which in India alone is expected to be somewhere around $350B by 2025. LLMs won’t be able to make all of this yield to efficient economics. But a lot of it seems within grasp. Where else is there a QA-industry-like spread between seat-based TAM and the revenue pool of service providers?

I’m super excited to see more “quarter-inch hole” businesses. Prodded by newfound capabilities, the way we think about building, pricing, and packaging software products feels like it is going through a phase change.

If you’re working on something in this space, I’d love to say hello – I’m chris@inspiredcapital.com.