When Scott Hsu joined Contrary Research as a Senior Research Fellow, he ran into a demanding standard. Writing a deep-dive on a company like Databricks or Pinecone meant he couldn't hide behind product enthusiasm; he had to explain, precisely, why the business works: who pays, how much, why they don't leave, and what kills the company if one assumption is wrong. That discipline, applied across more than a dozen AI companies, gave Scott a framework most AI commentary lacks: he reads these companies as businesses first and technologies second.
By day, Scott is a Technical Program Manager working directly with the CTO at Rippling, one of Silicon Valley's fastest-growing enterprise software companies — which means he also evaluates AI vendors from the inside, with real procurement decisions on the line. His company breakdowns have been published in Meet Startup, Taiwan's leading startup media platform, and Economic Daily News. We sat down with him to understand how he thinks.
You've written breakdowns of over a dozen AI companies. Take us back to where that analytical discipline actually came from.
Contrary Research forced a specific standard. You couldn't write 'this company has great AI' and move on. You had to answer: what is the actual business model, what does the competitive landscape look like in three years, and what has to be true for this company to win? If you couldn't answer those questions precisely, you weren't done.
That framework changed how I see everything in the AI space now. Most AI coverage treats the technology as the story. I became much more interested in the business underneath — who owns the customer relationship, what does the cost structure actually look like at scale, and what happens when a well-funded competitor shows up. Those questions don't get asked enough, and the answers are usually more interesting than the demo.
When a new AI company gets buzz, what's the first thing you look at?
Distribution before product. The question isn't whether the AI is good — it's who already owns the customer, and whether this company is building on top of that relationship or fighting against it.
Most AI companies are entering markets where incumbents already have the trust, the contract, and the embedded workflow. The companies that break through tend to own one specific entry point so completely that they become the default before anyone has time to respond. That's a go-to-market insight more than a technology one. In most of the companies I've studied, the AI itself ended up being the commodity. The distribution was the actual product.
What separates durable AI businesses from ones that get commoditized?
I keep coming back to three patterns, and I've found a clear example of each in companies I've written about.
Harvey AI taught me the value of workflow depth. Harvey isn't a legal chatbot — it's embedded in how lawyers at top firms actually work: drafting, due diligence, contract review. When AI becomes structurally load-bearing inside a professional workflow, switching is genuinely costly — not because users are lazy, but because retraining and re-validating outputs in a regulated environment carries real risk and real time. Harvey understood early that the product wasn't the model. It was the integration. That's a fundamentally different moat than a general-purpose assistant.
Cursor showed me how data flywheels actually compound. Every accepted suggestion, every rejection — that's signal. Across millions of developer interactions, Cursor is building proprietary understanding of how real engineers write real code that no competitor can replicate by training on public GitHub data alone. The question I ask every AI company now: does using the product make the product meaningfully smarter, in ways that compound faster than a well-funded competitor can catch up?
Palantir is an instructive case study in switching cost architecture as a deliberate design choice. Their software doesn't just sit on top of customer data — it becomes the decision-making operating layer for the organization. Once that's embedded, switching isn't just a technical migration. It's organizational and contractual. Worth studying as a design principle regardless of what you think about the company.
What surprised you most across everything you analyzed?
How thoroughly revenue growth has decoupled from defensibility. Lovable hit $17 million ARR within months of launching. Mercor crossed $100 million in about two years. By any historical standard, those numbers are remarkable.
But fast ARR right now often reflects a category tailwind more than a durable moat. Enterprises and developers are in an experimentation phase, buying broadly, which inflates early adoption numbers across the board. What I've noticed is that AI tools actually have lower behavioral switching costs than most software. People change habits faster than anyone expected. The productivity gains from switching to a better tool are immediate and obvious, which means the switching friction that traditional SaaS relies on — data migration, retraining, workflow disruption — is much weaker here. The companies that survive aren't necessarily the ones with the best technology today. They're the ones building structural lock-in fast enough to stay ahead of the next thing. Most of the AI companies generating headlines right now are racing against that clock without fully acknowledging it.
What do most people misread about AI company valuations?
They apply SaaS valuation frameworks to businesses with fundamentally different cost structures.
Traditional software had near-zero marginal cost — you built the product once and served the next customer essentially for free. AI companies look like software companies on the surface, but inference isn't free, retraining is expensive, and many of these businesses have human-in-the-loop quality layers that don't show up cleanly in headline gross margin numbers. From my analytical read of Harvey's model, a meaningful part of their defensibility comes from legal domain expertise embedded in their fine-tuning — and maintaining that kind of specialized knowledge has a real ongoing cost that generic AI providers can't simply absorb.
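To make the cost-structure contrast concrete, here is a toy gross-margin comparison. All numbers are illustrative and made up for the sketch; they are not any real company's economics, just a way to see how per-seat inference and review costs compress margins relative to traditional SaaS:

```python
def gross_margin(revenue_per_seat: float, cost_per_seat: float) -> float:
    """Gross margin as a fraction of revenue for one seat."""
    return (revenue_per_seat - cost_per_seat) / revenue_per_seat

# Hypothetical numbers, chosen only to illustrate the structural difference.
# Traditional SaaS: near-zero marginal cost to serve an additional seat.
saas_margin = gross_margin(revenue_per_seat=100.0, cost_per_seat=5.0)

# AI product: inference, periodic retraining, and human-in-the-loop review
# all scale with usage, so marginal cost per seat is materially higher.
ai_margin = gross_margin(revenue_per_seat=100.0, cost_per_seat=35.0)

print(f"SaaS gross margin: {saas_margin:.0%}")  # 95%
print(f"AI gross margin:   {ai_margin:.0%}")    # 65%
```

The point of the sketch is not the specific percentages but the shape: in traditional software the margin line barely moves as usage grows, while in AI products cost of revenue grows with every query served, which is why SaaS valuation multiples transfer poorly.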
The market will eventually demand unit economics clarity. The companies that have been building sustainable margins quietly will look very different from the ones that optimized purely for top-line growth.
Are there categories of AI companies you're particularly bullish or cautious about?
Infrastructure, clearly bullish. The picks-and-shovels layer — compute, orchestration, data infrastructure — has a structural advantage I find hard to argue against. Databricks, which I covered at Contrary Research, sits in that layer. The thesis is simple: it doesn't matter which applications win the next cycle. The infrastructure underneath has to grow regardless. That's a more durable position than betting on any single application.
My caution is concentrated in two places: standalone AI assistants and application-layer SaaS that wraps general-purpose AI capabilities without a genuine moat underneath. The existential risk for these companies is straightforward — every time a foundation model provider ships a meaningful capability improvement, the application wrapper's value proposition shrinks. If your entire business is 'we use AI to do X,' and OpenAI or Anthropic can do X natively next quarter, you have a structural problem. The application companies I find genuinely interesting are the ones with distribution advantages or proprietary data the foundation models can't absorb. Everything else is building on borrowed time.
On geography — I think the honest answer is that frontier AI development requires an extraordinary concentration of three things simultaneously: deep capital markets, a large domestic market to sustain growth, and world-class technical talent. The US has all three. China is the only other market that comes close — not because of any single factor, but because the combination exists at sufficient scale. Domestic security imperatives create sustained demand that doesn't depend on Western capital or distribution. Every other market — Europe, Southeast Asia, Taiwan — has genuine talent, but not the same convergence of scale across all three dimensions. That's not a critique of the talent in those ecosystems. It's a structural observation about what it actually takes to build and sustain a frontier AI company.

What's one thing you want people to take away from your analysis?
That the technology is almost never the interesting question. The interesting question is always the business underneath.
When I evaluate any AI company now, I'm asking: who owns the distribution, what does the cost structure actually look like at scale, and what would have to be true — specifically — for the moat to hold in year three? Those questions sit at the intersection of business strategy and technical understanding, and that combination is genuinely rare. Most engineers don't naturally think about unit economics and distribution. Most business people can't evaluate whether an AI architecture is defensible or just well-marketed.
The single most clarifying question I've found — one that works whether you're an investor, a buyer, or just someone trying to understand the space — is simply: what stops a well-resourced competitor from replicating this in eighteen months? If the honest answer is 'not much,' that's a very different company than one where the answer involves proprietary data, embedded workflows, or structural distribution advantages that took years to build. The best AI companies are building toward the second answer. Most aren't there yet — and the market hasn't fully priced that in.
Scott Hsu is a Technical Program Manager at Rippling and a Senior Research Fellow at Contrary Research. His company breakdowns have been published in Meet Startup, Taiwan's leading startup media platform, and Economic Daily News, covering companies including Cursor, Harvey AI, Perplexity, Mercor, Lovable, and Anduril.