An AI-based product that slips past the defenses of people who think they hate AI or get turned off by branding like "Copilot+ PC". A lot of people are really hoping it all dries up and blows away the way NFTs did.
Or maybe the honest-to-God non-dull tool that has nothing to do with AI. Like a Photoshop clone that does everything in linear light, makes gorgeous images, and doesn't crash when you open the font chooser.
The tricky thing about "data is the only moat" is that it depends heavily on what kind of data you're talking about.
Proprietary training data for foundation models? Sure, that's a real moat - until someone figures out how to generate synthetic equivalents or a new architecture makes your dataset less relevant.
But the more interesting moat is often contextual data - the stuff that accumulates from actual usage. User preferences, correction patterns, workflow-specific edge cases. That's much harder to replicate because it requires the product to be useful enough that people keep using it.
The catch is you need to survive long enough to accumulate it, which usually means having some other differentiation first. Data as a moat is less of a starting position and more of a compounding advantage once you've already won the "get people to use this thing" battle.
Building in a niche B2B space and this resonates. The data moat isn't just volume though - it's the accumulated understanding of edge cases.
In my domain, every user correction teaches the system something new about how actual businesses operate vs how you assumed they did when you wrote the first version. Six months of real usage with real corrections creates something a competitor can't just replicate by having more compute or a bigger training set.
The tricky part is that this kind of moat is invisible until you try to build the same thing. From the outside it looks simple. From the inside you're sitting on thousands of learned exceptions that make the difference between "works on demos" and "works on real data."
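A minimal sketch of what that looks like in code, assuming nothing fancier than a key-value store of overrides (names like ExceptionStore are made up for illustration, not from any real product):

    from dataclasses import dataclass, field

    @dataclass
    class ExceptionStore:
        # Hypothetical store of learned exceptions: raw inputs the model
        # got wrong, mapped to the human-corrected output.
        corrections: dict[str, str] = field(default_factory=dict)

        def record(self, raw: str, corrected: str) -> None:
            # Capture a user correction as a permanent override.
            self.corrections[raw.strip().lower()] = corrected

        def resolve(self, raw: str, model_guess: str) -> str:
            # Prefer an accumulated correction over the model's fresh guess.
            return self.corrections.get(raw.strip().lower(), model_guess)

    store = ExceptionStore()
    store.record("ACME Corp (payroll)", "ACME Corporation")
    # Six months in, there are thousands of these; a competitor starting
    # from zero reproduces none of them.
    print(store.resolve("acme corp (payroll)", "Acme Co"))  # ACME Corporation

The code is trivial; the moat is the contents of that dict, which only accumulates from real usage.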
We totally found this doing financial document analysis. It's so quick to do an LLM-based "put this document into this schema" proof-of-concept.
Then you run it on 100,000 real documents.
And you find there really are so, so many exceptions and special cases. So begins the journey of constructing the layers of heuristics and codified special cases needed to turn ~80% raw accuracy into something asymptotically close to 100%.
That's the moat. At least where high accuracy is the key requirement.
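For anyone who hasn't lived this, the shape of it is roughly a raw model pass followed by an ever-growing list of post-processing rules. A rough Python sketch, where llm_extract() stands in for the actual model call and both rules are invented for illustration:

    import re

    def llm_extract(document: str) -> dict:
        # Placeholder for the ~80%-accurate "put this document into this
        # schema" first pass; a real version would call the model.
        return {"amount": "1,234.56", "currency": None, "date": "31/12/2024"}

    def fix_amount(record: dict, document: str) -> dict:
        # Learned special case: strip thousands separators the model
        # sometimes leaves in numeric fields.
        if record.get("amount"):
            record["amount"] = record["amount"].replace(",", "")
        return record

    def infer_currency(record: dict, document: str) -> dict:
        # Learned special case: some invoices omit the currency field but
        # mention it in the footer text.
        if record.get("currency") is None and re.search(r"\bEUR\b", document):
            record["currency"] = "EUR"
        return record

    HEURISTICS = [fix_amount, infer_currency]  # grows for months on real data

    def extract(document: str) -> dict:
        record = llm_extract(document)
        for rule in HEURISTICS:
            record = rule(record, document)
        return record

    print(extract("Invoice total due ... all amounts in EUR"))

Each rule is a day of debugging a real document compressed into a few lines. The list is the moat.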
In case you haven't come across the idea yet, this concept is all the rage among the VC thoughtbois/gorls. Not sure if Jaya Gupta at Foundation coined it or just popularized it, but: context graph.
Could be a good fundraising environment for you if you find the zealots of this idea.
Why is it that we have agents that can prospect for sales leads and answer support tickets accurately, but we don't seem to be able to consistently generate high-quality slides?
I don't know about prospecting, but "answer support tickets accurately"? Seriously, this must be ironic, right?
"It's great to hear you've already tried X twice. But have you tried reading our FAQ section on X? Also, try using this setting that doesn't exist, or this dialog that was removed in 2022."
Efficiency will ultimately decide if LLMs become feasible long-term. Right now, the LLM industry is not sustainable. Investors were promised literally the future in the present, and it is now undeniable that ASI, AGI, or even moderately competent general-purpose quasi-autonomous systems won't happen anytime soon. The reality is that there is not space for all these players in the market in the long term. LLMs won't go away, but the vast majority of mainstream providers definitely will.
Yes, during the 2000s there was the "mashup" fad: people creating companies around mashing data from one service into another, like putting Craigslist listings on a Google Map.
And guess what: none of those mashup companies lasted more than a couple of years, because they didn't have direct access to the data.
This is heavily context-dependent... There are plenty of situations where everyone knows the relevant factors; it comes down to who has possession of land, resources, people, etc.
> Which brands do people trust?

Which people do people of power trust?
These are often at odds with each other. So many times engineers (people) prefer the tool that actually does the job, but the PMs (people of power) prefer shiny tools that are the "best practice" in the industry.
Example: Claude Code is great and I use it with Codex models, but people of power would rather use "Codex with ChatGPT Pro subscription" or "CC with Claude subscription" because those are what their colleagues have chosen.
Data has historically been a moat, but I think now more than ever it's a moat of bounded size / utility.
The biggest data hoarders now compress their data into oracles whose job is to say whatever to whoever - leaking an ever-improving approximation of the data back out.
DeepSeek was a big early example of adversarial distillation, but it seems inevitable to me that frontier models can and will always be siphoned off in order to produce reasonably strong fast-follow grey market competition.
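The mechanics are simple enough to sketch schematically (teacher_api() and finetune() below are placeholders, not real endpoints or libraries):

    def teacher_api(prompt: str) -> str:
        # Stand-in for a frontier model behind a paid API.
        return f"teacher answer to: {prompt}"

    def finetune(student: dict, pairs: list[tuple[str, str]]) -> dict:
        # Stand-in for supervised fine-tuning on the harvested pairs.
        student["training_examples"] = student.get("training_examples", 0) + len(pairs)
        return student

    prompts = ["explain moats", "summarize this filing", "write a SQL query"]
    pairs = [(p, teacher_api(p)) for p in prompts]  # the "siphoning" step
    student = finetune({"name": "fast-follow-model"}, pairs)
    print(student)  # every API call leaks a bit more of the teacher back out

Rate limits and ToS slow this down, but they can't stop it: any model you can query is a model you can approximate.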
I find the premise that coding is one of the hardest problems for LLMs flawed. Isn't coding the easiest area for AI, with lots of data to train on and easily verifiable results?
What if the only moat is domains where it's hard to judge (non-superficial) quality?
With code generation you don't see what's wrong right away; it's only later in the project lifecycle that you pay for it. Writing looks good when you skim it, but is embarrassingly bad once you start actually reading.
With some things (slides, apparently) you notice right away how crappy they are.
I don't think it's just better training data; I think LLMs apply largely the same kind of zeal to different tasks. The difference is the places where coherent nonsense ends up being acceptable.
I’m actually a big LLM proponent and see a bright future, but believe a critical assessment of how they work and what they do is important.
If I had to answer this question 2 years ago, I wouldn't have said software was a "don't see it's bad until later" category, what with compilers and the code needing to actually do something very specific. Business slides, however, are full of exacting facts and definitely never contain generic business speak masquerading as real insight. /s
This feels like telling a story after the fact to make it fit.
I agree, and by all accounts the success of coding agents is due to code being amenable to very fast feedback (tests, screenshots) so you can immediately detect bad code.
That's feedback on functionality, though, not necessarily quality. Linters can provide some quick feedback on quality too, in limited ways.
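That feedback loop is cheap to wire up, which is probably why it works so well. A toy sketch, shelling out to pytest and ruff as stand-ins for whatever checks a given project actually uses:

    import subprocess

    def run_checks(path: str) -> list[str]:
        # Collect fast, machine-readable feedback an agent can act on.
        failures = []
        for name, cmd in [("tests", ["pytest", "-q", path]),
                          ("lint", ["ruff", "check", path])]:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                failures.append(f"{name} failed:\n{result.stdout}{result.stderr}")
        return failures

    # An agent loop retries edits until run_checks() returns [] or a
    # budget runs out. Tests verify functionality, lint catches some
    # quality issues; deeper quality still needs a human in the loop.

Slides have no equivalent of a failing test, which might be the whole answer to the question upthread.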
I feel like algorithmic/architectural breakthroughs are still the area that will show the most wins. The thing is that insights/breakthroughs of that sort tend to be highly portable. As Meta showed, you can just pay people $10 million to come tell you what they're doing over there at that other place.
Hasn't this been proven true many times now? Just look at the difference between GPT-3 and GPT-3.5, for example (which used the same dataset). That, and all the top-performing models see large gains from thinking, using the exact same weights.
And all the new research around self-learning architectures has nothing to do with the datasets.
You get some anecdotal evidence and immediately post a hot take claiming to have discovered a new invariant?
I guess a bunch of us, including myself, have taken the engagement bait here, but does it really take somebody saying something stupid to start a conversation on something?
Marketing/relationships is the only moat, not data. You can have amazing data and make an amazing product, and some asshat with a product that barely works and really tight marketing will crush you. Then people will ask why there isn't a product like yours on the market, all while ignoring all your marketing material.
Companies always try to make it seem like data is valuable. Attention is valuable. With attention, you get the data for free. What they monetize is attention. Data plays a small part in optimizing the sale of ads, but attention is the important commodity.
Attention is not a moat; it's the thing that's in the castle's treasure room. Without something that makes your service sticky, attention may well just walk right out the door.
I feel like the data to drive the really interesting capabilities (biological, chemical, material, etc., etc., etc.) is not, in large part, going to come from end users.
It's the other way around. You gather user data so that you can better capture the user's attention. Attention is the valuable resource here: with attention you can shift opinions, alter behaviors, establish norms. Attention is influence.
Corruption is the only moat. Oligarchs can buy anything and funnel attention and money into it, creating financial success for shareholders despite poor leadership, zero social responsibility, suboptimal ideas and execution (see: Tesla)
Just commit fraud repeatedly while owning the people who run the DoJ. Easy peasy; no amount of attention or cash flow can displace that.
What's annoying is that companies capture user data, lock it into their platforms, transform it, and resell it. But it is really the users' own data that they're selling back to us. I would like regulation here: if you capture my data, then I get to pick who you must and must not share it with.
Vertical integration.
Horizontal integration.
Cross- and/or mass-relationship integration.
Individual relationship investment/artifacts.
Reputation for reliability, stability, or any other desired dimension.
Constant visibility in the news (good, neutral, sometimes even bad!)
A consistent attractive story or narrative around the brand.
A consistent selective story or narrative around the brand. People prefer products designed for "them".
On the dark side: intimidation. Ruthless competition, acquisitions, lawsuits, a reputation for dominance, famously deep pockets.
Keeping someone is easier. Tiny things hold onto people: an underlying model that delivers results with fewer irritations/glitches/hoops; low- to no-configuration installs and operation; windows that open, and other actions that happen, instantly. Simple attention to good design can create fierce loyalty in those for whom design or friction downgrades feel like torture.
Obviously, many more moats in the physical world.