Criticism of AI startups abounds, ranging from uncontrolled cash burn and no clear route to profitability to valuations set for a significant correction | Stefan Ciesla-Grain, Co-Founder of Ayora, responds!


Startup forecasts, valuations and capital requirements are all largely driven by the founders’ and their backers’ views on the future scale of the opportunity at hand. If the opportunity is vast and unprecedented, it is likely to attract investment to match that… I believe that artificial intelligence is one such opportunity, and that – if a ‘hype cycle’ is your preferred way of gauging new technologies – we are in fact still quite far from the “peak of inflated expectations”.

There are two reasons why I remain convinced of that. Firstly, the rise of LLMs fundamentally changes the way computers are able to interact with the written and spoken word, those two most human forms of expression. For the first time, we are facing technologies that are on a path to viably performing the same real-world tasks as humans. That has important implications for how we think about total addressable markets – AI is unlikely to be constrained by traditional “IT budgets” and will instead likely end up competing for the vastly bigger human-employee budgets.

Secondly, we have seen relatively little practical use of AI to date! Foundational AI models have improved in quality at unprecedented speed: since 2020, LLMs have taken their MMLU scores from ~44% (GPT-3) to ~90% (Gemini Ultra). At the same time, we have but scratched the surface in terms of deploying these now incredibly capable models to productive uses. Use-case penetration has therefore yet to make its full impact on the industry, even if – as some believe – LLMs are already approaching “peak capability”.

With all that being said, creating sustainable value has always been hard, and that hasn’t changed in the current AI era. That’s likely one reason why some might baulk at the rise in AI-adjacent valuations. One of the most talked-about challenges that AI-powered software companies face is adding sufficient value on top of the foundational models, and this is especially true for startups. Data is AI products’ fuel; your customers probably have more of that data than you do, and they likely have sufficient technical expertise to deploy the foundational models in-house.

A startup’s value-add typically starts with identifying a genuinely good use case for AI. What does that look like? Firstly, any foundational LLM should be deployed as a means to an end, not as an end in itself. That might sound obvious, but it is all too frequently forgotten in the niche in which we’re building (LegalTech). A (genuine) case for a multi-model approach is also a good sign. And on top of all that, a good use case should be practical: does it allow for establishing an efficient data pipeline on the one hand, and for harnessing a “risk-adjusted” capability of the LLMs on the other? Getting hold of the right data and earning users’ trust are both key considerations that should be addressed at the product level.
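To make the multi-model, “risk-adjusted” idea slightly more concrete, here is a minimal, purely illustrative sketch in Python. The task types, model names, thresholds and review rules are hypothetical assumptions for the sake of the example, not a description of Ayora’s (or anyone else’s) actual architecture: the point is simply that lower-stakes tasks can be routed to cheaper models, while higher-stakes tasks go to a stronger model with a human in the loop.

# Illustrative only: a "risk-adjusted" multi-model router.
# Model names, task types and review rules are hypothetical assumptions,
# not a description of any real product's architecture.

from dataclasses import dataclass


@dataclass
class Task:
    kind: str    # e.g. "clause_extraction", "contract_drafting"
    stakes: str  # e.g. "low" or "high" - business impact if the model errs


# Hypothetical routing table: cheaper models for low-risk work,
# a stronger model plus mandatory human review for high-risk work.
ROUTES = {
    ("clause_extraction", "low"):  {"model": "small-llm", "human_review": False},
    ("clause_extraction", "high"): {"model": "large-llm", "human_review": True},
    ("contract_drafting", "low"):  {"model": "large-llm", "human_review": True},
    ("contract_drafting", "high"): {"model": "large-llm", "human_review": True},
}


def route(task: Task) -> dict:
    """Pick a model and review policy based on task type and stakes."""
    # Fall back to the most conservative route if the task is unrecognised.
    return ROUTES.get((task.kind, task.stakes),
                      {"model": "large-llm", "human_review": True})


if __name__ == "__main__":
    print(route(Task(kind="clause_extraction", stakes="low")))
    # -> {'model': 'small-llm', 'human_review': False}

The interesting decision sits in the routing table rather than the code: working out which tasks are safe to automate cheaply and which justify a bigger model plus human review is exactly where the “risk-adjusted” thinking, and a startup’s value-add, comes in.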

Good use case or not, it is also true that AI startups can be (but don’t necessarily have to be) capital-intensive by nature. I recently attended a start-up conference where AI-native startups were asked about their current ‘hair on fire’ problem. Many of these companies were at (pre-)seed stage, and the majority said “access to GPUs”. That sent a shiver down my spine, as it pointed to the significant burn these companies face on R&D while their big-tech competitors train ever more generalisable models. Also, hardly anyone mentioned ‘GTM’, ‘customer acquisition’ or ‘long sales cycles’, which likely means they were not commercialising their products just yet – needless to say, increasing their risk profile.

In summary, I firmly believe that today’s startups are building in uniquely exciting times. That said, AI or not, the devil is still in the details!
