Jeff Haynie, CEO of Agentuity, a startup that aims to build the industry’s first “agent-native cloud,” hints that a billion AI agents — the number mentioned by Salesforce CEO Marc Benioff as a stretch goal — may be an underestimate. Haynie envisions trillions of AI agents.
A billion here, a trillion there, let’s just say the global network will eventually have as many AI agents as there are people, and then some. The question is, How do we build and deploy them?
Big, established software companies are introducing AI agents that fit within their existing architectures and ecosystems. Agentuity wants to start with a clean slate. The company’s tagline is, “The cloud, built for AI agents.”
Agentuity’s objective is to establish purpose-built foundational infrastructure for agentic computing. It’s an interesting idea, but one that won’t be easy to pull off given the investment and resources necessary to do it — and the far-reaching installed base of pre-agent infrastructure.
Agentuity, which raised $4 million in seed funding a few months ago, is making a run at it. The company just introduced its agent-native cloud. It seeks to address three requirements:
Agent-first design, where components are optimized for machine-to-machine interactions, including configuration through code rather than human-centric interfaces (see the sketch after this list).
Agentic operations, where interactions are run by and for agents, including observability, policy/governance, and mapping agent behavior across systems.
Agentic learning, in which agents self-heal and dynamically improve operations as the ecosystem evolves.
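To make “configuration through code” concrete, here is a minimal, hypothetical sketch in Python. The AgentConfig class, field names, and tool identifiers are illustrative assumptions, not Agentuity’s actual API; the point is only that an agent’s runtime contract is declared in code so other agents, rather than humans clicking through a console, can read, generate, and modify it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: AgentConfig and these field names are illustrative,
# not Agentuity's API. An agent's runtime contract is declared in code so that
# other agents (not humans in a console) can read, generate, and modify it.

@dataclass
class AgentConfig:
    name: str
    model: str                                      # which LLM backs the agent
    tools: list[str] = field(default_factory=list)  # machine-callable capabilities
    max_runtime_seconds: int = 3600                 # long-running by default, unlike a web request
    policies: dict = field(default_factory=dict)    # governance rules enforced by the platform

invoice_agent = AgentConfig(
    name="invoice-triage",
    model="any-provider/model-id",
    tools=["email.read", "crm.update", "slack.post"],
    policies={"pii": "redact", "spend_limit_usd": 50},
)
```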
Q&A with CEO Jeff Haynie
When I talked to Haynie last week, he was in San Francisco at the AI Engineer World’s Fair. A few interesting points from our conversation stand out: the idea that supervisor agents will delegate tasks to micro-agents; that agent performance can be judged by other agents; and that fewer connectivity standards may be required because agents will negotiate how best to share info.
Here’s an excerpt of our conversation, edited lightly.
John Foley: What’s the challenge/opportunity?
Jeff Haynie: I started trying to build agents a little over a year ago. We were running into a lot of infrastructure problems. We came to a point of view that the infrastructure would have to change with AI agents. Until recently, 100% of the software was built primarily for a human — the interfaces, workflow, and tools were built assuming that a human would be the developer, operator, maintainer.
We saw two things. One, the shift from edge- and cloud-based computing to agent-based computing is a different architectural paradigm. Edge-based computing is about low latency. How do you get the data as close as you can to the user that’s using it? Agents are the opposite. Agents are meant to run autonomously. Where they run may be determined more by compute or power or GPU resources, things like that. They’re meant to be long-running, and they can tolerate high latency. They might collaborate with a lot of other agents. So we need to reimagine the role of hardware and software in an agent-first world.
The other part is we think the future is that the operational maintenance, the development, and so on will be done by intelligent agents that are building and maintaining software. And there will be a lot more of them, simply because they’re going to be machines. We’re going to have billions and eventually trillions of agents on the Internet that will be acting semi- and fully autonomously and working together. That will look like cellular organisms, where they will be evolving, learning, building new skills and infrastructure. We think you need an agent-native cloud to support that infrastructure and those types of tools.
John Foley: In enterprises, autonomous agents are being used to perform business tasks and actions. But you’re talking about agents doing data management and orchestration. What space are you in?
Jeff Haynie: That’s a question of the evolution and maturity of the capabilities. You’re right that today a lot of what are being called ‘agents,’ certainly by the bigger companies, attach something semi-nondeterministic (an LLM, if you will) on top of a workflow, augmented with a set of highly directed, deterministic workflows, and use an agent with some context to provide a level of autonomy and agency. That’s good. That’s much like the early internet, where we had simple web pages, a simple request-response model, and fairly simple experiences that were wildly different from the experiences we have today with full-blown applications.
It’s more about the maturity of the tooling and infrastructure and the capabilities of the AI, as well as the maturing of the companies, to be able to embrace this new way of building applications. That’s what we’re going after — what we think will, over the next year or two, emerge as more sophisticated agents.
John Foley: There’s a debate over out-of-the-box agents vs. custom agents. What’s your take?
Jeff Haynie: It’s ‘and,’ not ‘or.’ You’re going to see every SaaS company trying to build agents on top of their legacy SaaS applications. That’s a survival mechanism in a world where software becomes relatively inexpensive to build and maintain. The second thing is, I think there’s a trend toward companies potentially moving away from packaged solutions to more bespoke [solutions], because it’s much easier now to code and maintain and build things. There’s a question: What happens with SaaS software over time? Is it going to be an augmentation of large, full-featured platforms [from] traditional providers? There’s probably going to be a transitional period, a mix. But in the long term, we’re going to have more highly customized, personalized software that does exactly what companies need, versus off-the-shelf software that does 90% of the things you don’t need and the 10% you do need.
John Foley: If agents can build and support these custom environments, then maybe that becomes feasible.
Jeff Haynie: That’s right. The technology has to mature enough that you can do those tasks. I see a near future where you’ll describe what you need. The agent will know how to code what it needs, get whatever data and access it needs, and put it in the right format. We will start to see more composable software, stuff we’ve been talking about for a long time but that’s been hard to do as humans: hard to coordinate the labor, the systems, and the techniques.
John Foley: The conversation is going to shift to multi-agent environments. How’s that going to play out?
Jeff Haynie: We’re going to have to have, like the internet’s building blocks of HTTP and DNS, vendor-neutral protocols that mostly focus lower in the stack. We’re seeing this with MCP and some of the other things, where you need a vendor-neutral communication protocol. You need to start with things like discovery. Where does the agent live, and how do you address it? Right now, most agents have a borrowed identity; they have your identity in the enterprise. That will be another big area of interoperability. They’re going to have to have their own identity if they’re going to have agency and if we’re going to have guardrails to manage them.
We may have fewer standards at the application or data layer than today, because we’ll let the agents figure out the interchange and negotiate what they need from each other.
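To illustrate that negotiation idea, here is a rough, hypothetical handshake in Python. The ‘agent://’ addresses, format labels, and functions are made up for the sketch; this is not MCP or any real protocol. Two agents advertise what they can exchange at discovery time and settle on a common format at runtime.

```python
# Hypothetical discovery-and-negotiation handshake; not MCP or any real protocol.

def advertise(agent_id: str, formats: list[str]) -> dict:
    """What an agent publishes at discovery time: who it is and what it speaks."""
    return {"agent": agent_id, "accepts": formats}

def negotiate(requester: dict, responder: dict) -> str | None:
    """Pick the first format both sides support; None means no common ground."""
    common = [f for f in requester["accepts"] if f in responder["accepts"]]
    return common[0] if common else None

a = advertise("agent://billing/invoice-triage", ["json:invoice.v2", "csv"])
b = advertise("agent://crm/account-sync", ["json:invoice.v2", "xml"])
print(negotiate(a, b))  # -> "json:invoice.v2"
```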
John Foley: Are your early adopters starting with a single agent? Do they have road maps for a dozen or more agents?
Jeff Haynie: What we’re seeing is that a singular agent most of the time needs multiple agents behind the scenes. What I’m calling micro-agents, similar to microservices, where agents are good at one task and then ask another agent to perform [a related] task for them, in sort of a supervisor or task-delegation way. We’re in a complex, multi-agent world pretty quickly.
We have conversations with users building on our platform who started off with an ‘agent,’ but once they got into it, they had multiple agents. They talk about a singular agent, but behind the scenes, it’s more of a plurality of agents collaborating.
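As a rough sketch of that supervisor/micro-agent pattern (the task names and routing table here are illustrative assumptions, not any particular framework): the supervisor does no work itself; it routes each task to a narrow micro-agent that does one thing well.

```python
# Minimal supervisor/micro-agent sketch. The micro-agents and routing table are
# illustrative; a real platform would add discovery, retries, observability,
# and asynchronous execution.

def summarize_email(payload: str) -> str:
    return f"summary of: {payload[:40]}..."

def update_crm(payload: str) -> str:
    return f"CRM updated with: {payload[:40]}..."

MICRO_AGENTS = {
    "summarize": summarize_email,  # each micro-agent does one narrow task well
    "crm": update_crm,
}

def supervisor(task: str, payload: str) -> str:
    """The supervisor delegates to the right micro-agent instead of doing the work."""
    agent = MICRO_AGENTS.get(task)
    if agent is None:
        raise ValueError(f"no micro-agent registered for task: {task}")
    return agent(payload)

print(supervisor("summarize", "Customer asks about renewal pricing for Q3"))
```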
The ROI is compelling for most people. There are cases where people talk about agents replacing humans, and that’s certainly a fear we all have over time. But our use cases are more about augmentation. How do I take away tasks like doing code reviews? There’s a version of that for every job role in most companies, a set of routine-oriented things, so we can free up our time and energy for ingenuity and innovation.
John Foley: What kinds of agents are your users building?
Jeff Haynie: The early types of agents are all over the board. One example is an agent someone built in open source that can receive DMARC emails, investigate each one, see if there are any deliverability problems, and report them to Slack. That’s one end of the spectrum, all the way to agents that do more sophisticated tasks around CRM. Most are around things that are hard for humans to do or highly repetitive, like reading an email and performing some sort of agentic action.
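For flavor, here is a compressed, hypothetical sketch of that DMARC-to-Slack idea; it is not the open-source project Haynie mentions, and the webhook URL is a placeholder. It parses a DMARC aggregate report, flags sources that failed both SPF and DKIM, and posts a summary to a Slack incoming webhook.

```python
# Illustrative sketch of a DMARC-to-Slack agent task, not the project referenced above.
import json
import urllib.request
import xml.etree.ElementTree as ET

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def failing_sources(dmarc_xml: str) -> list[str]:
    """Return source IPs whose mail failed both SPF and DKIM policy evaluation."""
    root = ET.fromstring(dmarc_xml)
    failures = []
    for record in root.iter("record"):
        policy = record.find("row/policy_evaluated")
        if policy is not None and policy.findtext("spf") == "fail" \
                and policy.findtext("dkim") == "fail":
            failures.append(record.findtext("row/source_ip", default="unknown"))
    return failures

def report_to_slack(failures: list[str]) -> None:
    """Post a one-line summary of deliverability problems to Slack."""
    if not failures:
        return
    body = json.dumps({"text": f"DMARC failures from: {', '.join(failures)}"}).encode()
    req = urllib.request.Request(SLACK_WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```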
John Foley: We’re seeing agents go rogue or do something the wrong way due to poor governance. What about data quality management?
Jeff Haynie: We have full observability, telemetry, and logging of everything agents are doing as part of our platform. That’s an important part of our strategy. The way we’re doing this internally is using LLMs as judges: have an LLM do a task around data transformation, then have another LLM judge the outcome. Did it do what we asked? Is it in the right format? Using LLMs as judges, or a consortium of agents judging agents, dramatically increases performance.
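A minimal sketch of that judge loop, under stated assumptions: call_llm() is a stand-in for whatever model API is in use, and the prompts, retry budget, and PASS/FAIL convention are illustrative, not Agentuity’s implementation.

```python
# LLM-as-judge sketch: one model does the work, a second model checks it.

def call_llm(prompt: str) -> str:
    # Stand-in for your model provider's client call.
    raise NotImplementedError("replace with a real model API call")

def transform(record: str) -> str:
    """Worker: ask one model to perform the data-transformation task."""
    return call_llm(f"Convert this record to the target JSON schema:\n{record}")

def judge(record: str, output: str) -> bool:
    """Judge: ask a second model whether the output did what was asked."""
    verdict = call_llm(
        "Did the output below correctly transform the input, and is it in the "
        "right format? Answer only PASS or FAIL.\n"
        f"Input: {record}\nOutput: {output}"
    )
    return verdict.strip().upper().startswith("PASS")

def run_with_judge(record: str, max_attempts: int = 3) -> str:
    """Retry the worker until the judge accepts the result or the budget runs out."""
    for _ in range(max_attempts):
        output = transform(record)
        if judge(record, output):
            return output
    raise RuntimeError("no output passed the judge within the retry budget")
```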
John Foley: Early-stage agent startups are seeing enterprise software vendors moving in their direction. What about the competitive landscape?
Jeff Haynie: The opportunity we and other startups have is the innovator’s dilemma. [The big players] still have to worry about the 300 other types of workloads they have to run in the cloud. We imagine an infrastructure that needs to be built from the ground up, much like the internet was. You couldn’t just put boxes in racks and call that cloud, right? You had to rebuild the stack, hardware and software, to be truly cloud-native. It’s the same thing. You have to rebuild the stack to be agent-native. That requires coordination between hardware and software. It requires different types of high-speed data links than you have today, and it requires agents in the cloud to be able to operate and manage that, not humans.
We think we’re at an advantage because we’re not starting with all the legacy. We’re not starting with data centers that are still trying to do all kinds of general-purpose computing. They’re just sort of saying, it’s cloud computing, but with agents now. It’s a ‘marketecture’ change, not so much a fundamental redesign of the core architecture.
John Foley: One potential advantage is that you’re agnostic, in the sense that you don’t have an installed base of apps.
Jeff Haynie: That’s correct. We’re agnostic, not only in the agent types, but in the frameworks. We can run those different workloads just as well, and we can allow them to interoperate, collect data, do telemetry, all of that in our cloud. We think that’s going to be a huge advantage. We don’t think a single framework or a single vendor is going to win all agents. It’s going to be a very heterogeneous agent infrastructure.