Agent Experience — Weaviate Podcast #116 Recap
“In a few years, you won’t be able to deliver good developer experience if you’re not delivering great agent experience, because every developer will be using different agentic tooling to build and work on their projects.” — Matt Biilmann, CEO of Netlify, delivers this striking prediction that encapsulates why AI agent experience is rapidly becoming the next crucial frontier in software development.
Context & Background
As AI agents increasingly become intermediaries between humans and digital systems, the concept of Agent Experience (AX) is emerging as a critical consideration for technology companies. Episode #116 of the Weaviate Podcast brings together Matt Biilmann (Co-Founder and CEO of Netlify), Sebastian Witalec (Director of Education at Weaviate), and Charles Pierse (Director of Innovation Labs at Weaviate) to explore this nascent field.
Biilmann’s recent article “Introducing AX: Why Agent Experience Matters” has resonated throughout the tech industry, highlighting how the paradigm shift towards AI agents demands new approaches to API design, documentation, and system interaction. This conversation extends beyond theoretical discussions into practical considerations of how companies like Netlify and Weaviate must adapt their platforms for a future where both humans and AI agents are primary users of their services.
From Developer Experience to Agent Experience
The evolution from Developer Experience (DX) to Agent Experience (AX) represents a natural progression in how we design digital systems. Matt Biilmann explains how Netlify’s history offers a perfect case study:
“We focused really obsessively around the idea of developer experience, right? Like if you are building this kind of front-end experience, how do we make the path from writing your code, to having something running live in production on a URL, the shortest possible? And how do we really make that experience feel unintrusive and intuitive?”
This same philosophy now guides Netlify’s approach to AX, focusing on creating frictionless paths for AI agents. The company has already built a Netlify GPT that allows agents to deploy websites anywhere in the GPT ecosystem by simply mentioning “Netlify.” This deployment happens without requiring additional sign-up steps — maintaining the same principle of reducing friction that guided their approach to developer experience.
The implications extend beyond just adapting existing services. Biilmann explains that AX requires a fundamental rethinking of how APIs are structured: “APIs that we think of as made for machines, but in reality, they’re made for developers to write machine code. And they’re not necessarily written today for agents.” This recognition demands that companies like Weaviate consider not just how humans interact with their vector database, but how AI agents will leverage its capabilities.
The Need for New Standards and Protocols
Perhaps the most significant challenge in the emerging AX landscape is the lack of standardized approaches for agent-system interaction. Sebastian Witalec highlights this problem, noting that each company now creates custom solutions:
“If I have a service, it’s not a big deal, I could just go there and rewrite the plugin. But I think the whole idea is that what if you don’t have to rewrite that plugin? And there’s not just ChatGPT, there’s so many other kinds of LLMs out there.”
The conversation reveals several areas where new standards are urgently needed:
- Documentation formats — Beyond initial attempts like llms.txt, more sophisticated formats are needed to help agents understand how to use services efficiently
- Tool discoverability — Agents need standardized ways to discover what tools are available at a given domain
- Version handling — Sebastian drew a parallel to HTTP status codes: “We need like the equivalent of a 204 for agents… if an agent comes in and says ‘oh, I get a 204,’ it’s like, there’s a new way of doing it, I should abandon it and try the new way”
- Error diagnosis — When builds fail or operations don’t complete as expected, agents need standardized ways to understand what went wrong
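The discovery gap in particular can be made concrete with a short sketch. Since no standard exists yet, both the probe locations and the llms.txt-style structure below are assumptions; the sketch just shows the kind of lightweight convention an agent could rely on instead of per-vendor plugins:

```python
# Hypothetical sketch: there is no agreed discovery standard yet, so the
# file locations and the llms.txt-style link format below are assumptions.

def candidate_discovery_urls(base):
    """Places an agent might probe a domain for agent-facing metadata."""
    return [
        f"{base}/llms.txt",              # doc index aimed at LLMs
        f"{base}/.well-known/agents.json",  # hypothetical tool manifest
        f"{base}/openapi.json",          # machine-readable API surface
    ]

def parse_llms_txt(text):
    """Parse a minimal llms.txt-style index: '- [name](url): description' lines."""
    links = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("- [") and "](" in line:
            name = line[3:line.index("]")]
            start = line.index("](") + 2
            url = line[start:line.index(")", start)]
            links.append((name, url))
    return links
```

An agent landing on an unfamiliar domain could walk the candidate URLs in order and stop at the first one that resolves, rather than being hand-wired to each service.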
Agent-to-Agent Communication and Specialized Interactions
The podcast explores fascinating possibilities for how agents might communicate with each other in ways fundamentally different from human-human or human-computer interactions.
Sebastian Witalec posits that agent-to-agent communication could bypass natural language entirely: “Machine learning models use vector embeddings as a way to communicate with each other. They wouldn’t even need to talk to each other through words, they could use a completely different either language or format.”
This approach could potentially resolve ambiguities inherent in natural language. As Sebastian notes: “English is a very imprecise language, there’s so much ambiguity in it. But if you could have something that is extremely precise, the agents could actually use that to talk to each other.”
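A toy sketch makes Sebastian's idea tangible: a receiving agent can resolve an incoming vector to the nearest known intent by similarity, with no prose exchanged at all. The 3-dimensional "embeddings" and action names here are invented for illustration; real agents would use a learned embedding model:

```python
import math

# Toy illustration of agents exchanging meaning as vectors instead of text.
# The 3-d vectors and action names are made up for the example.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# The receiving agent's "vocabulary" of actions, keyed by a reference embedding.
ACTIONS = {
    "deploy_site":    [0.9, 0.1, 0.0],
    "rollback":       [0.1, 0.9, 0.0],
    "diagnose_build": [0.0, 0.2, 0.9],
}

def interpret(incoming_vector):
    """Resolve an incoming embedding to the closest known action."""
    return max(ACTIONS, key=lambda name: cosine(incoming_vector, ACTIONS[name]))
```

The precision argument shows up in the resolution step: nearest-neighbor matching over a fixed vocabulary has none of the ambiguity of parsing a free-form English request.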
Matt Biilmann builds on this idea by suggesting specialized mini-agents for specific tasks: “There might be simpler if we can just give them access to our agent and ask our agent, like, why did it fail… you’re starting to see sort of these steps towards agent-to-agent communication, where you have more generalist agents that drive the whole workflow and then smaller specialized agents for solving maybe a specific problem.”
Connor Shorten, the podcast’s host, further suggests that efficiency gains might come from bypassing traditional API formats altogether: “This idea of outputting a gRPC byte stream to query an API, rather than a JSON.”
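The efficiency argument is easy to demonstrate: once both sides agree on a schema, the same query travels as a fraction of its JSON size. The field layout below is invented for the example, not a real wire format (gRPC uses Protocol Buffers rather than raw `struct` packing, but the size effect is the same):

```python
import json
import struct

# Same query payload, two encodings. The binary layout is an invented
# fixed schema: two unsigned ints, a float count, then packed float32s.

query = {"collection_id": 7, "limit": 10, "vector": [0.12, -0.5, 0.33, 0.9]}

json_bytes = json.dumps(query).encode()

binary = struct.pack(
    "<IIH4f",
    query["collection_id"],
    query["limit"],
    len(query["vector"]),
    *query["vector"],
)
# binary is 26 bytes (4 + 4 + 2 + 4*4); the JSON form is several times larger.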
These approaches could dramatically increase efficiency while maintaining human oversight — a crucial balance that all participants emphasized throughout the discussion.
The Challenge of Agent Bias and Tool Selection
An intriguing concern emerges around how agents will develop preferences for certain tools based on which ones they can use most effectively. Matt Biilmann frames this as a new form of SEO:
“Tools that invest a lot in agent experience and make a material difference in the sense of like, now they are actually better to use for LLMs and agents than other tools, they will start getting an outsized effect of the agent ecosystem, sort of making that the best path that they’ve seen work for users.”
This creates both an opportunity and a responsibility. Companies that create superior agent experiences may gain significant advantages as agents preferentially select their services. However, as Sebastian notes, there may be legitimate reasons for certain biases: “We have certain partners we like to work with, or we have certain best practices that we want people to follow… the agent should be more prescriptive and biased, whether it’s at the common sense and best practice level, or sometimes at a commercial level.”
For Weaviate users, this suggests the importance of not just making vector database capabilities available to agents, but making them exceptionally easy for agents to discover, understand, and utilize effectively.
The Importance of Debugging Visibility
One easily overlooked insight concerns how agents need to make their processes visible to humans for debugging. Matt Biilmann observed that sometimes primitive approaches work better than sophisticated ones:
“If you make an experience and on the one hand, you just stuff those, like the OpenAPI spec into ChatGPT’s context and start asking it to do stuff, you will probably get much worse results than going to Operator and asking it to do stuff through the web UI… really Rube Goldberg style system, but I’m pretty convinced that for most apps, that system right now will probably work better.”
Why? Because the visual interface provides crucial context for humans to understand what’s happening. This suggests that effective AX isn’t just about making things efficient for agents, but ensuring transparency for the humans they serve.
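One minimal way to build that transparency in is to have the agent keep a human-readable trace of every step it takes, so a failed run can be inspected after the fact. This is a sketch of the pattern only; the agent and step names are invented for illustration:

```python
# Minimal sketch of the transparency point: whatever the agent does,
# record a human-readable trace entry per step so a person can audit
# the run. Agent name and step names are invented for the example.

class TracedAgent:
    def __init__(self, name):
        self.name = name
        self.trace = []

    def step(self, action, detail):
        """Perform-and-record: every action leaves a line a human can read."""
        entry = f"[{self.name}] {action}: {detail}"
        self.trace.append(entry)
        return entry

agent = TracedAgent("deployer")
agent.step("fetch_config", "netlify.toml found")
agent.step("build", "exit code 1, missing dependency 'react'")
```

The trace plays the role the web UI plays in the Operator example: a surface where the human can see what actually happened, step by step.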
Training Data Limitations and Documentation
Connor highlighted a surprisingly significant concern: what happens when your company contributes to LLM training datasets but then changes its APIs? “We have our pull requests to the Gorilla repository, which is a part of the training data set for Meta’s Llama models… Now, as we’re talking about, I’m kind of like, oh, well, what if we change our APIs and it’s got that old knowledge?”
This reveals a new form of “documentation debt” that companies must manage — not just updating their docs, but providing clear signals to agents about what knowledge is deprecated. For Weaviate users, this emphasizes the importance of consistent API versioning practices.
The Comparative Learning Advantage of Agents
Sebastian highlighted how agents can learn differently than humans, potentially more efficiently: “If an agent already has that knowledge, then we could go like, ‘Okay, you know this thing? Yes. You know this thing? Yes… Oh, this is new. Okay. From now on, I use this as a basis and let’s move on.’”
Unlike humans who might resist changes to familiar patterns, agents can immediately adapt when given clear guidance, potentially leading to faster adoption of improved approaches.
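Sebastian's "you know this? yes, this is new" exchange amounts to computing a delta between what the agent learned at training time and the current API surface, then teaching only the difference. A minimal sketch, with invented endpoint names:

```python
# Sketch of the comparative-learning idea: rather than re-teaching the
# whole API, compute the delta between what the agent already knows and
# what exists today. Endpoint names are invented for illustration.

def knowledge_delta(agent_knows, current_api):
    known = set(agent_knows)
    current = set(current_api)
    return {
        "still_valid": sorted(known & current),  # no re-teaching needed
        "deprecated": sorted(known - current),   # the agent must unlearn these
        "new": sorted(current - known),          # only these need explaining
    }
```

This also addresses the training-data staleness concern above: the "deprecated" bucket is exactly the old knowledge baked into the model that needs an explicit unlearning signal.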
Practical Takeaways
For Beginners: Start with Documentation Optimization
The most immediate action you can take is reviewing your documentation with an agent-first mindset. As Connor noted about Weaviate’s experience: “We’ve gone through an exercise where we wanted to use Weaviate so that you could search through our documentation. And it wasn’t so straightforward, because we realized that the way we built our docs wasn’t very helpful for LLMs.”
Implementation steps:
- Test your existing documentation by asking an LLM to summarize how to use your product
- Identify where it struggles or makes incorrect assumptions
- Add explicit structure with clear headings, examples, and step-by-step instructions
- Consider creating a dedicated llms.txt file as a starting point
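The second step above can be partially automated. This is a crude heuristic sketch, not a standard: a real audit would ask an LLM to summarize the page and compare its answer against ground truth, but even simple structural checks catch pages that give agents nothing to work with:

```python
import re

# Hedged sketch of a structural "agent-friendliness" check for one
# markdown doc page. The heuristics are assumptions for illustration.

def audit_doc(markdown):
    """Return a list of structural issues likely to trip up an agent."""
    issues = []
    if not re.search(r"^#{1,3} ", markdown, re.M):
        issues.append("no headings: agents can't navigate sections")
    if "```" not in markdown:
        issues.append("no code examples: agents must guess API usage")
    if not re.search(r"^(\d+\.|[-*]) ", markdown, re.M):
        issues.append("no step lists: procedures are buried in prose")
    return issues
```

Running this across a docs tree gives a rough priority list of which pages to restructure first.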
For Intermediate Users: Design Agent-Friendly APIs
If you’re building or maintaining APIs that agents might use, consider how to make them more agent-friendly. Matt Biilmann suggests: “It’s kind of an aspect of that, but on the flip side, I also totally agree, right? Like docs is one of the obvious areas where it feels like we need to figure out what the best way, what the best standard way for an LLM to understand the current behavior of our system.”
Implementation steps:
- Create consistent parameter naming across endpoints
- Provide explicit error messages that explain not just what went wrong but how to fix it
- Design your API to handle partial information and ambiguity gracefully
- Implement versioning that clearly signals when methods are deprecated
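Steps two and four above can be sketched concretely. The error-body field names below are invented, not an established standard; the `Sunset` header is real (RFC 8594) and `Deprecation` is a proposed IETF header, but treating them as an agent-facing "abandon this endpoint" signal is an assumption in the spirit of Sebastian's status-code analogy:

```python
# Sketch of an agent-friendly error body (what failed AND how to recover)
# and an explicit deprecation signal. Field names are invented; the
# Sunset header follows RFC 8594, Deprecation is an IETF draft header.

def error_response(code, message, fix, docs_url=None):
    body = {"error": {"code": code, "message": message, "how_to_fix": fix}}
    if docs_url:
        body["error"]["docs"] = docs_url
    return body

def deprecation_headers(sunset_date, successor_url):
    """Headers an agent can read to abandon an old endpoint for the new one."""
    return {
        "Deprecation": "true",
        "Sunset": sunset_date,
        "Link": f'<{successor_url}>; rel="successor-version"',
    }
```

The point of `how_to_fix` is that an agent, unlike a human, will act on it immediately and retry, so the remediation text is effectively part of the API contract.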
For Advanced Practitioners: Build Agent-Agent Ecosystems
For those pushing the boundaries, consider how specialized agents might communicate with each other. As Matt Biilmann described: “We put a little button in our UI that just says, why did it fail? So it’ll take the build log, the code context, run through an LLM and give you like a diagnostic and a potential solution.”
Implementation steps:
- Create specialized micro-agents for common tasks in your domain
- Design structured communication formats between agents that maintain human readability
- Implement feedback mechanisms that help agents learn which approaches work best
- Consider how vector embeddings might enable more efficient agent-agent communication
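The delegation pattern Matt describes can be sketched as a generalist workflow agent that hands failures to a specialized diagnostic micro-agent. The string-matching diagnosis below is a stand-in for the LLM call Netlify's "why did it fail?" button makes; all names and the log format are invented for illustration:

```python
# Sketch of generalist-to-specialist delegation. The keyword diagnosis
# stands in for an LLM call; names and log format are invented.

def diagnostic_agent(build_log):
    """Specialized micro-agent: turn a raw build log into diagnosis + fix."""
    if "Cannot find module" in build_log:
        missing = build_log.split("'")[1]
        return {"diagnosis": f"missing dependency {missing}", "fix": f"npm install {missing}"}
    return {"diagnosis": "unknown failure", "fix": "escalate to a human"}

def generalist_agent(deploy):
    """Generalist: run the workflow, delegate any failure to the specialist."""
    result = deploy()
    if result["status"] != "ok":
        return {"status": "failed", "report": diagnostic_agent(result["log"])}
    return result
```

The division of labor keeps the generalist's context small: it never needs to understand build logs, only where to send them.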
Weaviate-Specific Relevance
The conversation around Agent Experience has profound implications for Weaviate users. As a vector database, Weaviate already operates in the realm of embedding-based interactions that agents naturally use for understanding semantics.
Sebastian’s point about agents potentially communicating through vector embeddings rather than text aligns perfectly with Weaviate’s core capabilities. This suggests Weaviate could play a central role in agent-to-agent communication systems, serving as both the storage and retrieval mechanism for semantic understanding between agents.
The discussion about optimizing documentation for agents directly applies to how Weaviate’s own documentation should evolve. The hybrid search capabilities that combine keyword and vector search are particularly relevant here — the same technology that powers Weaviate’s search could be applied to helping agents better understand how to use Weaviate itself.
For enterprises using Weaviate’s multi-tenancy and RBAC features, there’s an additional consideration: how will you manage agent permissions within your vector database? As agents become more autonomous, the security and access control features of Weaviate become even more critical.
Conclusion
The emergence of Agent Experience as a discipline marks a pivotal moment in how we design and build software systems. As Matt Biilmann frames it: “In a few years, you won’t be able to deliver good developer experience if you’re not delivering great agent experience.”
You can find the podcast at the following links!