Vertex AI RAG Engine with Lewis Liu and Bob van Luijt — Weaviate Podcast #112!
Weaviate Podcast #112 dives into the evolving landscape of enterprise AI with Bob van Luijt from Weaviate and Lewis Liu from Google. The conversation focuses in particular on the new Vertex AI RAG Engine and on shifting perspectives about AI systems and data management.
The Shift in Knowledge Representation
The conversation began with a compelling evolution in thinking about knowledge representation. Bob shared his journey from being a strong advocate of formal knowledge graphs to embracing more flexible approaches enabled by large language models. He illustrated this shift with a telling example from his consulting days, where even within a single company, different departments couldn’t agree on the definition of a “customer.” This challenge of rigid definitions extends globally — as Lewis pointed out, even something as seemingly simple as defining a “lake” becomes complex when considering different cultural and linguistic contexts. Google spent years building its knowledge graph with strict ontologies, but the emergence of powerful language models is changing this paradigm, allowing for more natural and adaptable ways of understanding relationships and context.
The Emergence of New Programming Paradigms
While there’s much discussion about prompt engineering replacing traditional programming, the reality appears more nuanced. Bob made an insightful observation about the evolution of programming languages in the AI era. Using the example of trying to get an LLM to draw a precise circle, he demonstrated how we eventually return to needing formal specifications. Rather than eliminating programming languages, we might be witnessing the birth of new kinds of formal languages that bridge the gap between natural language and machine instructions. This synthesis suggests a future where programming becomes more accessible while maintaining the precision needed for complex systems.
Generative Feedback Loops: A New Approach to Data Quality
One of the most innovative concepts discussed was the idea of “generative feedback loops.” Unlike traditional RAG systems that operate in one direction (query → retrieve → generate), these loops create a continuous cycle of data improvement. Bob shared a practical example from manufacturing, where factories worldwide record data in different formats (Fahrenheit vs. Celsius, British vs. American English). Instead of relying on human data stewards to standardize this information manually, AI systems could automatically detect and correct inconsistencies, potentially solving long-standing master data management challenges.
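The manufacturing example can be sketched in a few lines. This is a minimal illustration, not Weaviate's implementation: `llm_normalize` stands in for what would be an LLM call in a real generative feedback loop, and the record fields and rules are invented for the example. The key idea is the write-back step — the loop doesn't just read data at query time, it pushes standardized records back into the store.

```python
# Minimal sketch of a generative feedback loop. Instead of only reading data
# (query -> retrieve -> generate), the system also writes corrected records
# back to the store, so the next retrieval sees consistent data.

def llm_normalize(record: dict) -> dict:
    """Stand-in for an LLM that rewrites a record into canonical form:
    temperatures in Celsius, American English spellings. In a real system
    this would be a model call, not hard-coded rules."""
    fixed = dict(record)
    if fixed.get("temp_unit") == "F":
        fixed["temp"] = round((fixed["temp"] - 32) * 5 / 9, 1)
        fixed["temp_unit"] = "C"
    spelling = {"aluminium": "aluminum", "colour": "color"}
    fixed["material"] = spelling.get(fixed["material"], fixed["material"])
    return fixed

def feedback_loop(store: list[dict]) -> list[dict]:
    """Normalize every record and write the results back."""
    return [llm_normalize(r) for r in store]

# Two factories recording the same process in different conventions.
factory_data = [
    {"site": "Detroit", "temp": 212.0, "temp_unit": "F", "material": "aluminum"},
    {"site": "Manchester", "temp": 100.0, "temp_unit": "C", "material": "aluminium"},
]
cleaned = feedback_loop(factory_data)
```

After the loop runs, both records agree on units and spelling — the kind of standardization a human data steward would otherwise do by hand.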
The Evolution of RAG in Enterprise Settings
Lewis provided valuable insights into current trends, noting that while prompt engineering and fine-tuning are becoming less prevalent, RAG (Retrieval-Augmented Generation) is gaining prominence. The Vertex AI RAG Engine aims to help developers quickly reach 90–95% quality, with the remaining gains coming from engineering work rather than model tuning. This suggests a maturation in how enterprises approach AI implementation, moving from experimental efforts to structured, production-ready systems.
The Challenge of Knowledge Boundaries
A particularly interesting challenge emerged in the discussion: how to ensure enterprise AI systems use official internal data rather than potentially outdated training data. For example, when asking about Alphabet’s quarterly earnings, how do you ensure the model uses internal, authoritative sources rather than its training data? Lewis explained their approach to training models to explicitly distinguish between built-in knowledge and enterprise-specific information, highlighting the complexity of building trustworthy AI systems for enterprise use.
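One common way to nudge a model toward authoritative internal data is to label retrieved enterprise passages explicitly in the prompt and instruct the model to prefer them over built-in knowledge. The sketch below illustrates that general pattern only — the function name, field names, and prompt wording are invented for this example and are not the approach Lewis described Google training into the models themselves.

```python
# Hypothetical prompt-construction helper: mark each retrieved passage with
# its source and freshness, then instruct the model to answer only from
# those passages rather than from training data.

def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    context = "\n".join(
        f"[source: {p['source']}, as_of: {p['as_of']}]\n{p['text']}"
        for p in passages
    )
    return (
        "Answer using ONLY the enterprise sources below. If they do not "
        "contain the answer, say so rather than using built-in knowledge.\n\n"
        f"Enterprise sources:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "What were Alphabet's most recent quarterly earnings?",
    [{"source": "finance/q3-report", "as_of": "2024-10",
      "text": "Quarterly results as filed internally."}],
)
```

Prompt-level instructions like this are brittle on their own, which is exactly why Lewis's point about training models to distinguish built-in from enterprise-specific knowledge matters.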
Conclusion
The discussion concluded with an optimistic outlook on the future of enterprise AI. The rapid implementation capabilities of modern RAG systems, combined with the potential for more sophisticated architectures using multiple models in GAN-like configurations, suggest we’re entering a new era of AI capability. As Lewis noted, we’re moving toward systems where models can naturally understand their limitations and appropriately leverage external knowledge sources, while maintaining the balance between creativity and factual accuracy that enterprise applications demand.