Simply SharePoint
SharePoint is everywhere — but good guidance for real users? Not so much. I’m Liza Tinker: consultant, trainer, and the one teams call when things get messy.
This podcast is your go-to for real talk, real solutions, and a whole lot of clarity — minus the jargon. Whether you're managing sites, cleaning up document chaos, or just trying to make things work, you’ll find practical tips and insight from the creator of Fix the Mess™, the training series helping real people get SharePoint under control.
The Hard Truths of Building a SharePoint Agent
Building a SharePoint agent looks simple in the marketing demos — just click Create, add your site, and you’re done. But anyone who has tried to build a truly accurate, enterprise-ready agent knows the reality is far more complex.
In this episode, I share what I’ve learned after spending the past several weeks deep in the trenches building a real SharePoint agent. This is not the polished, theoretical version of AI adoption. This is the messy, difficult, strategic reality that most organisations discover far too late.
I break down the three hard truths every organisation must understand before they even think about deploying an agent:
The Metadata Mountain — Why AI collapses without a solid information architecture, and how metadata determines every answer your agent gives.
The Unicorn Skills Gap — The rare combination of SharePoint architecture, IA, prompting, business analysis, and governance expertise required to build an agent that actually works.
The Long Road of Governance and Trust — Why the real work begins after the agent is built, and how to create a sustainable model for accuracy, oversight, and responsible use.
This episode goes beyond the hype to explore the realities of content readiness, skills gaps, organisational maturity, and the long road of governance that follows launch. If you're responsible for Microsoft 365 strategy, AI adoption, information architecture, or governance — this conversation will give you the clarity you need to plan effectively.
Before you click “Create agent,” listen to this. This is the view from the trenches, and it might change the way you approach AI entirely.
The Hard Truths of Building a SharePoint Agent
Hello and welcome back to the show. For years on this podcast and on my blog, I’ve been talking about a fundamental principle: there is no AI without IA. You cannot have effective Artificial Intelligence without a solid Information Architecture. It’s a concept I’ve championed because I’ve seen, time and again, how critical a well-organized digital foundation is.
Well, for the past several weeks, I’ve moved from theory to practice. I am knee-deep in the trenches, building a custom AI agent in SharePoint. And I can tell you, everything I’ve been saying is not just being validated; it’s being magnified. The marketing demos make it look like a simple, three-click process. The reality is a demanding, complex, and deeply strategic undertaking.
This experience has crystallized my thinking into what I’m calling the three hard truths of building a SharePoint agent. These aren’t new discoveries for me, but rather the real-world, practical confirmation of the challenges I’ve been anticipating. Today, I’m going to take you beyond the hype and share a view from the trenches—what it really takes to build an agent that works, and why the foundational work is more important than ever.
The first hard truth, and this is where the rubber truly meets the road, is what I call the Metadata Mountain.
As I’ve said for years, your AI is only as good as the data you feed it. This isn’t a new concept, but seeing it play out with a live agent is a stark and powerful confirmation. When you first generate an agent, it’s essentially a powerful but ignorant engine pointed at a chaotic library of raw text. It’s trying to find patterns, but it lacks context. And as I’m seeing in my current project, a lack of context leads directly to unreliable, and frankly, unusable answers.
The project I’m working on requires the agent to answer very specific questions about policies and procedures. In the initial build, I asked it a straightforward question: “What is the approval process for a contract over $50,000?” The agent returned a paragraph from a document from 2019 that was completely out of date. It missed the current, correct policy because that policy was formatted in a table within a newer document, and the agent couldn’t discern the hierarchy or recency.
This is the classic probabilistic failure of LLMs. They make an educated guess. But in a business context, particularly around compliance or finance, an educated guess is a liability. You need deterministic, factual answers. And this is where the Information Architecture work becomes non-negotiable. I went back in, and for that content set, I defined and applied a clear metadata schema: columns for Policy Name, Effective Date, Status (like ‘Active’ or ‘Archived’), and Applies To Department.
The moment I did that and asked the exact same question, the agent’s response was transformed. It didn’t just give me the right answer; it was able to cite the source, state the effective date, and confirm it was the active policy. It went from a liability to a reliable tool. This process is a perfect, practical demonstration of the IA before AI principle. It’s not about discovering that metadata is important; it’s about confirming how critically important it is and the sheer amount of work required to get it right. It involves a deep content audit, stakeholder interviews to define the right taxonomy, and a clear plan for applying that metadata retroactively and to all new content moving forward. This isn’t a step you can skip. It is the entire foundation.
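To make that concrete, here's a rough sketch in Python. To be clear, this is not how the agent works under the hood, and the policy wording, field names, and dates are invented for illustration. It's just a toy that shows why those columns matter: once Status and Effective Date exist as structured fields, "which policy applies?" stops being a guess over raw text and becomes a simple filter.

```python
from datetime import date

# Toy illustration only: each dict stands in for a document plus the metadata
# columns described above (Policy Name, Effective Date, Status, Applies To Department).
# The policy wording and dates are made up.
documents = [
    {"policy_name": "Contract Approval Policy", "effective_date": date(2019, 3, 1),
     "status": "Archived", "applies_to_department": "Finance",
     "text": "Contracts over $50,000 require sign-off from the department head."},
    {"policy_name": "Contract Approval Policy", "effective_date": date(2024, 7, 1),
     "status": "Active", "applies_to_department": "Finance",
     "text": "Contracts over $50,000 require sign-off from the contracts committee."},
]

def current_policy(docs, department):
    """Return the newest active policy for a department, or None if there isn't one."""
    candidates = [d for d in docs
                  if d["status"] == "Active" and d["applies_to_department"] == department]
    # With Status and Effective Date populated, the question becomes a
    # deterministic filter-and-sort rather than a best guess over raw text.
    return max(candidates, key=lambda d: d["effective_date"], default=None)

answer = current_policy(documents, "Finance")
print(answer["policy_name"], answer["effective_date"], answer["text"])
```

Strip those fields out and all that's left to lean on is the raw text, which is exactly the situation that produced that out-of-date 2019 answer.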
This leads directly to the second hard truth, which is the Unicorn Skills Gap.
As I’m navigating this project, it has become abundantly clear that the skillset required to do this work successfully is extraordinarily rare. The marketing suggests that any business user can just click a button and build an agent. That is dangerously misleading. The person, or more realistically, the team that can pull this off needs to be a unicorn—a mythical beast of diverse and deep expertise.
I’ve been mapping out the roles and responsibilities on my own project, and it’s a complex web. First, you need a SharePoint Architect. Not just someone who knows how to create a site, but someone who understands modern information architecture, hub sites, permissions models, and content types at a deep, strategic level. They lay the foundation.
Second, you need a Business Analyst. This is the person who bridges the gap between the technology and the business. They have to sit down with stakeholders and ask the hard questions: What problems are we actually trying to solve? What are the top 20 questions people ask? How will users phrase their prompts? What logic and reasoning does the agent need to follow? This is a crucial, and often overlooked, role.
Third, you need the AI Specialist. This is the person who lives in Copilot Studio. They are the prompt engineers, the ones who can write the instructions that guide the agent’s behavior. They understand how to connect knowledge sources, how to test and validate the agent’s responses, and how to interpret the analytics to see where it’s failing.
And finally, you need a Project Lead or a Cross-Functional Leader. This is the conductor of the orchestra. They have to coordinate with the content owners, who are the subject matter experts. They have to work with IT to ensure the environment is stable and secure. They have to manage the expectations of business leaders. This role is pure communication, collaboration, and project management.
As you can hear, this is not a one-person job. The idea that a single employee can just add “agent builder” to their list of responsibilities is a fantasy. The reality is that a successful agent project is a significant organizational commitment that requires the formation of a dedicated, cross-functional team. It’s a strategic initiative, not a side project.
And that brings me to the third and final hard truth, which is the Long Road of Governance and Trust.
Even if you conquer the Metadata Mountain and assemble your unicorn team, the work is far from over. In fact, the most challenging phase is the one that never ends: the ongoing governance and the cultivation of trust. This is what I’m planning for now, because a successful launch is meaningless without a plan for Day 2 and beyond.
The governance question is twofold. First, there’s content governance. Who is responsible for the information the agent uses? When a policy is updated, who ensures the old document is archived and the agent is pointed to the new one? Without a clear and enforced content lifecycle, your agent’s knowledge base will inevitably decay. Its answers will become outdated, and it will transform from a valuable asset into a dangerous liability. This requires defined roles, clear accountability, and automated processes where possible.
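If you want a feel for what "automated where possible" could look like, here's a small sketch. I'm assuming you can export the library items and their metadata through whatever tooling your team already uses; the item shape, the field names, and the one-year review window are all assumptions for illustration, not a prescription.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)          # assumption: policies get reviewed at least yearly
REQUIRED_FIELDS = ["policy_name", "effective_date", "status"]

def lifecycle_issues(items, today=None):
    """Flag knowledge-source items that need a content owner's attention."""
    today = today or date.today()
    issues = []
    for item in items:
        missing = [f for f in REQUIRED_FIELDS if not item.get(f)]
        if missing:
            issues.append((item.get("policy_name", "<untitled>"), f"missing metadata: {missing}"))
        elif item["status"] == "Archived":
            issues.append((item["policy_name"], "archived but still in the agent's knowledge source"))
        elif today - item["effective_date"] > REVIEW_INTERVAL:
            issues.append((item["policy_name"], "past its review window"))
    return issues
```

The report itself is the easy part; the hard part is the accountability behind it, meaning a named owner who actually acts on each flag.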
Second, there’s agent governance. Who monitors the agent’s performance? Who reviews the questions users are asking, especially the ones the agent gets wrong? Microsoft’s own best practices state that agent building is an iterative process. You have to be constantly gathering feedback and refining the agent. This isn’t a part-time job; it’s a continuous operational commitment.
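Here's one way I'm thinking about making that iteration routine rather than heroic. This is a hypothetical sketch: ask_agent is a stand-in for however you query your own agent, since there's no single off-the-shelf call for that, and the questions and expected phrases are placeholders you'd replace with the real ones your users ask.

```python
def ask_agent(question: str) -> str:
    # Placeholder: wire this up to however you query your own agent.
    raise NotImplementedError

# A standing regression set: real user questions paired with a phrase any
# correct answer must contain. Re-run it after every content or instruction change.
REGRESSION_SET = [
    ("What is the approval process for a contract over $50,000?", "contracts committee"),
    # ...add the questions your users actually ask
]

def run_regression():
    failures = []
    for question, must_contain in REGRESSION_SET:
        answer = ask_agent(question)
        if must_contain.lower() not in answer.lower():
            failures.append((question, answer))
    return failures  # hand these to whoever owns reviewing the agent's misses
```

None of this removes the human from the loop; it just makes sure the misses surface somewhere a human will actually see them.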
And all of this feeds into the ultimate goal: trust. But what does trust mean in the context of a probabilistic AI? This is a question I’m grappling with. My current thinking is that we must be very careful not to position the agent as an infallible oracle. We have to educate our users that it is a powerful assistant, a tool to help them find information faster, but not a replacement for their own critical thinking. The final judgment must always rest with the human.
We need to build a culture of “trust but verify.” The agent can get you 80% of the way there, but you are still responsible for that final 20%. This is a critical message for user adoption and risk management. We are in the early, experimental days of this technology. We have to be transparent about its limitations while we work to improve its capabilities.
So, there you have it. The three hard truths I’m seeing play out in the real world as I build this SharePoint agent: the Metadata Mountain, the Unicorn Skills Gap, and the Long Road of Governance and Trust. These aren’t reasons to give up on building agents. They are the strategic pillars you must have in place to succeed.
The simple “Create an agent” button is a gateway, not a destination. It opens the door to a challenging but ultimately rewarding process of transforming your organization’s information into true, actionable knowledge. By understanding these truths, by respecting the complexity of the task, and by committing to the foundational work of Information Architecture, you can move beyond the hype and build an AI assistant that delivers lasting, strategic value.
That’s all for this episode. This is a journey I’m still on, and I’ll be sharing more lessons from the trenches as I learn them. Thanks for listening, and I’ll see you next time.