
Analytics didn’t break.
But at some point, it stopped keeping up with how people expect to get answers.
We didn’t set out to rethink analytics, but we had to admit something wasn’t working. For a long time, we were in a good place. The data was there, the workflows were familiar, and teams had more visibility than ever before. It felt like the industry had largely solved the problem of answering the tough “why” questions with traditional analytics platforms.
And yet, when you sat with teams and watched how work actually got done, something didn’t quite add up.
The same questions kept coming up. Why did this KPI change? What actually happened? Where should we look first? Even when the right data existed somewhere in the system, with all the right ways to visualize it, it still took too long, and too much effort, to get to an answer anyone felt confident acting on.
You could add dashboards. You could add alerts. You could layer in AI assistants.
But that core problem never really went away.
That’s the problem we ended up building Felix Agentic to solve.
The shift didn’t start inside analytics.
For a while, we framed this as an analytics problem. It wasn’t.
What had changed was expectation.
Ten years ago, few would have disagreed that we helped make getting to the “why” behind data simpler.
But now everyone, inside and outside the enterprise, has grown used to interacting with systems that make getting answers feel even simpler. You ask a question, you get something useful back, and you move on. It is fast, conversational, and it does not require you to understand how the system works underneath.
That experience does not stay confined to personal tools. It resets expectations everywhere.
Once people feel that level of ease, they bring it into their work. Workflows that used to feel normal, like digging through dashboards, building segments, and stitching together context, start to feel arduous.
At the same time, the technology available crossed an important threshold.
A few years ago, this would not have worked. Models could summarize, but they could not truly analyze. They lacked the multi-step reasoning, tool calling, and larger context windows that make it possible today.
Now, they can break down a problem, determine what context they need, retrieve that context in pieces, and reason through a sequence of steps to arrive at an answer.
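To make that loop concrete, here is a minimal sketch of what multi-step, tool-using reasoning can look like. Everything in it is hypothetical: the tool names, the stubbed model call, and the data shapes are illustrations of the pattern, not how Felix is built.

```python
# A toy agent loop: the model decides which tool to call, observes the
# result, and keeps going until it can answer. `call_model` is a stub
# standing in for a real LLM API; the tools fake an analytics backend.

def query_metric(name: str, days: int) -> dict:
    """Hypothetical tool: return a short daily series for a metric."""
    return {"metric": name, "values": [0.031, 0.030, 0.024][:days]}

def get_annotations(days: int) -> list[str]:
    """Hypothetical tool: return recent annotations (releases, campaigns)."""
    return ["2025-05-12: checkout redesign shipped to 50% of traffic"]

TOOLS = {"query_metric": query_metric, "get_annotations": get_annotations}

def call_model(question: str, observations: list) -> dict:
    """Stub for the LLM. A real agent would send the question plus all
    observations so far and get back either a tool request or an answer."""
    if not observations:
        return {"tool": "query_metric", "args": {"name": "conversion_rate", "days": 3}}
    if len(observations) == 1:
        return {"tool": "get_annotations", "args": {"days": 3}}
    return {"answer": "Conversion dipped after the checkout redesign rollout."}

def investigate(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        decision = call_model(question, observations)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        observations.append(result)  # context retrieved piece by piece
    return "No confident answer within the step budget."

print(investigate("Why did conversion rate dip this week?"))
```

The point of the sketch is the shape of the loop, not the specifics: the model plans, pulls context in pieces, and only answers once it has enough to reason from.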
That combination, shifting expectations on one side and new technical capability on the other, is what created the opportunity.
There wasn’t a breaking point.
There was no single moment where the existing model collapsed.
Instead, what we saw was a gradual shift.
Teams had less time. Fewer resources. More pressure to deliver outcomes quickly.
And far less tolerance for workflows that required moving between tools just to get to an answer.
The expectation was not better analytics.
It was faster, more reliable understanding with less effort.
And that is a fundamentally different problem.
We thought this would be straightforward. It wasn’t.
Early on, we thought that giving the system the most important events, along with some annotations, would be enough for it to answer questions.
That assumption broke quickly.
Digital experience data captures everything. Every click, API call, page load, and interaction. But most of that data, in isolation, does not mean much. This is very different from other analytics systems, where everything is neatly tagged, intentional, and well defined.
A click on an SVG element or an API call to a seemingly random endpoint tells you very little on its own.
We realized the problem was not just about building an agentic analyst with the right tools to query the data.
If we only wanted the agent to report the “facts” (What is my conversion rate? How is my funnel performing? What are people clicking on?), we probably could have stopped with an agent that could build queries.
But to get it to truly behave like an analyst on our customers’ teams, we realized the problem was teaching this agentic analyst how it should think, and what it should know about the data, which differs entirely from customer to customer.
That required something we had not fully accounted for. A highly sophisticated context architecture. Not just more data, but a way to truly make use of the data, and deliver sophisticated interpretations of the results.
That turned out to be significantly harder than we expected, and far more important than anything else we were building.
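To give a feel for what that means in practice, here is a simplified sketch of a context layer: raw events get assigned customer-specific business meaning before the agent reasons over them. The event shapes and mappings are invented for illustration; the actual context architecture behind Felix is far richer than this.

```python
# A toy context layer: raw digital experience events mean very little on
# their own, so a per-customer semantic map assigns them business meaning
# before the agent ever reasons over them. All names here are invented.

RAW_EVENTS = [
    {"type": "click", "selector": "svg#chev-right", "page": "/checkout"},
    {"type": "api_call", "endpoint": "/v2/cart/apply", "status": 422},
]

# Customer-specific knowledge, maintained separately from the event stream.
SEMANTIC_MAP = {
    ("click", "svg#chev-right"): "Expanded the order summary",
    ("api_call", "/v2/cart/apply"): "Tried to apply a promo code",
}

def enrich(event: dict) -> dict:
    """Attach business meaning to a raw event so it is worth reasoning about."""
    key = (event["type"], event.get("selector") or event.get("endpoint"))
    return {**event, "meaning": SEMANTIC_MAP.get(key, "Unmapped interaction")}

for event in RAW_EVENTS:
    print(enrich(event))
# The 422 on /v2/cart/apply now reads as a failed promo code attempt on
# checkout, which is something an analyst can actually investigate.
```

The enrichment itself is trivial here; the hard part, and the part that is unique to each customer, is building and maintaining the knowledge that makes it possible.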
Where agentic analytics can go wrong.
To start, you have to decide what it even means for someone, or something, to be an “analyst.” I don’t think an analyst is something that fetches my metrics. There’s already a faster, more stable way to do that: dashboards. And AI can help you build them.
Something becomes an analyst when there is meaningful drill-down, discovery, and interpretation of that data to derive insights and recommendations.
This is where things break if you don’t do it right.
If you just layer a Large Language Model on top and expect it to give insight and recommendations, you might as well have just copied and pasted a report into your favorite LLM and said “What do you think? How should we go make more money?”
Agentic systems will produce answers. The problem is those answers often sound reasonable, close enough that they do not immediately raise concern, all while being based on very little substance. “Your conversion rate is really high on this page today! You should find ways to send more traffic to this page!”
And in fast-moving environments, that can be enough.
The real danger is not clearly incorrect output. It is answers that feel right but are incomplete or subtly wrong, and get acted on without further validation.
At that point, the system is not just unhelpful. It is accelerating bad decisions.
Confidence without context is risk. And most teams won’t realize it until after they’ve acted on it.
The decisions we made along the way.
As we explored what this system could become, there were many directions we could have taken.
We talked about automating fixes, building A/B tests, closing the loop from insight to action.
Those ideas came up early and often.
But they all depend on an assumption we did not believe was true. That understanding is already solved.
It isn’t.
In order to take action, you need a very thoughtful and robust understanding. “Conversion rate is high on this page, send more traffic to this page!” That’s not truly actionable. Although it might make for a fun demo!
If the understanding is even slightly wrong or underthought, every action that follows becomes unreliable. In that context, automation does not create value. It amplifies mistakes.
So we made a deliberate decision.
Focus on understanding first.
Not a dashboard copilot. Not a summarization layer. Not something limited by pre-built reports.
We set out to build an autonomous analyst. Something that can investigate, explain, and make its reasoning visible in a way teams can validate and trust.
Why we didn’t try to do everything.
We could have expanded into adjacent areas to become a single platform that does everything end to end. Experimentation, direct code changes, real-time personalization, etc.
But those are deeply specialized domains. A/B testing alone is a complex discipline with mature platforms that focus entirely on doing it well.
Going down those paths risked turning into a check-the-box exercise, which would have made both sides worse.
Technology tends to cycle between two extremes: a single platform that promises everything, or an ecosystem of specialized solutions.
This new world of agentic AI seems to have users looking for both! They want a one-stop shop with best-in-breed agents all working with each other seamlessly.
Felix is designed to be that specialist for understanding. Understanding that enables action across your ecosystem.
We also had to rethink what “answers” mean.
At one point, we framed the goal as delivering perfect answers. Ask anything and get the right result every time.
A lot of stars have to line up to get the “perfect” answer every time. The expectation for an analytics agent should be a more collaborative process, just like with an analyst on your team, where follow-ups and clarifications are part of getting to the best possible answer. It is the same way you use your favorite LLMs every day: an initial question can seed the conversation, but you extract the most value when you refine with follow-ups.
This requires that each user understands how those answers were reached, whether they hold up, and how to refine them.
Analytics has always been iterative.
The difference now is that the system can participate in that process. It can surface reasoning, expose assumptions, and allow users to guide or correct it over time.
That is why Felix is not just an analyst.
It is also a partner.
The bet we’re making.
Felix Agentic is not about making analytics faster.
It is about reducing the effort required to get to understanding, increasing the reliability of that understanding, and making it accessible to more teams.
If the understanding is right, teams stop debating what happened and start acting on it.
That’s the difference.
And once you experience it, the old way starts to feel surprisingly slow.
If you want to see how this works in practice, you can watch the on-demand virtual launch.
We walk through how Felix Agentic investigates real experience data, how it explains what changed and why, and how teams move from debating signals to actually acting on them.






