Rob Beeler + Kate Chakina, GM of Forecasting at Burt Intelligence
When news started circulating that Yieldex was being sunset, some publishers didn’t wait around for the formal announcement; they were already looking for something more adaptable. That’s where this conversation with Kate Chakina, GM of Forecasting at Burt Intelligence, starts. Burt has been actively working with teams in transition, helping them move off the Yieldex platform with as little disruption as possible.
In our conversation, Kate walks through what that process looks like: realistic timelines, the bumps that can slow teams down, and how Burt approaches accuracy and transparency differently. Forecasting isn’t just a feature for them; it’s the core of the product. From machine learning models to behavioral insights, they’re not just predicting the future. They’re making sure publishers understand why the forecast looks the way it does.
We also talked about broader shifts in forecasting expectations: why built-in tools like GAM’s aren’t enough for complex environments, the challenges of integrating third-party data, and why forecasting is less about eliminating volatility and more about staying ahead of it. If you’re navigating a transition away from Yieldex (or just rethinking how your team approaches forecasting), there’s a lot here worth digging into.
ROB: We’re hearing through members of the community that Yieldex is being sunset. Can you share with us what you know?
CHAKINA: Yes, we’re hearing the same. From what we understand, September looks to be the likely timeline. We’re already working with several publishers who are transitioning off Yieldex right now. Some of them had actually moved to us even before any formal sunset news, simply because they were looking for something more flexible or better suited to where their businesses were heading. Our focus has been on making that switch as smooth as possible—minimizing disruption, ensuring forecasts remain reliable, and helping teams get up to speed without losing momentum.
ROB: You’ve worked with some publishers moving off the Yieldex platform. What are some things people should anticipate in making the transition? How long does it take to implement?
CHAKINA: Typically, we tell teams to expect eight to 12 weeks for implementation. A lot depends on internal complexity.
For instance:
- How many systems are in play
- How their workflows are set up
- What level of granularity they expect from the forecasts
Some of the common bumps along the way involve aligning expectations, onboarding people into a new UI, validating the results, and just getting comfortable with a new way of doing things. We try to stay close throughout the whole process. It’s not just about setup; it’s about making sure the output is actually usable and trustworthy from day one.
ROB: As I understand it, one of the features people liked about Yieldex was that its forecasting was more accurate. How does Burt compare?
CHAKINA: Forecasting has always been the core of the product, so accuracy is a top priority for us, and it’s something we invest a lot of time in to make sure we’re the best we can be. We use machine learning models, behavioral patterns, and historical performance to drive our forecasts, and we’re constantly tuning based on how those models perform against actuals.
We also calculate forecasted unique users, which adds another layer of insight for planning. But beyond just accuracy, we try to give teams more visibility into why the forecast looks the way it does, so they’re not just taking it on faith.
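Burt hasn’t published its model internals, but the feedback loop Kate describes (scoring forecasts against delivered actuals and retuning) can be made concrete. Here’s a minimal Python sketch of one common accuracy check, mean absolute percentage error; all field names and numbers are hypothetical, not Burt’s actual code.

```python
# Minimal sketch of scoring a forecast against delivered actuals.
# Numbers are hypothetical; Burt's internal models are not public.

def mape(forecast: list[float], actual: list[float]) -> float:
    """Mean absolute percentage error between forecasted and delivered impressions."""
    errors = [abs(f - a) / a for f, a in zip(forecast, actual) if a > 0]
    return 100 * sum(errors) / len(errors)

# Daily forecasted vs. delivered impressions for one ad unit (illustrative only).
forecast = [1_200_000, 1_150_000, 980_000, 1_300_000]
actual = [1_180_000, 1_220_000, 1_010_000, 1_240_000]

print(f"MAPE: {mape(forecast, actual):.1f}%")  # drift here is a retraining signal
```

In practice a team would track an error metric like this per ad unit and per targeting segment, since aggregate accuracy can hide badly behaved pockets of inventory.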
ROB: Is it fair to say that the tools to forecast have changed? There’s the cloud, AI, machine learning, etc.
CHAKINA: For sure. We’ve had machine learning in place for a long time, but the ecosystem around it has come a long way. The cloud makes things faster and more scalable, sure. But honestly, it’s things like cleaner log-level data, better data pipelines, and tools that help explain why a forecast looks the way it does that have made the real difference. AI is in the mix, of course, especially around pattern recognition and behavioral modeling.
ROB: Cool, but Google Ad Manager has forecasting built in. What are your thoughts?
CHAKINA: That’s a totally fair question. GAM’s built-in forecasting is a solid starting point for standard use cases, but many publishers require more depth and flexibility than what’s available out of the box. One of the consistent pain points we hear from publishers is the lack of transparency; when something looks off, it’s hard to know why. And while GAM handles a lot well, we’ve seen accuracy start to fall off in more complex or narrowly targeted forecasts.
Additionally, many of our partners operate within complex ad tech stacks, including proprietary systems, and need a solution that can integrate seamlessly into their infrastructure.
ROB: What is the state of third-party data aggregation? Easy? Hard? Is there another level?
CHAKINA: It’s still hard. Even with better APIs and more structured data, integrating external signals into forecasting is rarely plug-and-play. You’ve got formatting issues, inconsistent taxonomy, and delays to contend with. Honestly, we don’t try to bring in everything. We mostly focus on integrating with order management systems and the core platforms our customers actually rely on day to day. We’d rather get the fundamentals right than chase every data source just because it’s available.
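To make the taxonomy point concrete, here’s a minimal sketch of the kind of mapping layer a third-party integration typically needs before external records can feed a forecast. The source systems, labels, and field names are hypothetical illustrations, not Burt’s actual integration code.

```python
# Minimal sketch of normalizing inconsistent taxonomy from third-party feeds.
# All source names, labels, and fields are hypothetical.

# Each external system labels the same ad product differently.
TAXONOMY_MAP = {
    "oms_a": {"display-300x250": "medium_rectangle", "vid_preroll": "preroll"},
    "oms_b": {"MedRect": "medium_rectangle", "PreRoll Video": "preroll"},
}

def normalize(source: str, records: list[dict]) -> list[dict]:
    """Map a source system's product labels onto one internal taxonomy."""
    mapping = TAXONOMY_MAP[source]
    out = []
    for rec in records:
        label = mapping.get(rec["product"])
        if label is None:
            # Unknown labels are a real integration cost: someone has to triage them.
            continue
        out.append({**rec, "product": label})
    return out

print(normalize("oms_b", [{"product": "MedRect", "impressions": 50_000}]))
```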
ROB: My main concern is website traffic, and getting less of it from search and social. How can forecasting help me offset a decline in traffic?
CHAKINA: Forecasting won’t fix the traffic problem, but it helps you respond to it more effectively. If your traffic shifts, forecasting can help you see how that affects your future inventory. For example, what’s going to be available, what’s at risk, where there’s opportunity, and so on. It gives teams a clearer picture of where things are trending so they can adjust pricing, reallocate inventory, or shift sales strategy accordingly. It’s less about preventing volatility and more about staying ahead of it.
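As an illustration of what “seeing how traffic affects future inventory” can mean in practice, here’s a minimal availability calculation under a projected traffic dip; the forecast, line items, and numbers are hypothetical.

```python
# Minimal sketch of turning a traffic-driven forecast into an availability view.
# All numbers and line items are hypothetical.

forecasted_impressions = 8_000_000  # next month, after a projected traffic dip

booked = {
    "homepage_takeover": 3_000_000,
    "ros_display": 4_500_000,
    "newsletter_sponsorship": 1_200_000,
}

available = forecasted_impressions - sum(booked.values())

print(f"Available: {available:,}")  # negative means booked goods are at risk
if available < 0:
    print("Overbooked: reprice, reallocate, or renegotiate before delivery slips.")
```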
ROB: What is the biggest misconception publishers have about forecasting today?
CHAKINA: One big one is expecting a forecast to be a static, definitive number. The reality is that forecasting is about understanding possibilities, not certainties. The best forecasts give you a range and help you reason through what could happen, not just what’s most likely.
Another is thinking that forecasting runs itself. The teams we work with who get the most out of it are the ones who engage with the process, ask questions, and use the forecast as part of a bigger planning conversation. It works best when it’s a dialogue, not just an output.
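Kate’s first point, that a forecast is a range rather than a single number, is easy to illustrate. Here’s a minimal sketch assuming you keep a history of how past forecasts compared to actuals; the ratios and volumes are hypothetical, and a real system would derive the interval more rigorously.

```python
# Minimal sketch of expressing a forecast as a range rather than a point estimate.
# Ratios and volumes are hypothetical.

import statistics

# How past actuals compared to their forecasts (actual / forecast; 1.0 = perfect).
past_ratios = [0.96, 1.03, 0.99, 1.07, 0.94, 1.01, 0.98, 1.05]

point_forecast = 2_000_000  # impressions for the period being planned
low = point_forecast * min(past_ratios)
high = point_forecast * max(past_ratios)
mid = point_forecast * statistics.median(past_ratios)

# A production system would use proper quantiles; min/max keeps the idea visible.
print(f"Likely: {mid:,.0f}   Range: {low:,.0f} - {high:,.0f}")
```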
ROB: The ad tech world has changed a lot in the last decade. Not just in terms of tools, but in how teams work together. What kind of working relationship do you think publishers actually value now?
CHAKINA: I think people want real partnership, not just a vendor who drops off a tool and disappears. The teams we work with want someone who’s reachable, adaptable, and understands how decisions get made inside a media org. It’s less about having every feature out of the gate and more about knowing someone is on the other end who’s actually paying attention. We try to be that: quick to respond, open to feedback, and grounded in the reality of what publishers are up against.
ROB: Forecasts are rarely 100% right. What do you do when they’re wrong?
CHAKINA: Honestly? We dig in. The important thing isn’t pretending it never happens; it’s figuring out why it happened. Was it a traffic spike? Something mis-tagged? A campaign that came in differently than expected? Forecasting isn’t a crystal ball; it’s a system that needs feedback to get better. We’ve built in checks and diagnostics so we can trace what changed and adjust quickly.