Autonomous AI Can Build the Forecast, But Only If Your Data Can Survive the Audit
An interview on why data governance, auditability, and human accountability determine whether autonomous FP&A actually works.
The tension in 2026: CFOs want speed but demand control
By 2026, Artificial Intelligence (“AI”) is no longer a “future” topic in finance. Gartner reports that 59% of finance leaders say they use some form of AI in their finance function. CFOs also increasingly see Generative AI (“GenAI”) as useful for explaining forecast and budget variances, a practical, low-friction entry point into FP&A work.
But the leap from “AI that explains” to “AI that decides and acts” is where many finance organizations stall. Deloitte’s CFO Signals shows that finance leaders’ top barriers to enabling Generative AI include technical skills (65%), Generative AI fluency (53%), and risk of adoption (30%): a blend of capability gaps and risk aversion that becomes even sharper when autonomy enters the room.
And then there’s the unglamorous blocker: data quality. Gartner notes poor data quality costs organizations at least $12.9 million per year on average, a reminder that AI doesn’t fix messy inputs; it can amplify them.
Why Data Quality Is the Real Barrier to Autonomous AI
Against that backdrop, I spoke with Robyn Halbot, an EPM planning architect who has built both ML forecasting tools and enterprise planning applications, about why “good data” is the real reason CFOs hesitate to trust autonomous systems, and what autonomous FP&A looks like when done safely.
Robyn has delivered over forty planning and forecasting implementations and has seen firsthand why trust breaks down long before AI enters the process. Her view is simple: autonomy is not a feature. It’s an operating model that must be auditable.
Interview: Building Trust in Autonomous AI for FP&A
Andrew A. Ross:
Robyn, before we get into Autonomous AI, give us your quick origin story. How does your background connect to what people now call “Autonomous AI”?
Robyn Halbot:
I’ve spent fifteen years implementing planning and forecasting systems (OneStream, Anaplan, Oracle EPBCS, Prophix) across banking, retail, manufacturing, insurance, and the public sector. Over forty projects, about half at enterprise scale. I’ve been the person in the room when a CFO asks why the numbers don’t tie, or why the forecast is three weeks late, or why nobody trusts the model.
A few years ago, I co-founded a company that built ML-driven forecasting tools. We integrated macroeconomic indicators, ran sensitivity analyses, generated scenarios. The technology worked. But I couldn’t find product-market fit. I spent months pitching to CFOs and planning teams, and I kept running into the same walls: data quality issues, governance concerns, and honestly, a lack of clarity on what problem we were actually solving for them.
That experience shaped how I think about AI in finance. The hardest part was never the algorithm. It was the data. It was the governance. It was understanding what finance leaders actually need versus what technologists think they should want.
What’s being called “Autonomous AI” today is really an evolution of that same discipline. But the foundation is identical: if your data isn’t clean, if your definitions aren’t consistent, if your processes aren’t auditable, autonomy just scales your problems faster.
That’s why Gartner’s stat about $12.9M average cost from poor data quality isn’t abstract. It’s the tax organizations pay before AI even enters the picture.
Why Data Matters More Than the Model
Andrew:
Let’s go straight there. You’re saying the biggest lever isn’t the model, it’s the data?
Robyn:
In finance, absolutely. I’ve seen this play out dozens of times.
You can have the most sophisticated forecasting engine in the world, but if your product hierarchy changed mid-year and nobody updated the mappings, you’re forecasting against the wrong structure. If customer IDs don’t reconcile between your CRM and your ERP, your revenue attribution is fiction. If cost center ownership is ambiguous, your expense forecasts will get overridden by every department head who doesn’t trust them.
These aren’t edge cases. This is the norm in most organizations I work with.
Autonomous AI doesn’t fix broken master data. It doesn’t resolve inconsistent revenue recognition timing. It doesn’t reconcile your GL to your planning model. What it does, if the foundation is weak, is produce confident-sounding answers built on unreliable inputs. And that’s actually worse than no automation at all, because now you’ve got machine-generated fiction that looks authoritative.
The CFOs who get this right invest in the boring fundamentals first: common definitions, documented lineage, reconciliation controls, change visibility. It’s not glamorous, but it’s what makes AI trustworthy.
Why CFOs Hesitate to Trust Autonomous AI
Andrew:
CFOs can see what AI in finance can do, but letting Autonomous AI build the plan still makes them nervous. Why?
Robyn:
Because forecasting isn’t just a math problem; it’s an accountability problem.
When a planning analyst builds a forecast, they own it. They can explain why they assumed 3% growth instead of 5%. They can point to the customer conversations, the pipeline data, the operational constraints that informed their judgment. When the CFO presents that forecast to the board, there’s a human chain of reasoning they can trace.
When an AI drafts a variance explanation, that’s helpful; it saves time on routine analysis. But when an AI builds the plan and recommends headcount cuts or capex deferrals, the CFO needs to answer much harder questions: What data did it use? What assumptions did it make? How did it weigh trade-offs? Can I defend this to the audit committee?
Deloitte’s survey captures this tension. It’s not that CFOs don’t want automation; they do. It’s that they’re uncomfortable with untraceable automation. They need to maintain human ownership of the assumptions, even if AI is doing the computational heavy lifting.
The organizations that succeed with autonomous AI in FP&A will be the ones that design for transparency from day one, not the ones that bolt on explainability after the fact.
The Autonomous FP&A Maturity Model
Andrew:
What’s the most realistic role for Autonomous AI in Financial Planning and Analysis in 2026?
Robyn:
The realistic path is graduated autonomy with clear human checkpoints.
I think of it as a ladder:
Level 1 – Assist: This is where most organizations are today. AI summarizes results, drafts variance narratives, answers ad-hoc questions about the data. Gartner’s finding that finance leaders expect GenAI’s most immediate impact to be explaining forecast and budget variances fits squarely here. It’s valuable, it’s low-risk, and it builds organizational comfort.
Level 2 – Execute bounded workflows: AI refreshes the weekly forecast automatically. It reconciles actuals to plan and flags exceptions. It runs standard scenario packs and produces management reporting with consistent commentary. Humans set the parameters; AI executes within those guardrails. (A minimal sketch of one such bounded step follows this ladder.)
Level 3 – Propose decisions with evidence: AI recommends actions (staffing adjustments, pricing changes, investment timing) with confidence scores and supporting data. But a human reviews, approves, and owns the decision. This is where adoption slows, because accountability is real.
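To make Level 2 concrete, here is a minimal sketch of one bounded step: reconcile actuals to plan and flag exceptions above a materiality threshold. The join keys, column names, and thresholds are illustrative assumptions, not a reference implementation.

```python
import pandas as pd

# Illustrative thresholds: flag variances over 5% AND $50k.
PCT_THRESHOLD = 0.05
ABS_THRESHOLD = 50_000

def flag_exceptions(actuals: pd.DataFrame, plan: pd.DataFrame) -> pd.DataFrame:
    """Join actuals to plan and flag material variances. The keys and
    column names ('cost_center', 'account', 'amount') are assumptions
    for this sketch."""
    merged = actuals.merge(
        plan, on=["cost_center", "account"], suffixes=("_actual", "_plan")
    )
    merged["variance"] = merged["amount_actual"] - merged["amount_plan"]
    merged["variance_pct"] = merged["variance"] / merged["amount_plan"].abs()
    merged["exception"] = (
        merged["variance"].abs() > ABS_THRESHOLD
    ) & (merged["variance_pct"].abs() > PCT_THRESHOLD)
    # The workflow only surfaces exceptions; humans decide what to do next.
    return merged[merged["exception"]]
```

The guardrail lives in the last line: the automated step narrows attention, but it never acts on the variance itself.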
Gartner predicts 40% of enterprise applications will embed task-specific AI agents by end of 2026. That’s significant. But I’d expect most finance organizations to be operating at Level 1 or 2, with selective experiments at Level 3 in lower-stakes domains.
The jump to full autonomy (AI making consequential decisions without human approval) is further out for finance than the hype suggests.
Governance, Risk, and Why AI Projects Fail
Andrew:
So, what’s the catch? Why isn’t every CFO rushing to Level 3?
Robyn:
Because autonomy without governance gets killed, and it should.
Reuters reported that Gartner expects over 40% of agentic AI projects to be canceled by 2027 due to high costs, unclear ROI, and overstated capabilities. That’s not pessimism; that’s pattern recognition. We’ve seen this cycle before with RPA, with predictive analytics, with blockchain in finance. The technology works in demos. It fails in production when organizations haven’t done the foundational work.
CFOs are right to be cautious. They’re looking for a clear business case with realistic value, not vendor slideware. They want to see governance and controls that can survive scrutiny, from internal audit, from regulators, from the board.
And regulators are paying attention. The UK’s FCA has already flagged new risks tied to speed and autonomy in financial services, emphasizing that accountability still rests with humans under existing rules. Even if you’re not a bank, CFOs hear the same message: autonomy increases the governance burden, it doesn’t eliminate it.
The winners in 2026 won’t be the organizations with the flashiest AI demos. They’ll be the ones who can prove their AI is behaving correctly, and explain exactly how it reached its conclusions.
A Practical Framework for Trusting Autonomous AI
Andrew:
If a CFO asked you, “How do I trust an Autonomous AI forecasting solution?” what would your response be?
Robyn:
I’d give them a framework that makes AI earn trust incrementally:
1. Data readiness gates (before the model runs), with a minimal check sketched in code below:
- Automated reconciliations between source systems and the planning model
- Completeness checks that flag missing or stale data
- Change visibility: if a hierarchy changed or an assumption was overridden, it’s logged
2. Forecast defensibility (a back-testing sketch follows below):
- Back-testing against baseline methods (does AI actually beat a simple trend?)
- Confidence intervals and error decomposition (where is the model uncertain?)
- Explicit, documented assumptions: not just outputs, but the reasoning chain
3. Agent guardrails (a guardrail wrapper is sketched below):
- Read-only mode first, then bounded execution based on approvals
- Workflow gates for any material changes
- Full audit trail of prompts, inputs, and outputs
- Kill switch and rollback capabilities
4. Operating model:
- Clear ownership: someone is accountable for AI outputs, not “the algorithm”
- Training and fluency building (Deloitte’s survey shows skills gaps are a real barrier)
- Regular review cycles where humans validate AI recommendations against judgment
If an organization does these four things, they can move up the autonomy ladder with confidence. If they skip them, they’re building on sand.
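To make the first pillar concrete, here is a minimal sketch of a data readiness gate that refuses to run the forecast when reconciliation, completeness, or freshness checks fail. The table shapes, column names (“amount”, “loaded_at”), tolerances, and logging approach are illustrative assumptions.

```python
import logging
from datetime import datetime, timedelta, timezone

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("readiness_gate")

RECON_TOLERANCE = 1.00             # illustrative $1 rounding tolerance
MAX_STALENESS = timedelta(days=2)  # illustrative freshness window

def readiness_gate(gl: pd.DataFrame, plan_model: pd.DataFrame) -> bool:
    """Return True only if the planning model ties to the GL, has no
    missing amounts, and was loaded recently. Assumes 'loaded_at' holds
    timezone-aware timestamps."""
    checks = []

    # 1. Reconciliation: GL total must tie to the planning model total.
    diff = abs(gl["amount"].sum() - plan_model["amount"].sum())
    checks.append(("gl_recon", diff <= RECON_TOLERANCE, f"diff={diff:.2f}"))

    # 2. Completeness: no missing amounts in the planning model.
    missing = int(plan_model["amount"].isna().sum())
    checks.append(("completeness", missing == 0, f"missing={missing}"))

    # 3. Freshness: the most recent load must be inside the window.
    age = datetime.now(timezone.utc) - plan_model["loaded_at"].max()
    checks.append(("freshness", age <= MAX_STALENESS, f"age={age}"))

    # Every check is logged, pass or fail, so the run is auditable.
    for name, passed, detail in checks:
        log.info("check=%s passed=%s %s", name, passed, detail)

    return all(passed for _, passed, _ in checks)
```

If the gate returns False, the forecast simply does not run. That is the “before the model runs” part.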
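On the second pillar, the back-testing question fits in a few lines: does the model actually beat a naive forecast on held-out periods? The sketch below uses MAPE and a last-value-carried-forward baseline; the revenue series and model outputs are made up for illustration.

```python
import numpy as np

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute percentage error; assumes actuals are non-zero."""
    return float(np.mean(np.abs((actual - forecast) / actual)))

def backtest(series: np.ndarray, model_forecasts: np.ndarray, holdout: int) -> dict:
    """Compare the model's forecasts for the last `holdout` periods
    against a naive baseline that repeats the prior period's actual."""
    actual = series[-holdout:]
    naive = series[-holdout - 1 : -1]  # prior actual carried forward one step
    return {
        "model_mape": mape(actual, model_forecasts),
        "naive_mape": mape(actual, naive),
        "model_beats_naive": mape(actual, model_forecasts) < mape(actual, naive),
    }

# Illustrative usage with made-up monthly revenue (in $k):
revenue = np.array([100, 104, 103, 110, 115, 112, 118, 121, 125, 124, 130, 133])
ai_forecast = np.array([126.0, 123.0, 131.0, 134.0])  # hypothetical model output
print(backtest(revenue, ai_forecast, holdout=4))
```

If “model_beats_naive” comes back False, that is not a tuning detail; it is a reason to keep the human forecast.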
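And on the third pillar, the guardrail pattern itself is easy to express in code: start read-only, audit-log every action, gate material changes behind a human approval, and keep a kill switch. The class below is a sketch of that pattern only; it is not any vendor’s API, and the materiality threshold and log format are assumptions.

```python
import json
from datetime import datetime, timezone

MATERIALITY = 100_000  # illustrative: changes above this need approval

class GuardedAgent:
    """Wraps agent actions with an append-only audit trail, approval
    gates for material changes, and a kill switch."""

    def __init__(self, audit_path: str = "audit_log.jsonl"):
        self.audit_path = audit_path
        self.read_only = True   # start in read-only mode
        self.killed = False

    def _audit(self, event: str, payload: dict) -> None:
        # Append-only record of everything the agent proposes and does.
        record = {"ts": datetime.now(timezone.utc).isoformat(),
                  "event": event, **payload}
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def kill(self) -> None:
        # Kill switch: blocks all further writes until a human resets it.
        self.killed = True
        self._audit("kill_switch", {})

    def propose_change(self, target: str, amount: float, approver=None) -> bool:
        """Apply a change only if the agent is live, writable, and, for
        material amounts, explicitly approved by a human callback."""
        self._audit("proposed", {"target": target, "amount": amount})
        if self.killed or self.read_only:
            self._audit("blocked", {"reason": "read_only_or_killed"})
            return False
        if abs(amount) > MATERIALITY:
            approved = bool(approver and approver(target, amount))
            self._audit("approval", {"target": target, "approved": approved})
            if not approved:
                return False
        self._audit("applied", {"target": target, "amount": amount})
        return True
```

Promotion from read-only to bounded execution is then an explicit, logged decision (flipping `read_only` under an approval), which mirrors the graduated autonomy described above.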
What Autonomous AI in FP&A Will Look Like in 2026
Andrew:
Last question: what’s your prediction for Autonomous AI in FP&A, specifically in 2026?
Robyn:
In 2026, the AI solutions that win won’t be the ones with the most agents or the flashiest demos. They’ll be the ones that operationalize trust:
- Strong data foundations so the forecast isn’t fiction
- Practical AI in FP&A, starting with variance explanations, reporting automation, and scenario generation
- Governance that scales with autonomy: audit trails, approval workflows, human accountability
- Proven ROI on specific use cases, not vague promises about transformation
Autonomous AI won’t replace CFO judgment in 2026. But it will increasingly compress cycle times, expand scenario coverage, and standardize decision support for organizations whose data can survive scrutiny and whose controls can survive autonomy.
And honestly? I’ve started to wonder if we’re even asking the right question. Everyone’s focused on whether AI can build a more accurate forecast. But in my experience, the CFOs I work with don’t actually need a more precise number; they need help thinking through what happens when assumptions change. That’s a different problem than “predict the outcome better.” It’s about scenarios, trade-offs, strategic alignment. I’m exploring that more in some upcoming writing, because I think it changes what “valuable AI in FP&A” means.
But that’s a bigger conversation. For now, the foundation is clear: get your data right, get your governance right, and let AI earn trust one use case at a time.
Editorial Perspective: Autonomous AI Is a Multiplier, Not a Replacement
If you read Robyn’s answers too quickly, you might walk away thinking this is another “AI is coming” article. It isn’t. It’s a warning label, and a roadmap.
The hype narrative says autonomous AI will build the plan faster than humans ever could. Robyn’s narrative is more useful: autonomy is only as strong as the data discipline and governance beneath it. That’s not a philosophical point. Gartner has long argued that poor data quality imposes enormous costs on organizations (often cited at $12.9M per year on average). In FP&A, the damage isn’t only financial; it’s decision-quality. Bad hierarchies, mismatched customer IDs, inconsistent revenue timing. Those aren’t “analytics issues”; they are forecast killers. Autonomous AI solutions don’t make those problems disappear. They scale them.
This is why CFO hesitation isn’t irrational conservatism; it’s professional responsibility. When an AI writes a variance explanation, the CFO can treat it like a draft. When an autonomous AI solution proposes headcount actions, pricing shifts, or a new capital plan, the CFO must answer a harder question: “Can I defend this decision, and explain exactly how it was produced?” Deloitte’s CFO Signals highlights that leaders see barriers in skills, fluency, and adoption risk: signals that even willing teams perceive autonomy as an operating-model shift, not a software upgrade.
In 2026, the most important finance capability may not be “who has the best model,” but “who can prove the model is behaving.” Autonomous AI is on a fast path into enterprise platforms; Gartner predicts a sharp jump in enterprise applications integrating task-specific AI agents by the end of 2026. That means CFOs won’t be able to ignore autonomy; they’ll need to shape it. And shaping it starts with insisting on the basics, as Robyn described: data lineage, reconciliation gates, back-testing, confidence reporting, explicit assumptions, approval workflows, and a real audit trail.
There’s also a quiet realism in the timing. A lot of AI initiatives will fail, not because autonomy is impossible, but because organizations will confuse demos with durable operating models. Reuters reported Gartner expects over 40% of agentic AI projects to be canceled by 2027 due to high costs, unclear value, and “agent washing.” Finance leaders should treat that as a competitive advantage opportunity: the winners will be the ones who implement Autonomous AI with measurable ROI and controllable risk.
So, the editorial view is simple: autonomous AI is not replacing the CFO’s judgment in 2026. It’s compressing the planning cycle, expanding scenario coverage, and standardizing decision support for organizations whose data can survive scrutiny and whose governance can survive autonomy. That is the point worth ending on: Autonomous AI isn’t a magic wand. It’s a multiplier. If the foundation is strong, it multiplies speed and insight. If the foundation is weak, it multiplies error, at machine scale.
Many of these ideas reflect broader shifts in how accounting systems themselves are being redesigned for autonomy.
This Editorial Opinion reflects the perspective of the Auciera team based on ongoing conversations with accounting professionals and regulators.
About the Participants
Andrew A. Ross, CPA, CMA
Andrew Ross is the CEO and co-founder of Auciera. He is a CPA with experience in accounting, technology, and advisory services, and focuses on how system design, data architecture, and automation can improve the delivery of professional accounting work.
Robyn Halbot, MBA, BSc, PMI-ACP
Robyn Halbot is Principal at AshPoint Solutions and has fifteen years of experience as an EPM Planning Architect, having delivered over forty planning and forecasting implementations across OneStream XF, Anaplan, Oracle EPBCS, and Prophix for organizations in banking, retail, manufacturing, insurance, and the public sector. She previously co-founded an ML-based forecasting startup that explored integrating macroeconomic indicators with financial data for scenario analysis. Robyn is currently building LLM applications for EPM workflows and focuses on the intersection of enterprise performance management, AI, and practical automation.
References and Research Sources
Deloitte. (2024). CFO Signals™ 1Q 2024: What North America’s top finance executives are thinking, and doing (with a focus on Generative AI in the finance organization and the enterprise).
Gartner. (n.d.). Data quality: Why it matters and how to achieve it. (Includes Gartner research citing an average annual cost of at least $12.9M from poor data quality.)
Gartner. (2024, June 27). Gartner survey shows 66% of finance leaders think generative AI will have most immediate impact on explaining forecast and budget variances.
Gartner. (2025, August 26). Gartner predicts 40% of enterprise apps will feature task-specific AI agents by 2026, up from less than 5% in 2025.
Gartner. (2025, November 18). Gartner survey shows finance AI adoption remains steady in 2025.
Reuters. (2025, June 25). Over 40% of agentic AI projects will be scrapped by 2027, Gartner says.
Reuters. (2025, December 17). Agentic AI race by British banks raises new risks for regulator.
Further Reading on Autonomous AI and AI-Native Accounting
For readers interested in exploring how these themes connect to modern accounting system design and AI-native financial workflows, the following resources provide additional context.

