This interview analysis is sponsored by AnswerRocket and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
Enterprise organizations face decision-making pressures that are strikingly consistent across industries. As product portfolios expand, markets fragment, and operations become more data-intensive, leaders report slower decision cycles, inconsistent execution, and missed opportunities — all despite heavy investment in analytics and AI. In many cases, the bottleneck is not access to insight, but the limits of human judgment as operational scope increases.
Research underscores the scale of the problem. IBM-linked industry analysis suggests that up to 90% of enterprise-generated data is unstructured and often never analyzed, constraining decision coverage across products and markets. IDC-associated estimates further indicate that 60-73% of enterprise data remains unused or “dark,” meaning most organizational data never informs analytics or strategy.
At the same time, adoption research from MIT’s Project NANDA shows a sharp gap between experimentation and impact. While more than 80% of organizations have piloted generative AI tools, only about 5% have deployed task-specific AI systems into production with measurable P&L results. The study finds that most initiatives stall not because of model performance or regulation, but because AI systems fail to integrate into workflows, retain context, or improve through use.
Governance gaps compound the issue. Gartner projects that by 2027, organizations will abandon roughly 40% of AI use cases due to fragmented or reactive governance rather than technical limitations.
To examine these challenges, Emerj’s ‘AI in Business’ podcast recently hosted a series of conversations with Jim Johnson of AnswerRocket, Michael Finley of AnswerRocket, and Vaithi Bharath of Bayer. Their discussions explore why traditional analytics models break down under complexity, how agentic systems must be engineered as enterprise software, and what it takes to deploy AI responsibly in regulated environments.
Drawing on these insights, this article examines how AI — when grounded in disciplined workflows, governance, and human-in-the-loop design — can compress decision cycles across complexity, with particular focus on:
- Scaling decision coverage across complexity: Using agentic AI systems to continuously monitor products, markets, and operations, surfacing decision-ready insights that human teams can validate and act on without being constrained by analyst bandwidth.
- Engineering AI agents with enterprise governance: Treating AI agents as software systems with defined objectives, scoped access, testing, and monitoring to ensure reliability, adaptability, and trust at scale.
- Accelerating decisions in regulated environments: Applying guided explainability to AI systems for streamlining review, validation, and documentation workflows while preserving auditability and human accountability.
Listen to the full episodes from the series below:
Episode 1: CPG Data Challenges to Business Value with Agentic AI – with Jim Johnson of AnswerRocket
Guest: Jim Johnson, President, AnswerRocket Consulting
Expertise: Enterprise AI Strategy; Agentic AI Deployment; AI Governance; Consulting Leadership in Pharma & Life Sciences, Retail, Transportation
Brief Recognition: Jim Johnson leads AnswerRocket’s consulting solutions business, overseeing the delivery of enterprise-grade, in-production AI and agentic systems that drive measurable business value. He brings more than 30 years of experience in AI, data, and digital transformation, having led large-scale consulting organizations for Fortune 10–1000 companies with P&L ownership of up to $125 million. A former Big Four partner, his work spans enterprise AI strategy and execution across regulated, data-intensive industries including pharmaceuticals, retail, and transportation.
Episode 2: Turning Consumer Goods Data into Real-Time Business Decisions – with Michael Finley of AnswerRocket
Guest: Michael Finley, Chief Technology Officer, AnswerRocket
Expertise: Enterprise GenAI Architecture, Agentic AI Systems, LLM Applications, Enterprise Software Engineering, AI Governance, SaaS Platform Development
Brief Recognition: Michael Finley is a technology entrepreneur with over 30 years of experience building enterprise software and AI platforms. He co-founded StellarIQ to help vertical SaaS companies become AI-powered market leaders, following nearly 12 years as Co-founder and CTO at AnswerRocket, where he led production-scale agentic AI systems for Fortune 500 organizations. His background includes senior CTO roles at NCR and Radiant Systems, multiple patents, and a career-long focus on engineering scalable, trustworthy AI for real enterprise environments.
Episode 3: Reducing R&D Cycle Time in Pharma Without Increasing Regulatory Risk – with Vaithi Bharath of Bayer
Guest: Vaithi Bharath, Associate Director of Data Science & AI Solutions, Bayer
Expertise: Regulated AI and Digital Transformation, Pharma R&D Technology Platforms, Clinical Trials Computing Environments, GxP/CSV Compliance, Cloud Architecture, Enterprise Integrations, Scientific Computing Systems
Brief Recognition: Vaithi Bharath is a technology leader with over 15 years of experience driving digital transformation in highly regulated environments, particularly pharmaceutical R&D and healthcare. At Bayer, he oversees validated scientific computing platforms for clinical trials, with responsibility for operational stability, compliance, and global transformation initiatives. His background includes enterprise integration and hybrid cloud leadership roles at IQVIA and Accenture, with a focus on building scalable, compliant systems that enable faster decision-making.
Scaling Decision Coverage Across Complexity
Across the podcast conversations with both AnswerRocket leaders, a shared diagnosis emerges: enterprise decision-making is constrained less by data availability than by human capacity to evaluate complexity at scale. In his episode, Jim Johnson, President at AnswerRocket Consulting, describes how expanding product portfolios, fragmented markets, and operational sprawl overwhelm analyst teams. Leaders are forced to focus on the most visible or urgent issues, leaving long-tail risks and opportunities unexamined.
Johnson argues that agentic AI systems help close the gap by scaling decision coverage across complexity, not by replacing human judgment. Rather than relying on periodic reporting, agents continuously monitor defined decision domains and surface decision-ready insights for human validation.
He tells the executive podcast audience that doing so shifts analytics from an episodic, request-driven function to an always-on capability embedded in day-to-day operations. From Johnson’s perspective, agentic systems expand decision coverage by:
- Continuously monitoring products, markets, and operational signals beyond human capacity
- Detecting anomalies and deviations across large portfolios and fragmented environments
- Escalating only insights that exceed defined thresholds or require expert judgment
- Allowing teams to focus on validation and action rather than detection
The outcome is not simply faster decisions, but more decisions examined consistently across the business.
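To make the escalation pattern concrete, here is a minimal Python sketch of threshold-based coverage in the spirit Johnson describes. The `Signal` structure, product identifiers, and 15% tolerance are illustrative assumptions, not details from the episode.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    entity: str       # e.g., a product SKU or market segment
    metric: str       # e.g., "weekly_demand"
    observed: float
    expected: float

def deviation(s: Signal) -> float:
    """Relative deviation of the observed value from the expected baseline."""
    return abs(s.observed - s.expected) / max(abs(s.expected), 1e-9)

def escalate(signals: list[Signal], tolerance: float = 0.15) -> list[Signal]:
    """Surface only signals whose deviation exceeds the approved tolerance,
    so reviewers see exceptions rather than raw telemetry."""
    return [s for s in signals if deviation(s) > tolerance]

# Hypothetical usage; a deployed agent would poll real data sources on a schedule.
signals = [
    Signal("SKU-1042", "weekly_demand", observed=980, expected=1000),  # within tolerance
    Signal("SKU-2198", "weekly_demand", observed=610, expected=1000),  # escalated
]
for s in escalate(signals):
    print(f"Escalate {s.entity}: {s.metric} deviates {deviation(s):.0%} from baseline")
```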
Where Johnson emphasizes coverage, Michael Finley, Chief Technology Officer at AnswerRocket, focuses on what makes that coverage sustainable. In his episode, Finley is explicit that agentic AI must be treated as enterprise software, not as loosely governed models or experimental automations. Without this discipline, trust breaks down as systems scale.
Finley highlights three engineering requirements for enterprise-grade agents: clear decision scoping, scoped access and controls, and continuous testing and monitoring. He articulates the following conditions to meet each requirement.
For clear decision scoping, Finley insists that:
- Agents are designed around bounded decisions, not abstract optimization goals
- Thresholds and escalation criteria are defined upfront
For scoped access and controls:
- Agents interact only with the data and systems required for their role
- Role-based permissions and approval gates are embedded into workflows
For continuous testing and monitoring:
- Performance and drift are monitored as conditions evolve
- Output sampling helps detect unintended behavior early
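These requirements can be read together as a contract. The Python sketch below shows one way such a contract might be expressed declaratively; the `AgentSpec` fields, data-source names, and threshold value are hypothetical illustrations of Finley’s three requirements, not AnswerRocket’s implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    """Declarative contract for an enterprise agent: the bounded decision it
    supports, the data it may touch, and when it must hand off to a human."""
    decision: str                    # the specific decision the agent supports
    data_sources: frozenset[str]     # only what the decision scope requires
    escalation_threshold: float      # beyond this, a human must review
    approvers: frozenset[str]        # roles allowed to act on escalations

def authorize_read(spec: AgentSpec, source: str) -> None:
    """Deny-by-default access check: anything outside the declared scope fails."""
    if source not in spec.data_sources:
        raise PermissionError(f"{source!r} is outside the agent's declared scope")

# Hypothetical spec for a demand-monitoring agent.
spec = AgentSpec(
    decision="flag demand deviations beyond approved tolerance",
    data_sources=frozenset({"sales_weekly", "forecast_baseline"}),
    escalation_threshold=0.15,
    approvers=frozenset({"demand_planner"}),
)

authorize_read(spec, "sales_weekly")   # allowed
# authorize_read(spec, "hr_records")   # would raise PermissionError
```

Because the spec is frozen and explicit, risk teams can review an agent’s scope without reverse-engineering its model behavior.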
As Finley further explains about this process during his ‘AI in Business’ podcast appearance:
“If you treat agents as if they’re just clever models, you’ll lose control quickly. They have to be engineered like software systems — with defined objectives, guardrails, testing, and monitoring — because that’s the only way they earn trust at enterprise scale. Governance isn’t what slows agents down; it’s what allows them to operate safely across the business.”
— Michael Finley, Chief Technology Officer at AnswerRocket
In his framing, governance is not an obstacle to scale — it is the prerequisite.
The need for disciplined design is even more acute in regulated industries, a point also emphasized by Vaithi Bharath, Associate Director of Data Science & AI Solutions at Bayer. In his episode, Bharath explains that AI adoption in pharmaceuticals is often slowed not by resistance to AI, but by the burden of validation, documentation, and regulatory review.
He describes agentic AI as a guided decision accelerator rather than an autonomous decision-maker. Instead of automating high-stakes decisions end-to-end, agents support the workflows surrounding those decisions while preserving human accountability in the following four key applications:
- Pre-screening data for quality and readiness before formal review
- Flagging inconsistencies or outliers requiring expert attention
- Generating draft rationales and validation documentation
- Automatically capturing decision lineage and approvals
The approach Bharath describes here shortens validation timelines and improves consistency without compromising auditability. AI systems structure and document information; humans retain final authority and responsibility.
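As one illustration of the first application, pre-screening data before formal review, the Python sketch below runs basic completeness and plausibility checks and emits a structured readiness report. The field names (`subject_id`, `dose_mg`) and rules are hypothetical stand-ins, not drawn from Bayer’s systems.

```python
import math

def prescreen(records: list[dict], required: tuple[str, ...]) -> dict:
    """Check records for completeness and basic plausibility before formal
    review, so human validators start from a structured readiness report."""
    issues = []
    for i, rec in enumerate(records):
        for field_name in required:
            if rec.get(field_name) in (None, ""):
                issues.append(f"record {i}: missing required field {field_name!r}")
        dose = rec.get("dose_mg")
        if isinstance(dose, (int, float)) and (dose < 0 or math.isnan(dose)):
            issues.append(f"record {i}: implausible dose_mg value {dose!r}")
    return {"records_screened": len(records),
            "issues": issues,
            "ready_for_review": not issues}

report = prescreen(
    [{"subject_id": "S-001", "dose_mg": 50.0},
     {"subject_id": "", "dose_mg": -5.0}],
    required=("subject_id", "dose_mg"),
)
print(report["ready_for_review"], *report["issues"], sep="\n")
```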
Engineering AI Agents with Enterprise Governance
Michael Finley warns that AI agents quickly become operational liabilities when they are not engineered with enterprise governance from the start. As agents move beyond pilots into production data and workflows, informal oversight breaks down, turning what looks like a scaling challenge into a trust problem. Leaders hesitate because they cannot clearly explain what an agent can do, what it can access, or how failures will be detected.
In turn, Jim Johnson reinforces that this breakdown often begins with poor definition. When agents are framed in capability-first terms — such as “optimization” or “reasoning” — their scope becomes unclear, making governance difficult. In Johnson’s view, effective oversight starts by defining the specific decisions an agent supports, not by expanding its technical sophistication.
He further argues that enterprise agents should instead be anchored to specific, bounded decision responsibilities, including:
- Flagging demand deviations that exceed approved tolerance ranges
- Identifying pricing anomalies that require human review
- Escalating operational risks based on predefined criteria
- Preparing structured summaries for executive or analyst decision-makers
By framing agents around decisions rather than intelligence, organizations gain predictability. Johnson underlines that leaders must understand what the agent is responsible for, operators must know what to expect, and risk teams must be able to evaluate impact without reverse-engineering behavior from model outputs:
“If you can’t clearly articulate what decision an agent is responsible for, you can’t govern it. Governance starts with saying, ‘This is the decision space, this is where the agent helps, and this is where humans stay in control.’ Everything else flows from that.”
— Jim Johnson, President at AnswerRocket Consulting
Building on this decision-centric framing, Michael Finley stresses that access control must be designed into agents from the start, not layered on after deployment. As agents become more capable, teams are often tempted to grant broad access for flexibility. In practice, this dramatically increases risk while delivering marginal additional value.
From Finley’s perspective, enterprise-grade agents should:
- Access only the data sources required for their defined decision scope
- Operate under role-based permissions aligned with existing enterprise controls
- Escalate actions beyond thresholds rather than executing autonomously
- Produce immutable logs of inputs, outputs, and access events
The approach the AnswerRocket CTO describes above limits ‘blast radius’ and simplifies auditability. Just as importantly, it reassures stakeholders that agents are operating within known, enforceable boundaries.
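The immutable logging Finley calls for can be sketched simply: an append-only record of inputs, outputs, and access events that is only ever read back, never edited. The event names and agent identifiers below are hypothetical, and a production system would write to write-once storage rather than an in-memory list.

```python
import json
import time

class AuditLog:
    """Append-only log of agent access events, escalations, and outputs."""
    def __init__(self) -> None:
        self._entries: list[str] = []

    def record(self, event: str, **details) -> None:
        entry = json.dumps({"ts": time.time(), "event": event, **details},
                           sort_keys=True)
        self._entries.append(entry)      # entries are never mutated or removed

    def entries(self) -> tuple[str, ...]:
        return tuple(self._entries)      # read-only view for auditors

log = AuditLog()
log.record("data_access", agent="demand_monitor", source="sales_weekly")
log.record("escalation", agent="demand_monitor", entity="SKU-2198",
           reason="deviation exceeds approved tolerance")
for e in log.entries():
    print(e)
```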
Finley is also explicit that governance does not end at deployment. Because agents rely on evolving data and models, static validation is insufficient. Trust must be maintained continuously through operational discipline. He highlights several practices that distinguish scalable agent deployments:
- Pre-deployment testing across expected and edge-case scenarios
- Regression testing to ensure updates do not introduce unintended behavior
- Output sampling to detect degradation or misalignment early
- Ongoing performance monitoring to surface drift and failure patterns
Finley also cautions that without continuous monitoring, organizations fall into a familiar pattern: early success, followed by unexplained anomalies and eventual disengagement as trust erodes. In his view, monitoring is what enables teams to intervene early and maintain confidence as agents scale.
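A minimal sketch of two of these practices, output sampling and drift monitoring, follows. It assumes agent outputs can be summarized as numeric scores; the baseline values and 10% shift threshold are illustrative, and a production deployment would use more rigorous statistical tests.

```python
import random
import statistics

def sample_outputs(outputs: list[float], k: int = 20, seed: int = 0) -> list[float]:
    """Randomly sample recent agent outputs for human spot review."""
    rng = random.Random(seed)
    return rng.sample(outputs, min(k, len(outputs)))

def drift_alert(baseline: list[float], recent: list[float],
                max_shift: float = 0.10) -> bool:
    """Flag drift when the recent mean moves more than max_shift (relative)
    away from the validated baseline mean."""
    b, r = statistics.mean(baseline), statistics.mean(recent)
    return abs(r - b) / max(abs(b), 1e-9) > max_shift

baseline = [0.92, 0.95, 0.91, 0.94]   # scores from pre-deployment testing
recent = [0.78, 0.81, 0.75, 0.80]     # hypothetical production scores
print("review sample:", sample_outputs(recent, k=2))
print("drift detected:", drift_alert(baseline, recent))
```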
Jim Johnson adds that governance is not just about reducing risk, but about enabling alignment across the enterprise. Clearly scoped, monitored, and auditable agents are more likely to be adopted across functions, while informal governance keeps usage fragmented.
Accelerating Decisions in Regulated Environments Without Sacrificing Accountability
In regulated industries, AI adoption is constrained less by model performance than by the processes surrounding decisions. Vaithi Bharath explains that in pharmaceuticals and related sectors, validation, documentation, and review requirements often extend timelines regardless of how accurate an AI system may be. These constraints exist to protect safety, compliance, and accountability — not to slow innovation.
Bharath argues that problems arise when AI systems ignore how regulated decisions are actually made. Black-box outputs that cannot be interrogated force teams into manual rework that increases scrutiny and delays adoption. AI accelerates decisions in regulated environments only when it supports review and validation workflows through guided explainability. That shift, from decision-maker to decision accelerator, enables faster, more confident human approval without lowering standards.
Drawing from his experience at Bayer, Bharath highlights four critical business areas where guided explainability delivers measurable gains:
- Pre-validation screening: AI systems assess data quality, completeness, and consistency before formal review, reducing the volume of issues discovered late in the process.
- Structured rationale generation: Instead of producing opaque scores or predictions, AI outputs are accompanied by clear indicators of contributing factors, assumptions, and constraints.
- Automated documentation support: Decision inputs, model versions, and intermediate outputs are captured automatically, minimizing manual documentation effort.
- Consistent application of checks: AI applies the same validation logic across cases, reducing variability that can trigger additional scrutiny from reviewers or regulators.
According to Bharath, these capabilities do not eliminate human review, but they compress the time between insight generation and confident approval.
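The structured-rationale idea lends itself to a small sketch: rather than returning an opaque score, each output carries its contributing factors, assumptions, and constraints as reviewable fields. The `ExplainedResult` type and its contents are hypothetical illustrations, not a Bayer artifact.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ExplainedResult:
    """An AI output packaged with the context a reviewer needs to evaluate it."""
    prediction: str
    confidence: float
    contributing_factors: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)

# Hypothetical output from a data-readiness check; under guided explainability,
# every recommendation ships with this structured rationale.
result = ExplainedResult(
    prediction="dataset ready for formal review",
    confidence=0.87,
    contributing_factors=["0 missing required fields", "all values within range"],
    assumptions=["reference ranges current as of last validation cycle"],
    constraints=["does not assess protocol-specific exclusion criteria"],
)
print(json.dumps(asdict(result), indent=2))
```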
A recurring concern in regulated environments is that increased automation can obscure accountability. Bharath is explicit that guided explainability works only when human responsibility remains clear and enforceable. AI systems may support analysis and documentation, but final decisions must always be attributable to accountable individuals.
He stresses that effective systems keep that accountability enforceable through the following design attributes (sketched in code after the list):
- Human approvals are required at defined decision points
- AI-generated recommendations are logged separately from human judgments
- Decision lineage captures who reviewed, approved, or rejected each outcome
- Audit trails are immutable and accessible for internal or external review
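A compact sketch of the last three attributes follows, assuming a hash-chained lineage record so that tampering is detectable and AI recommendations stay separate from human judgments. The actor identifiers and event details are hypothetical.

```python
import hashlib
import json
import time

def chain_entry(payload: dict, prev_hash: str) -> dict:
    """Link each lineage entry to the previous one; altering any earlier
    entry would break every hash that follows it."""
    body = json.dumps({**payload, "prev": prev_hash}, sort_keys=True)
    return {**payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

lineage: list[dict] = []
prev = "GENESIS"

# AI recommendations and human judgments are recorded as separate entries,
# so auditors can always distinguish what the system proposed from what
# the accountable reviewer decided.
for payload in (
    {"ts": time.time(), "actor": "agent:prescreen_v2",
     "type": "ai_recommendation", "detail": "dataset ready for review"},
    {"ts": time.time(), "actor": "human:reviewer_01",
     "type": "approval", "detail": "approved after manual spot check"},
):
    entry = chain_entry(payload, prev)
    lineage.append(entry)
    prev = entry["hash"]

for e in lineage:
    print(e["type"], "by", e["actor"], "->", e["hash"][:12])
```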
The clarity Bharath describes from the explainability process is what allows AI to move faster without increasing regulatory risk. When accountability is preserved, he notes, regulators and internal risk teams are far more willing to engage constructively with AI-supported processes:
“In regulated environments, speed doesn’t come from skipping steps — it comes from structuring them better. When AI helps you surface the right context, document decisions as they happen, and make reviews more consistent, you can move faster without losing control or accountability. The real acceleration happens when reviewers spend less time reconstructing decisions and more time evaluating them.”
— Vaithi Bharath, Associate Director of Data Science & AI Solutions at Bayer
Bharath challenges the assumption that regulation and speed are inherently at odds, noting that delays often stem from rework and inconsistency rather than the rules themselves. Guided explainability reduces friction by making decision context structured, traceable, and available from the outset.
Michael Finley’s insights reinforce Bharath’s view, emphasizing that predictable, reproducible AI behavior is what builds trust and accelerates approvals in regulated environments. For leaders in pharmaceuticals and similar sectors, Bharath’s message is clear: AI should be designed to fit regulated workflows, enabling faster decisions without sacrificing accountability.
Both Bharath and Finley insist that organizations that succeed with the approach they describe are able to:
- Shorten validation and review cycles
- Reduce manual documentation burden
- Improve consistency across decisions
- Preserve trust with regulators and internal stakeholders
In regulated environments, acceleration does not come from removing oversight. As Bharath makes clear from the experience he describes during his podcast appearance, it comes from embedding explainability and accountability into AI systems by design, allowing decisions to move faster precisely because they remain transparent and defensible.