This interview analysis is sponsored by Moody’s and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
High-stakes enterprise decisions are growing too complex for traditional automation to handle as fragmented data and regulatory pressures overwhelm human teams. Agentic AI, capable of orchestrating multi-step workflows across tools and data, is emerging as a solution that enhances productivity, maintains compliance, and supports human judgment.
Research compiled by the Financial Stability Board explains that the financial sector is increasingly adopting AI and GenAI tools for internal operations and regulatory compliance, and use cases are diversifying beyond basic tasks.
The paper attributes this adoption to advances in deep learning, big data, and computational power, which enable the handling of larger and more complex datasets.
A 2025 report from Moody’s highlights how AI adoption in risk and compliance has surged, with 53% of teams actively using or trialing AI to manage fraud detection, KYC, and other regulatory functions. Yet, it also underscores the need for oversight, auditability, and human governance when deploying these technologies at scale.
As articulated in a recent article in Nature, AI agents are increasingly used to handle complex, multi-step processes that traditional AI or humans alone struggle to manage. They can plan, execute, and adapt workflows autonomously, which is critical in high-stakes, data-intensive environments such as finance or research.
These developments reinforce the need for agentic AI in enterprises where decision complexity, regulatory requirements, and data fragmentation make conventional processes inefficient or risky.
In a recent episode of the ‘AI in Business’ podcast, Emerj Editorial Director Matthew DeMello was joined by Pavlé Sabic, Senior Director, Generative AI Solutions and Strategy at Moody’s, to discuss how agentic AI can augment complex enterprise workflows to ensure efficiency, compliance, and human oversight.
Their conversation highlights two critical insights for enterprise adoption of agentic AI:
- Accelerating decisions with human-in-the-loop agentic AI: Pairing humans with agentic AI to streamline high-risk workflows and keep accountability with experts.
- Driving compliance with proprietary AI data: Leveraging proprietary, client-specific data to power agentic AI and ensure reliable, auditable, and regulation-ready enterprise workflows.
Listen to the full episode:
Guest: Pavlé Sabic, Senior Director, Generative AI Solutions and Strategy, Moody’s
Expertise: Generative AI, Commercial Innovation, Agentic AI
Brief Recognition: At Moody’s, Pavlé bridges insight, innovation, and implementation, shaping how institutions apply AI for real-world impact. A seasoned product leader and business strategist, he previously drove double-digit growth for S&P Global’s $140 million professional-services segment and led the commercial strategy for its Insights division. He holds a Master’s degree in Finance and Investment from the University of Edinburgh Business School.
Accelerating Decisions With Human-in-the-Loop Agentic AI
Pavlé opens by highlighting that most agentic AI adoption today is concentrated in low-risk, customer-facing workflows such as handling queries or booking appointments. Higher-stakes use cases, like credit origination, remain human-led, with AI used to assist rather than replace judgment.
In large banks, agentic systems are consolidating industry data, market trends, and company news into structured workflows that mirror existing origination processes while analysts supervise the outputs and make final decisions.
He asserts that fully autonomous agents capable of independently producing credit memos, assessing risk, and recommending next steps are not yet in widespread use. Instead, he says, institutions are taking a cautious approach, using AI to assist rather than replace human judgment.
Pavlé tells the Emerj executive podcast audience that this human-in-the-loop approach has reduced time to production by around 60%, improving productivity by delivering information in a more concise, decision-ready format rather than automating risk assessment itself.
To explain agentic AI further, Sabic uses a credit memo as an example, noting that it has many components: a sector overview, financial decomposition, organizational strategy, implied probability of default, and so on.
Historically, he says, this process was considered fairly streamlined: teams would use different tools, bring information together, and rely on Microsoft Word and other administrative and efficiency tools to produce the report. Now, with agents, it truly is streamlined. As Pavlé describes the typical scenario, “You can tell them the 10 different sections required, in a particular format, with bar graphs, etc., and the agents can go off and do that.”
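To make that pattern concrete, below is a minimal sketch, in Python, of how a coordinator might fan the required memo sections out to section agents and assemble the result. The section list, the draft_section helper, and the stubbed call_model function are illustrative assumptions, not Moody’s actual implementation.

```python
# Minimal sketch of the credit-memo pattern described above: a coordinator
# fans the required sections out to section "agents" and assembles the result.
# All names here (SECTION_SPECS, draft_section, the stubbed call_model) are
# illustrative assumptions, not a vendor's actual implementation.
from dataclasses import dataclass


@dataclass
class SectionSpec:
    title: str
    instructions: str  # e.g. "summarize sector trends, include a bar chart"


SECTION_SPECS = [
    SectionSpec("Sector Overview", "Summarize industry data and market trends."),
    SectionSpec("Financial Decomposition", "Break down key financial ratios."),
    SectionSpec("Organizational Strategy", "Describe management strategy and outlook."),
    SectionSpec("Implied Probability of Default", "Report the model-implied PD with context."),
]


def call_model(prompt: str) -> str:
    """Stand-in for a call to whatever LLM backs the agent (hypothetical)."""
    return f"[model output for: {prompt[:60]}...]"


def draft_section(spec: SectionSpec, company: str) -> str:
    # Each section is a bounded task an agent can run independently.
    prompt = f"Write the '{spec.title}' section of a credit memo for {company}. {spec.instructions}"
    return f"## {spec.title}\n{call_model(prompt)}"


def assemble_credit_memo(company: str) -> str:
    # The coordinator runs each section agent, then concatenates the drafts
    # into one document that still lands on an analyst's desk for review.
    sections = [draft_section(spec, company) for spec in SECTION_SPECS]
    return f"# Credit Memo: {company}\n\n" + "\n\n".join(sections)


if __name__ == "__main__":
    print(assemble_credit_memo("Example Manufacturing Co."))
```

The division of labor is the point: each section is a discrete, inspectable task, while the assembled memo remains subject to the human review Pavlé describes.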
Driving Compliance with Proprietary AI Data
Pavlé also notes that proprietary data is the foundational context for an AI agent and is absolutely critical. He explains that the approach relies on utilizing a vast in-house data set, including credit ratings research, graphics, and other risk-focused content, which serves as the foundational knowledge base for agents to draw from and execute client-specific workflows.
He points to the launch of a research assistant in December 2023 as an example. That tool combines proprietary credit research and analytics with generative AI capabilities and functions as an LLM-based assistant for credit workflows. According to Sabic, it enables users to process 60% more data, reduce tasks by about 30%, and handle significantly larger content volumes.
When it comes to agentic solutions more broadly, he says, combining world-class data, the most advanced AI technologies, and decades of specialized industry expertise makes it possible to deliver multi-agent workflows that can withstand regulatory scrutiny. In his words, this is the closest the enterprise world has come to magic — but the real magic happens when AI can pass an audit test.
That, Sabic emphasizes, is where proprietary data comes in. Rather than pulling from disparate sources on the internet, the data is specific to the client, industry, and sector.
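As a rough illustration of what “specific to the client, industry, and sector” can mean in practice, the hedged sketch below grounds an agent’s answer in a filtered, in-house knowledge base and returns the document IDs it drew from. The knowledge base contents, the naive keyword scoring, and the answer format are assumptions for illustration; a production system would use proper retrieval, but the audit trail of sources is the idea being shown.

```python
# Hedged sketch of grounding an agent in proprietary, client-specific content
# rather than the open web. Every answer carries provenance (the documents it
# drew from), which is what supports auditability.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    client: str
    sector: str
    text: str


# Illustrative in-house knowledge base (assumed content).
KNOWLEDGE_BASE = [
    Document("KB-001", "client-a", "utilities", "Credit ratings research on regulated utilities and default trends."),
    Document("KB-002", "client-a", "utilities", "Sector risk commentary and historical default statistics."),
    Document("KB-003", "client-b", "retail", "Retail sector outlook and covenant analysis."),
]


def retrieve(query: str, client: str, sector: str, top_k: int = 2) -> list[Document]:
    # Restrict to the client's own sector-specific content, then rank by a
    # naive keyword-overlap score (a real system would use embeddings).
    candidates = [d for d in KNOWLEDGE_BASE if d.client == client and d.sector == sector]
    terms = set(query.lower().split())
    ranked = sorted(candidates, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:top_k]


def answer_with_provenance(query: str, client: str, sector: str) -> dict:
    sources = retrieve(query, client, sector)
    context = " ".join(d.text for d in sources)
    return {
        "answer": f"[model answer grounded in: {context[:60]}...]",  # stand-in for an LLM call
        "source_ids": [d.doc_id for d in sources],  # audit trail of what was used
    }


if __name__ == "__main__":
    print(answer_with_provenance("default risk outlook", "client-a", "utilities"))
```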
He also describes implementation and adoption as distinct stages, though the lines between them are blurred by the nature of the technology itself. With natural language prompting, tools that once required a coding background and deep technical expertise can now be used more broadly. As a result, workforce readiness is one of the biggest challenges.
Sabic notes that this is where training and change management must be prioritized. What is required is a human-in-the-loop approach: automation with oversight. Financial institutions, he cautions, should not treat AI agents as completely autonomous decision-makers, but rather as powerful assistants that require supervision, validation, and control. The goal is to enhance productivity and drive better outcomes, rather than to replace workers.
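Here is a minimal sketch of what “automation with oversight” can look like in code, assuming a simple draft-review-publish flow: the agent drafts, a named reviewer approves or rejects, and nothing is published without that approval. The Draft record, function names, and audit log are hypothetical.

```python
# Sketch of a human-in-the-loop gate: the agent can draft, but nothing is
# committed until a named reviewer signs off. Names and fields are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Draft:
    content: str
    approved: bool = False
    reviewer: str | None = None
    decided_at: datetime | None = None
    audit_log: list[str] = field(default_factory=list)


def agent_draft(task: str) -> Draft:
    draft = Draft(content=f"[agent draft for: {task}]")  # stand-in for agent output
    draft.audit_log.append("drafted by agent")
    return draft


def human_review(draft: Draft, reviewer: str, approve: bool) -> Draft:
    # The decision, the reviewer, and the timestamp are recorded for audit.
    draft.approved = approve
    draft.reviewer = reviewer
    draft.decided_at = datetime.now(timezone.utc)
    draft.audit_log.append(f"{'approved' if approve else 'rejected'} by {reviewer}")
    return draft


def publish(draft: Draft) -> None:
    # Hard stop: unapproved output cannot leave the workflow.
    if not draft.approved:
        raise PermissionError("Draft cannot be published without reviewer approval.")
    print("Publishing:", draft.content)


if __name__ == "__main__":
    d = agent_draft("credit memo for Example Manufacturing Co.")
    d = human_review(d, reviewer="analyst@bank.example", approve=True)
    publish(d)
```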
There is still an implementation phase that involves orchestration layers, technical complexity, and the broader tech stack, but Pavlé stresses that institutions also need a workforce that understands how to use these tools. He describes the industry as bifurcating, with regulated and non-regulated sectors using LLMs, generative AI, and agentic solutions differently.
Non-regulated industries, he says, can get away with using off-the-shelf LLMs. Regulated industries, by contrast, must be far more careful due to audit requirements, security concerns, and the switching costs associated with LLMs. Every time a new model comes out, organizations may need to update their stack and re-implement systems.
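One generic way to contain those switching costs, not specific to Moody’s approach, is to hide the model vendor behind a single interface so that adopting a new model means writing one new adapter rather than re-implementing the workflow. The class and method names in the sketch below are assumptions.

```python
# Generic adapter-pattern sketch: workflow code depends only on the ChatModel
# interface, so swapping vendors touches one adapter, not the whole stack.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real implementation would call vendor A's SDK here.
        return f"[vendor A completion for: {prompt[:40]}]"


class VendorBModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # Switching models only changes this adapter, not the workflow code.
        return f"[vendor B completion for: {prompt[:40]}]"


def run_workflow(model: ChatModel) -> str:
    # The workflow never sees which vendor is behind the interface.
    return model.complete("Summarize sector risk for the credit memo.")


if __name__ == "__main__":
    print(run_workflow(VendorAModel()))
    print(run_workflow(VendorBModel()))
```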
“Enterprise grade orchestration is absolutely key. Scaling effectively requires platforms that can manage and coordinate AI agents across systems, and this needs to be centralized to avoid silos and ensure consistency. Successful adoption hinges on understanding the actual workflow for this particular use case in a financial institution: origination, for example.”
– Pavlé Sabic, Senior Director, Generative AI Solutions and Strategy at Moody’s