
Transforming Compliance and Data into Competitive Advantage – with Leaders from OneTrust and RR Donnelley


This interview analysis is sponsored by OneTrust and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

Across industries, the same challenge continues to impede enterprise AI progress: data and governance systems built for a pre-AI world. Many organizations have accumulated decades of legacy infrastructure and manual oversight, each designed for predictable processes and static data models. Yet the modern AI enterprise operates on fluid, unstructured data — content that moves across teams, platforms, and jurisdictions faster than conventional governance can track.

The result is a widening execution gap. On one side are business leaders eager to deploy AI at scale; on the other are compliance, IT, and operations teams constrained by fragmented data, inconsistent KPIs, and outdated risk controls. Whether in financial services, manufacturing, or healthcare, this disconnect can stall innovation, inflate costs, and erode trust.

In fact, despite widespread deployment, research by Boston Consulting Group found that of 1,250 firms studied across sectors, 60% had yet to gain any material value from their AI projects, reporting minimal revenue and cost gains.

Findings from the OECD’s Adoption of Artificial Intelligence in Firms (2025) report paint a similar picture: fewer than 20% of companies have the data governance and infrastructure needed to scale AI effectively. This mirrors broader concerns that data maturity remains a critical bottleneck for enterprise AI adoption.

Recently on the ‘AI in Business’ podcast, Emerj Editorial Director Matthew DeMello hosted a special OneTrust-sponsored series to explore this challenge from two differing yet complementary perspectives: Shane Wiggins, Director of Product at OneTrust, discussed how enterprises can embed governance directly into AI development workflows, while Andrew Deutsch, CEO of the Fangled Group and Director of Operations at RR Donnelley, offered the enterprise vantage point: how to transform operational data and metrics into AI-ready assets that accurately reflect business reality.

These conversations highlight four key steps leaders can take to align innovation with oversight and turn operational data into actionable insights:

  • Building self-service governance that accelerates innovation: Operationalizing compliance through automation enables teams to deploy AI quickly and securely without bypassing oversight.
  • Measuring governance impact through trust and transparency: Establishing clear metrics for governance — spanning cost, speed, and customer confidence — turns compliance from a cost center into a performance driver.
  • Converting fragmented operations into context-aware data systems: Creating contextual intelligence in data pipelines allows leaders to see divisions as they truly operate, reducing distortion from oversimplified KPIs.
  • Driving enterprise alignment and continuous adaptation with AI: Embedding adaptive forecasting and real-time feedback loops enables organizations to evolve beyond static planning and link AI directly to measurable business performance.

Building Self-Service Governance That Accelerates Innovation

Episode: Aligning Innovation with Oversight in AI Governance for Data Teams – with Shane Wiggins of OneTrust

Guest: Shane Wiggins, Director of Product, OneTrust

Expertise: Data Governance, AI Risk Management, Regulatory Compliance

Brief Recognition: Wiggins leads product strategy for AI governance at OneTrust, helping enterprises embed compliance and data protection frameworks directly into AI development. He has previously launched global IoT initiatives, expanded company portfolios, and held senior product and analytics roles at Accenture and UF Health. Shane holds a Master of Engineering in Computational Analytics from the Georgia Institute of Technology.

Wiggins begins by identifying a structural weakness common to most enterprises: attempts to retrofit static data governance frameworks onto dynamic AI workflows. Traditional systems were designed for structured databases and periodic audits, not for constantly evolving machine learning models dependent on unstructured data such as text, images, or sensor feeds.

This mismatch leads to limited visibility. “Without understanding what sensitive data you have and how it flows into models,” Wiggins explains, “you can’t really provide meaningful oversight.” He recalls a case where a large insurer discovered its claims model was ingesting unredacted medical documents from multiple regions — an immediate compliance exposure that existing controls failed to detect.

“It’s critical to ensure that you have the combination of both the ability to understand your data but also be able to enforce policies on top of that. Many organizations are still trying to retrofit data governance models onto AI workflows, which are way more dynamic and dependent on data… Without that visibility into what sensitive data you have and how it flows into models, you can’t really provide meaningful oversight.”

– Shane Wiggins, Director of Product at OneTrust

For Wiggins, solving these issues requires rethinking governance as a living process embedded in development. Instead of routing every model through slow manual reviews, enterprises can implement self-service governance portals that guide product teams through policy-based approvals. Low-risk AI use cases — such as deploying a pre-vetted model in a compliant environment — can be auto-approved, while higher-risk cases are escalated for review.
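The routing logic Wiggins describes can be sketched in a few lines. This is a hypothetical illustration, not OneTrust's implementation: the model allowlist, environment names, and risk criteria are all invented for the example.

```python
# Hypothetical sketch of a self-service governance gate: low-risk AI use
# cases are auto-approved, higher-risk ones are escalated for human review.
# Model names, environments, and criteria are illustrative only.

APPROVED_MODELS = {"gpt-4o", "vertex-gemini"}      # pre-vetted models
COMPLIANT_ENVIRONMENTS = {"prod-eu", "prod-us"}    # vetted deployment targets

def review_route(use_case: dict) -> str:
    """Return 'auto-approved' or 'escalate' for an AI use-case request."""
    low_risk = (
        use_case["model"] in APPROVED_MODELS
        and use_case["environment"] in COMPLIANT_ENVIRONMENTS
        and not use_case.get("uses_sensitive_data", False)
    )
    return "auto-approved" if low_risk else "escalate"

# A pre-vetted model in a compliant environment passes through automatically;
# anything touching sensitive data is routed to a human reviewer.
print(review_route({"model": "gpt-4o", "environment": "prod-eu",
                    "uses_sensitive_data": False}))  # auto-approved
print(review_route({"model": "gpt-4o", "environment": "prod-eu",
                    "uses_sensitive_data": True}))   # escalate
```

In practice, the "auto-approved" branch would also trigger the infrastructure and access-control provisioning Wiggins mentions; the sketch shows only the policy decision itself.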

The result, says Wiggins, is that review cycles shrink from three weeks to hours; governance becomes an accelerator rather than an obstacle, with built-in automation provisioning the necessary infrastructure and access controls. “You want your policies and runtime controls to travel with the AI workloads,” he adds, “whether they’re running on OpenAI, Google Vertex, or within Salesforce.”

This model not only scales oversight but ensures consistency across business units. Compliance is no longer a checkpoint at the end of deployment; it’s an integrated part of the engineering workflow — visible to auditors, invisible to developers.

Measuring Governance Impact Through Trust and Transparency

Embedding governance into workflows is only half the equation. To gain executive support, governance teams must prove their impact in quantifiable business terms. Wiggins structures this around four dimensions: cost, speed, trust, and adoption:

  • Cost Avoidance: Every dollar invested in proactive AI risk management prevents multiple dollars in rework, fines, or crisis response later on.
  • Speed: Tracking time-to-approval — the interval between project proposal and production — provides a direct indicator of operational efficiency.
  • Trust: Transparent AI practices improve customer confidence and can even accelerate deal cycles as clients increasingly require visibility into how AI systems manage risk before transacting.
  • Adoption: Measuring how quickly customers embrace new AI-driven features after launch reflects the credibility of governance itself.

At the center of this transparency, Wiggins highlights the use of AI system cards — structured documentation detailing a model’s purpose, data sources, limitations, and governance context. These extend the idea of Google’s model cards beyond technical parameters to include policy alignment and business intent.
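A system card of the kind Wiggins describes can be as simple as a structured document kept alongside the model. The field names below are an assumption for illustration, not a published OneTrust or Google schema:

```python
# A minimal sketch of an AI system card: structured documentation covering
# purpose, data sources, limitations, and governance context. Field names
# are illustrative, not a standard schema.
import json

system_card = {
    "name": "claims-triage-v2",
    "purpose": "Prioritize incoming insurance claims for adjuster review",
    "data_sources": ["claims database (redacted)", "policy metadata"],
    "limitations": [
        "English-language claims only",
        "not used for coverage-denial decisions",
    ],
    "governance": {
        "policy_alignment": ["GDPR", "internal data-handling policy"],
        "risk_tier": "medium",
        "owner": "claims-platform-team",
    },
}

# Serializing the card makes it auditable and diffable alongside the model.
print(json.dumps(system_card, indent=2))
```

Because the card captures policy alignment and business intent, not just technical parameters, the same artifact serves engineers, auditors, and customers asking how the system manages risk.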

When governance is designed this way, it becomes measurable, repeatable, and communicable. Internally, it unites legal, risk, and engineering teams under shared objectives. Externally, it reassures clients and regulators that AI systems are both innovative and accountable.

From Wiggins’s perspective, the future of enterprise AI depends on balancing flexibility with control. Automation and compliance must move in tandem. “Governance should become invisible to developers but visible to auditors,” he says. Done correctly, the same mechanisms that enforce rules can also accelerate value creation, transforming governance from a barrier into a business enabler.

Converting Fragmented Operations into Context-Aware Data Systems

Episode: Turning Operational Data into AI-Ready Assets – with Andrew Deutsch of RR Donnelley and the Fangled Group

Guest: Andrew Deutsch, CEO, Fangled Group; Director of Operations, RR Donnelley

Expertise: Enterprise Operations, Data Strategy, Change Management

Brief Recognition: Deutsch oversees operational modernization and data strategy at RR Donnelley while advising global enterprises through the Fangled Group. He previously founded Fangled International, helping manufacturers enter global markets and generate sales, and has led executive growth initiatives at EcoTek Soft Wash, Smart Soda Holdings, and ATC Trailers.

Legacy observability tools and one-size-fits-all metrics obscure real performance, leaving many organizations unable to see operational outcomes clearly. Deutsch’s diagnosis is that this trap can misclassify top performers as laggards.

He warns companies against measuring everything by the same report card: in manufacturing, for example, a firm might evaluate all plants by output volume even though one may specialize in short-run, high-margin products requiring extra compliance checks.

Without context, data paints the wrong picture: “Dealing with multiple different outputs within the organization but expecting them all to perform to the same KPIs…is crazy.”

Deutsch recalls a packaging company that compared its pharmaceutical division to its standard production lines using identical KPIs. The pharma unit appeared inefficient until data systems captured FDA compliance requirements and specialized staffing levels. Once contextualized, performance metrics aligned with business value rather than raw output.

The key is contextual intelligence — configuring data pipelines and AI systems to recognize operational nuances across divisions, products, or regions. Differentiating data streams first allows enterprises to recombine insights coherently, avoiding false equivalencies that distort decision-making.

Modern AI platforms, Deutsch continues, can now automate this process, segmenting data and adjusting weightings dynamically. The outcome is something of a holy grail: a unified yet nuanced enterprise view where every metric reflects its operational reality — a necessary step before scaling AI responsibly.
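The weighting idea can be sketched simply. Assuming invented divisions and per-unit effort weights (these numbers are illustrative, not from the packaging-company example), a context-adjusted KPI might look like:

```python
# Hedged sketch: normalizing a raw output KPI by each division's operating
# context so specialized units aren't misread as laggards. Divisions and
# weights are invented for illustration.

CONTEXT_WEIGHTS = {
    # higher weight = more effort per unit (compliance checks, short runs)
    "standard": 1.0,
    "pharma": 2.5,   # e.g. FDA compliance steps and specialized staffing
}

def context_adjusted_output(units: float, division: str) -> float:
    """Scale raw unit output by the division's per-unit effort weight."""
    return units * CONTEXT_WEIGHTS[division]

raw_output = {"standard": 10_000, "pharma": 4_500}
adjusted = {d: context_adjusted_output(u, d) for d, u in raw_output.items()}
# On raw volume the pharma line looks inefficient; once context is applied
# (4,500 * 2.5 = 11,250), it outperforms the standard line (10,000).
```

A production system would learn or calibrate these weights from operational data rather than hard-coding them; the point is that comparison happens only after each stream is put on a common, context-aware scale.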

Driving Enterprise Alignment and Continuous Adaptation with AI

Beyond restructuring data, Deutsch emphasizes that most organizations still operate on outdated planning cycles. Forecasts often rely on last year’s assumptions, locking teams into static budgets and rigid quarterly targets. By the time new conditions emerge, plans are often already obsolete due to:

  • Competitor shifts
  • Supplier issues
  • Market volatility

AI can dismantle this rigidity. By integrating real-time sales, production, and market data, enterprises can update forecasts continuously, adapting headcount, resource allocation, and inventory decisions on the fly. “Forecasts are based on information that’s already old,” Deutsch says. “What if we could change that in real time?”

He envisions a shift from fixed monthly or quarterly reporting to floating performance windows — rolling four-week cycles where teams see current progress daily. This reduces the end-of-month ‘crunch’ that drives inefficiency, such as production surges or delayed orders.
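A floating window of this kind is mechanically simple. The sketch below assumes a trailing 28-day total as the "four-week span"; the class and figures are illustrative, not Deutsch's system:

```python
# Sketch of a floating four-week performance window: each day, progress is
# measured over the trailing 28 days rather than a fixed calendar month.
from collections import deque

class RollingWindow:
    def __init__(self, days: int = 28):
        # deque with maxlen drops the oldest day automatically
        self.values = deque(maxlen=days)

    def record(self, daily_output: float) -> float:
        """Add today's figure and return the current four-week total."""
        self.values.append(daily_output)
        return sum(self.values)

window = RollingWindow(days=28)
for day in range(30):
    total = window.record(100.0)
# After day 28 the total plateaus at 2,800: old days roll off as new ones
# arrive, so there is no calendar boundary to "crunch" against.
```

Because the window advances daily, teams always see the same 28-day horizon, which removes the incentive to surge production or delay orders around a month-end cutoff.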

For Deutsch, this is where AI’s opportunity shows: not just in prediction, but in continuous course correction. “If done properly,” he explains, “every morning we can see where we are within our four-week span and plan and work properly through it.”

At the organizational level, this adaptability also requires clarity of purpose. Enterprises must know their current state, define the future state they want, and ensure every AI investment serves that trajectory. “You have to know what you want it to do for you,” Deutsch stresses. “Otherwise, you end up getting help for the very thing you don’t need help for.”

“So much of the media talk of AI is about replacing humans with robots that are going to answer questions or automate things. My belief is that we should be asking how we make our people better at what they do. I have these conversations all the time and the bigger question is really defining what the business wants.” 

– Andrew Deutsch, CEO of the Fangled Group; Director of Operations at RR Donnelley

Finally, he warns against over-delegating transformation to IT alone. Governance and design should start with business decision-makers who understand the objectives. Once the model is clear, technical teams can operationalize it efficiently. “I’ve never had anything better done by IT guys than when we came to them with a true model of what we wanted,” he says.

Deutsch’s perspective reframes AI as an enabler of business truth rather than a technical add-on. Data modernization is not just about speed or scale — it’s about relevance.
