This interview analysis is sponsored by Airia and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
Organizations are increasingly vulnerable to unintentional data leaks when sensitive information is shared with AI systems.
In 2023, the Economist Korea reported three separate instances where Samsung employees leaked confidential data to ChatGPT. In one case, an engineer pasted proprietary source code into the chatbot to check for errors. In another instance, an employee requested code optimization for sensitive code, and in a third, a worker uploaded a recording of an internal meeting to generate meeting notes.
All three incidents occurred within a short timeframe after ChatGPT was approved for use in the workplace. The incidents highlight how easily confidential information can be exposed when employees use public AI platforms for work-related tasks.
Additionally, many enterprises struggle to enforce proper access permissions on AI platforms, unintentionally granting excessive privileges. A recent “zero-click” vulnerability, named EchoLeak, in Microsoft 365 Copilot allowed attackers to extract sensitive data without requiring any user action. Such flaws demonstrate the critical need for strict, deliberate access management when deploying AI systems.
Emerj Editorial Director Matthew DeMello recently sat down with Kevin Kiley, President of Airia, to discuss how to safely implement and scale agentic AI in sensitive, high-stakes environments, such as finance and legal.
Their conversation highlights the importance of robust governance, effective risk management, and phased adoption to fully leverage the value of AI without compromising compliance or data security.
This article examines two key insights from their conversation for leaders across industries:
- Building confidence in agentic AI: Starting with small, low-risk use cases and keeping a human in the loop helps organizations build trust, meet regulatory obligations, and gradually scale agentic AI adoption.
- Operationalizing AI governance before expanding use cases: Embedding legal checks, access limits, and real-time safeguards early to enable safe AI scaling without risking control.
Listen to the full episode below:
Guest: Kevin Kiley, President, Airia
Expertise: Leadership, Sales, International Expansion
Brief Recognition: With over 20 years of experience, Kevin has led two organizations from start-up to achieving $100 million in revenue. In his last role as Chief Revenue Officer for OneTrust, Kevin had global responsibility for more than 14,000 customers of the company’s customers and $400 million in annual recurring revenue. He had previously been part of the early GTM leadership team at AirWatch (acquired by VMware for $1.54 billion in 2014), where he led the North American EUC Enterprise unit to over $350 million in revenue.
Building Confidence in Agentic AI
Kevin opens the podcast by explaining that agentic AI stands apart from traditional AI by its ability to operate with autonomy. Unlike earlier systems that follow strict, predetermined rules, agentic AI can make decisions and take action independently.
He acknowledges that while agentic AI offers tremendous potential and efficiency, it also introduces significant risk. Empowering AI agents means giving them access to sensitive data and credentials. Because of this, he says many organizations will adopt agentic AI gradually, keeping a human in the loop to build confidence in specific capabilities before moving further.
He also notes that in finance, regulatory requirements such as banking and privacy regulations may limit the amount of autonomy that can be granted. Referring to the GDPR in Europe, he cites Article 22, which prohibits fully automated decision-making in areas such as loan applications or credit assessments, requiring human involvement.
He emphasizes that organizations need to be aware of such obligations and carefully consider what they’re comfortable deploying, likely progressing into agentic AI cautiously and incrementally.
“My advice would be, again, start small, get those quick wins, find those internal use cases that we can start with, have the success of those projects lead into doing things that are more ambitious, getting a little bit bigger, broader audience, broader footprints, more systems connectivity.There is a framework for approaching that that we work with our clients on to help them get there. And then it’s also part of the ideation of looking at how there are so many things that we know we could do. Which ones do we start with? Which ones do we prioritize? Again, take that risk-reward matrix, and build from there into what you’re comfortable with, and then, and then, hopefully from there, keep progressing up.”
– Kevin Kiley, President of Airia
Operationalizing AI Governance Before Expanding Use Cases
Kevin explains that as organizations move into more sensitive use cases, it’s crucial to demonstrate that the proper steps have been taken to minimize risk. Here is a breakdown of the six steps he mentions:
- Start with Legal and Compliance Reviews:
Why
: Sensitive AI use cases often involve regulatory or legal constraints.What to do: Involve legal teams early to map out your obligations (e.g., GDPR, banking regulations).
Outcome: Ensure you can both explain and prove that proper safeguards were followed.
- Define Guardrails Early:
Why:
Many organizations struggle with access and permissions.What to do: Limit data and system access to only what’s necessary and avoid giving AI tools excessive authority (e.g., full access to HR systems).
Example: He refers to real cases where Microsoft Copilot revealed sensitive salary data due to improper access controls.
- Go Beyond Passive Monitoring:
Why:
Audit logs alone are not enough once something goes wrong.What to do: Build active countermeasures to detect and intercept risky actions before they happen.
Example: If someone uploads a financial spreadsheet to ChatGPT, your system should be able to block, strip, or mask sensitive data in real-time.
- Use Data Masking and Tokenization:
Why:
You still want to allow safe queries without exposing sensitive information.What to do: Mask or tokenize sensitive data before sending it to AI systems and rehydrate the data after the safe portion of the query is complete.
Benefit: Enables safe use of AI while reducing the risk of data breaches.
- Prepare for Security Threats:
Why:
Agentic AI systems are robust and, therefore, attractive to attackers.What to expect: More prompt injection attacks and model jailbreak attempts to extract training data or system access.
What to do: Implement defensive technologies that assume these threats will occur.
- Secure Agentic Frameworks from the Start:
Why
: New protocols, such as Anthropic MCP, enable system integration but often lack built-in security.What to do: Work with vendors or partners to embed strong authentication and access controls and treat agent communication channels as potential attack surfaces.
Kevin shares that, in the long term, financial services will experience significant innovation across departments, fundamentally changing how work gets done. He believes AI will allow teams to shift their focus from repetitive, routine tasks to more impactful, high-value problems.
He illustrates this with an example from a large financial institution where thousands of employees work in compliance, spending their days reviewing lengthy transaction portfolios—sometimes 500 to 700 pages long.
In some cases, as Kevin explains, an analyst might reach page 458 only to discover a critical issue that renders the entire transaction unviable, wasting days of effort. With AI, these document packs can be processed quickly, with significant problems flagged within minutes.
Such instantaneous document processing enables analysts to immediately determine whether to proceed, propose changes, or discard the transaction altogether:
“Similar work with a lot of other legal teams doing things like contract review, where we can learn a bank’s legal sort of playbook of what positions they’re willing to take and what their tolerances are. We’ll run through, you know, 1000s of agreements, and within seconds, be able to tell them which of these agreements are outside of those tolerances and even propose, again, compromised positions that maybe would bring them into compliance with what we would insist on.”
– Kevin Kiley, President of Airia
Source link
#Unlocking #Enterprise #Efficiency #Orchestration #Kevin #Kiley #Airia