As generative AI (GenAI) reshapes industries, family-owned community banks must balance the opportunity to improve customer workflows with the need to maintain ethical standards, protect data privacy, and ensure regulatory compliance. This tension is particularly pronounced in the financial services sector, where trust is crucial and regulatory scrutiny is intense.
According to a 2023 report by McKinsey, AI has the potential to deliver an additional $200B to $340B in value across the banking industry, equivalent to as much as 4.7% of the industry’s annual revenue. The stakes are particularly high for community banks, where personal trust is paramount. New technologies present a significant opportunity, but they also bring amplified risk.
Community banks serve local customer bases and have limited resources compared to global institutions, yet they play a vital role in their communities, as outlined in a recent report by the National Bureau of Economic Research. As a result, scaling AI responsibly is even more important for community banks to maintain customer trust and operational integrity.
Emerj Senior Editor Matthew DeMello sat down with Miranda Jones from Emprise Bank on the ‘AI in Business’ podcast to continue their conversation about how to scale responsible AI.
The following article will focus on three key takeaways from the conversation:
- Creating safe environments for AI experimentation: Providing employees with controlled spaces to explore AI tools, ensuring data privacy, and protecting proprietary data.
- Leveraging AI for unstructured data insights: Using GenAI to process unstructured data, enabling employees to improve clarity and efficiency in communication.
- Implementing domain-specific AI models: Prioritizing smaller, targeted AI models over broader foundational models to address the unique needs of community banking customers and ensure contextually relevant outcomes.
Guest: Miranda Jones, SVP, Data & AI Strategy Leader at Emprise Bank
Expertise: Strategic Leadership, AI, Machine Learning
Brief Recognition: Before her current role at Emprise Bank, Miranda was VP of Predictive Analytics at Emprise. Previously, she was an Analyst at Spirit AeroSystems, working in procurement cost support, pricing, and business analytics. She holds a Master of Science in Mathematics.
Creating Safe Environments for AI Experimentation
When asked why it is critical, from a data science perspective, for financial services organizations to create safe environments where employees can experiment with AI tools, Jones explains that companies should enable employees to use GenAI tools now rather than waiting until the tools are perfect, since that development takes time.
Jones explains that these spaces help employees build AI literacy by learning how to write effective prompts, interpret outputs, and avoid treating AI tools like search engines. These environments also help employees understand risks, recognizing issues like bias, hallucinations, or misinformation so they can critically evaluate AI-generated results. The outcome is that employees can safely integrate AI into workflows and use it to enhance customer service, document processing, or internal operations without violating privacy or compliance rules.
Jones further explains that GenAI models are designed to “write words that sound like humans, not discern facts.” As a result, critical evaluation of output is essential to prevent misinformation that would undermine customer trust.
Leveraging AI for Unstructured Data Insights
Jones goes on to explain how GenAI excels at processing unstructured data, such as text in emails, Word documents, or PDFs. She highlights how AI can help employees approach data structuring and communication:
“For example, it may be more easily understood if, instead of having ten pages of verbose text in a document, really what needs to be communicated is 10 bullet points.
So, by them using GenAI and trying to learn things from a document and iterate with prompts, they may ultimately learn really what I needed to communicate wasn’t 10 pages. It was these five ideas in a concise way.”
– Miranda Jones, SVP, Data & AI Strategy Leader at Emprise Bank
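As a loose illustration of the workflow Jones describes, the sketch below asks a general-purpose LLM to condense a verbose internal document into a handful of bullet points. The OpenAI client, model name, prompt wording, and file name are illustrative assumptions, not tools Jones mentions.

```python
# Minimal sketch: condensing an unstructured document into a few bullet points
# with a general-purpose LLM. The client, model, and prompt are illustrative
# assumptions, not details from the conversation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_to_bullets(document_text: str, max_bullets: int = 5) -> str:
    """Reduce verbose text to a short list of key ideas."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarize internal bank documents. Never include "
                    "customer names, account numbers, or other personal data."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Summarize the following document as at most "
                    f"{max_bullets} concise bullet points:\n\n{document_text}"
                ),
            },
        ],
        temperature=0.2,  # keep the summary conservative and repeatable
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("policy_draft.txt") as f:  # hypothetical local file
        print(summarize_to_bullets(f.read()))
```

In practice, the point of Jones’s example is iteration: an employee who prompts the model a few times on the same document often discovers that the real message is a handful of ideas rather than ten pages.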
Implementing Domain-Specific AI Models
Domain-specific models are essential for addressing unique customer needs. Jones points out that even small variations in language, such as the difference between British and American English or local slang, can affect how customers communicate.
Jones argues that overly generalized AI models can fail to capture the nuances of specific customer segments or local contexts. She also points out that, in some cases, words used in a financial services context mean something very different in another industry. She uses the analogy of Apple’s App Store to illustrate that specialized apps outperform apps designed to serve too many purposes.
When asked what advantages regulated industries gain from adopting AI at a deliberate pace, Jones offers specific advice for scaling responsibly: when rolling out agents or other AI applications, companies should always start with a human in the loop and then evaluate the process to determine whether the human can eventually be decoupled.
In parallel, they should consider whether the process should be designed differently to scale AI while still fully benefiting from the technology. Jones believes companies should approach the conversation by identifying the problem and what they are trying to accomplish, then determining whether AI is the right tool for the job, rather than forcing AI agents into workflows for their own sake.
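A rough sketch of that human-in-the-loop starting point is below; all of the names (draft_reply, human_review, and so on) are hypothetical placeholders rather than anything Emprise Bank has described. The only point it illustrates is that every AI-drafted output passes through a human decision before it reaches a customer, and rejected drafts are escalated rather than sent.

```python
# Minimal human-in-the-loop sketch: AI drafts, a person approves or rejects.
# All names here are hypothetical placeholders, not Emprise Bank's design.
from dataclasses import dataclass


@dataclass
class Draft:
    customer_message: str
    ai_reply: str
    approved: bool = False


def draft_reply(customer_message: str) -> str:
    """Stand-in for whatever GenAI service produces a first draft."""
    return f"[AI draft responding to: {customer_message!r}]"


def human_review(draft: Draft) -> Draft:
    """A banker approves, edits, or rejects the draft before anything is sent."""
    print("AI draft:\n", draft.ai_reply)
    decision = input("Approve, edit, or reject? [a/e/r] ").strip().lower()
    if decision == "a":
        draft.approved = True
    elif decision == "e":
        draft.ai_reply = input("Revised reply: ")
        draft.approved = True
    return draft  # rejected drafts stay unapproved and are never sent


def handle(customer_message: str) -> None:
    draft = Draft(customer_message, draft_reply(customer_message))
    draft = human_review(draft)
    if draft.approved:
        print("Sending:", draft.ai_reply)
    else:
        print("Escalating to a human-only workflow.")
```

Once the review step consistently approves drafts without edits for a given task, that evidence can inform the decision Jones describes about whether the human can be decoupled from that step.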