This interview analysis is sponsored by Filevine and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
In-house legal teams face a mounting challenge: the scale of digital information has outpaced their ability to manage it. Email, Slack, Teams, Zoom transcriptions, CLM systems, and AI note-takers have made legal work richer in data but poorer in focus.
Today, attorneys spend an increasing share of their time finding information rather than applying judgment. Recent research from EY Law and Harvard Law School Center on the Legal Profession finds that in-house counsel now spend at least a fifth of their working hours on repetitive administrative tasks rather than strategic advisory work, leaving less capacity for high-value analysis.
Yet hiring, as reported by Thomson Reuters’ 2025 Legal Department Operations Index, is not keeping pace: 79% of surveyed corporate law departments reported increased matter volumes while headcount remained flat or decreased.
Extrapolated across the enterprise, these challenges carry considerable business cost. Enterprise legal departments sit at the intersection of compliance, risk, and innovation — functions that depend on timely, confident decisions. When attorneys are forced into manual data wrangling or repetitive review cycles, delays cascade across product launches, contract approvals, and regulatory responses.
The result is not just slower legal work but slower business execution. As generative and analytical AI tools mature, legal leaders must determine how technology can accelerate decisions without undermining accuracy or accountability.
Recently on the ‘AI in Business’ podcast, Emerj Editorial Director Matthew DeMello spoke with two leaders addressing that imbalance from different angles: Ryan Anderson, CEO and Founder of Filevine, and Kevin Ahlstrom, Associate General Counsel for Patents at Meta. Both see responsible AI as a structural solution to the same problem — how to remove friction so human expertise can focus where it matters most.
Their conversations highlight two critical approaches for enterprise legal leaders:
- Unify fragmented legal data to elevate human judgment: Using AI to consolidate data across legal, finance, and product systems, giving decision-makers complete, context-rich insight without slowing operations.
- Automate repetitive legal tasks to unlock strategic capacity: Deploying responsible automation to accelerate patent review, portfolio alignment, and advisory work, freeing attorneys to focus on strategic outcomes.
Unify Fragmented Legal Data to Elevate Human Judgment
Episode: Overcoming Compliance Challenges in Legal AI Adoption – with Ryan Anderson at Filevine
Guest: Ryan Anderson, CEO and Founder of Filevine
Expertise: Legal Technology Innovation, Workflow Automation Strategy, SaaS Leadership
Brief Recognition: Ryan Anderson is the CEO and Co-founder of Filevine, a legal work platform helping law firms and in-house teams manage operations. A former practicing attorney, Anderson holds a Juris Doctor from the University of Utah’s S.J. Quinney College of Law.
Anderson begins by outlining the scale of the problem facing modern legal teams, explaining that the flood of data from messaging platforms, emails, Zoom recordings, AI note-taking documents, and other sources can overwhelm them.
The core challenge, he continues, is not just access but structure. Fragmented systems scatter information across departments, leaving attorneys to manually reconcile multiple sources. Anderson argues that AI can become the organizing intelligence for this complexity, structuring and labeling data so decision-makers can focus on judgment instead of retrieval.
Anderson envisions a future where legal teams work from a unified data environment — what he calls a “single pane of glass.” In other words, rather than toggling among ten or fifteen disjointed tools, attorneys would see a consolidated workspace where contracts, communications, and financial data converge. That coherence allows AI to normalize artifacts, link related threads, and flag anomalies for review. When attorneys open a matter file, they do not see a haystack of documents, but the few items that actually require their attention.
Anderson notes that implementing this shift requires deliberate design. To achieve consistency and transparency, he recommends:
- Centralize context by consolidating data streams from legal, finance, and product systems into one accessible interface.
- Apply AI to normalize data, automatically merging or tagging duplicate or redundant records for deletion.
- Establish exception routing rules that push only non-standard or high-risk items to counsel for review.
- Measure results using decision latency and rework rates rather than the volume of tasks completed.
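As a concrete sketch, the exception-routing step in the list above might look like the following. The record fields, sample items, and the 0.7 risk threshold are illustrative assumptions for this article, not Filevine's implementation:

```python
from dataclasses import dataclass

@dataclass
class MatterItem:
    item_id: str
    risk_score: float     # 0.0 (routine) to 1.0 (high risk)
    standard_terms: bool  # True if the item matches approved templates

RISK_THRESHOLD = 0.7  # assumed policy value, set by the legal team

def route_for_review(items):
    """Return only non-standard or high-risk items for counsel;
    everything else proceeds automatically."""
    return [item for item in items
            if item.risk_score >= RISK_THRESHOLD or not item.standard_terms]

queue = [
    MatterItem("C-101", 0.2, True),   # routine: proceeds automatically
    MatterItem("C-102", 0.9, True),   # high risk: routed to counsel
    MatterItem("C-103", 0.3, False),  # non-standard terms: routed to counsel
]
flagged = route_for_review(queue)
# [item.item_id for item in flagged] == ["C-102", "C-103"]
```

The point of the sketch is the shape of the rule, not the numbers: counsel sees two items instead of three, and the threshold lives in policy rather than in anyone's inbox.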
He cautions against seeing AI as an all-knowing replacement:
“Human judgment is, at least in our foreseeable lifetimes, irreplaceable. The finish line [for AI adoption in patent workflows] is when the signal from the noise gets separated in a highly consistent, highly reliable way for the human decision-maker. So when they get the information that they need actually to make a judgment on, they know that not only is that information accurate, but complete.”
– Ryan Anderson, CEO and Founder at Filevine
Anderson’s perspective reframes AI as a quality-control layer. Legal departments should treat confidence and traceability as first-class requirements; every AI-generated recommendation must include its confidence level and the sources it relied on. He likens this to scientific rigor: repeatable, transparent, and accountable.
To put this into practice, he advises that leaders:
- Define review thresholds that determine when automated recommendations can proceed and when human sign-off is required.
- Maintain audit logs of every AI interaction, including prompts, source material, and approval decisions.
- Institute data-quality checks so that systems surface fewer, more relevant alerts rather than more noise.
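A minimal version of the audit log Anderson describes, assuming a simple JSON Lines file and invented field names, could look like this:

```python
import json
from datetime import datetime, timezone

def log_ai_interaction(log_path, prompt, sources, recommendation, decision):
    """Append one auditable record per AI interaction as a JSON line.

    Field names are illustrative; a production system would also record
    user identity, model version, and confidence scores.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "sources": sources,            # the material the model relied on
        "recommendation": recommendation,
        "decision": decision,          # "accepted", "modified", or "rejected"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each interaction is a self-contained line, the log can later be filtered to answer governance questions such as which recommendations were rejected, and on what source material.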
When applied consistently, these principles yield measurable change. Legal teams spend less time searching and reconciling data, and more time making defensible, insight-driven decisions.
For large enterprises, Anderson believes this discipline — structuring chaos before scaling automation — will separate those who use AI effectively from those who experiment with it. Ultimately, Anderson’s insight is about focus. By using AI to structure cross-channel data, organizations reduce the noise that clouds decision-making and empower legal professionals to act faster, with greater context and confidence.
Automate Repetitive Legal Tasks to Unlock Strategic Capacity
Episode: Practical AI for In-House Patent Legal – with Kevin Ahlstrom of Meta
Guest: Kevin Ahlstrom, Associate General Counsel, Patents, Meta
Expertise: Patent Portfolio Management, AI-Enabled Legal Operations, Intellectual Property Strategy
Brief Recognition: Kevin Ahlstrom is Associate General Counsel for Patents at Meta, overseeing patent strategy and portfolio development across key technology areas. Before Meta, he managed global intellectual property strategy at Novartis. He holds a Juris Doctor from Brigham Young University and a Bachelor’s in Electrical Engineering from the University of Utah.
Ahlstrom’s work at Meta echoes Anderson’s philosophy of augmentation over automation, but begins with a different constraint: time. The hours spent deciding which patents to keep and which to let go, he argues, will in the very near term be dramatically reduced by AI.
His first breakthrough came through a custom-built invention review tool. By feeding long, unstructured disclosures into the system, Ahlstrom receives readable summaries that distill essential ideas and potential risks. “I can take very complex writing and put it into the tool,” he explains, “and it’ll spit out something straightforward for me to understand. I can make these decisions a lot faster on whether or not we should file an invention or not.”
Second, Ahlstrom notes that he uses AI as a collaborator rather than an assistant, favoring dialogue over delegation:
“I view it as an amplifier to my brain. I’m putting in shorter prompts, going back and forth, and it’s more of a conversation. Rather than ‘spit out this entire report for me,’ and then I just email it off, what it can do is say, summarize this report into three bullet points and help me understand what it’s saying.”
– Kevin Ahlstrom, Associate General Counsel in Patents at Meta
To make AI collaboration practical for invention review, Ahlstrom outlines a repeatable approach:
- Feed complete disclosures and related prior art references into the tool for context.
- Request concise summaries that reduce technical complexity to plain language.
- Extract claim deltas and key deviations automatically into structured tables.
- Route only exceptions — the disclosures that deviate from company norms — to attorneys for detailed review.
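The claim-delta step above could be sketched as a simple set comparison. The claim terms and baseline below are invented for illustration; in a real pipeline an AI model, not a hand-built list, would extract them from the disclosure text:

```python
def claim_deltas(baseline_terms, disclosure_terms):
    """Tabulate how a disclosure's claim terms deviate from a baseline,
    flagging it for attorney review only when a deviation exists."""
    added = sorted(disclosure_terms - baseline_terms)
    removed = sorted(baseline_terms - disclosure_terms)
    return {
        "added": added,
        "removed": removed,
        "needs_attorney": bool(added or removed),
    }

# Invented sample data
baseline = {"neural network", "wireless display"}
disclosure = {"neural network", "haptic feedback"}
row = claim_deltas(baseline, disclosure)
# row == {"added": ["haptic feedback"],
#         "removed": ["wireless display"],
#         "needs_attorney": True}
```

A disclosure with no deltas never reaches an attorney's queue, which is exactly the exception-only routing the list describes.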
This structured process converts review time into decision time. The AI handles summarization and comparison; the attorney applies expertise. The result is speed without compromise.
Beyond invention review, Ahlstrom has extended automation into portfolio strategy. He uses AI to scan public statements from executives — such as Mark Zuckerberg and Andrew Bosworth — and summarize Meta’s stated investment priorities. He then compares those insights to existing patent filings and asks the system to propose adjustments. Work that once took several days now takes less than an hour.
His approach allows legal to remain strategically aligned with business direction. Instead of reacting to filings, the legal team can proactively steer patent focus toward emerging technology priorities. For Ahlstrom, that’s what makes automation transformative: it changes not just workflow but influence.
Still, he insists that speed must never eclipse responsibility. Ahlstrom enforces strict input governance to prevent sensitive data from leaking into AI systems, even internally; before any data is entered, it undergoes manual review.
He also stresses the importance of defining acceptable accuracy levels. If AI is 80% correct, leaders must decide where that’s sufficient and where it’s not. He frames it as a policy question, not a technical one. To institutionalize this, he recommends:
- Categorizing workflows by risk, identifying which tasks can tolerate minor AI error and which require 100% accuracy.
- Applying confidence thresholds to each category, flagging results below acceptable limits for review.
- Keeping an auditable record of which recommendations were accepted, modified, or rejected to support transparency.
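Framed in code, the policy Ahlstrom describes reduces to a small lookup table. The categories and numbers below are assumptions for illustration, not Meta's actual tolerances:

```python
# Assumed policy table: each workflow category carries its own
# minimum acceptable model confidence.
THRESHOLDS = {
    "low_risk": 0.80,   # e.g., internal summaries: minor error is tolerable
    "high_risk": 1.00,  # e.g., filing decisions: effectively no AI error tolerated
}

def needs_human_review(category, confidence):
    """Flag any result that falls below its category's acceptable limit."""
    return confidence < THRESHOLDS[category]

assert needs_human_review("low_risk", 0.75) is True    # below 0.80: flagged
assert needs_human_review("low_risk", 0.85) is False   # clears the bar
assert needs_human_review("high_risk", 0.99) is True   # high risk: flagged
```

Keeping the table in configuration rather than buried in code lets the legal team adjust tolerances as a policy decision, which is exactly how Ahlstrom frames it.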
Looking forward, Ahlstrom envisions AI as the first stop for internal business clients seeking legal guidance. Routine inquiries could be triaged automatically, with attorneys focusing on high-value judgment calls.
But he is equally mindful of how this shift affects talent development. As AI absorbs repetitive tasks, younger lawyers may lose traditional training opportunities. His solution is to redefine junior roles around AI supervision — prompt design, model evaluation, and validation of system outputs — so they develop new skills while preserving legal reasoning.
Pairing that training with mentorship from more senior attorneys, and encouraging knowledge exchange between the two groups, turns potential disruption into institutional renewal.
For Ahlstrom, the strategic value of automation lies in time reallocation. Each task the system handles expands the human bandwidth available for leadership, innovation, and advisory work. The shift is not about working less, but about working at a higher level.
While Anderson and Ahlstrom approach the problem from different angles, both point to a shared solution: automating the repetitive work that consumes legal teams’ time and attention. For enterprise leaders, the takeaway is clear: automation is not about replacing attorneys but freeing them to focus on higher-value, strategic judgment. To act on it, leaders should:
- Start with high-volume, low-risk workflows where automation can safely accelerate output.
- Define accuracy, confidence, and review thresholds before allowing systems to operate autonomously.
- Measure impact by how much faster and more confidently decisions are made, not by model complexity.
- Rebuild junior roles around AI literacy and oversight, ensuring human expertise remains central to governance.
The differentiator, both argue, isn’t who deploys AI first but who automates best. Legal departments that use AI to eliminate manual repetition, structure data, and embed accountability into every output will scale human capability far faster than those treating automation as experimentation. By turning time saved into strategic focus, they move from reactive legal support to proactive business leadership.