DeepSeek’s flagship chatbot took the world by storm at the beginning of this year, and its meteoric rise to the top of the app store wasn’t just hype. DeepSeek is the canary in the coal mine: a warning that continuing down the status quo path is the wrong move, and a signal in the market that the future of AI is open source.
Across industries, enterprise companies are building AI and machine learning teams, with roles focused entirely on the usage and proliferation of deep learning models and tools. These teams all share a similar concern: Can we move fast enough?
Some companies will fall behind because they cannot keep pace with the latest developments and the rapid advancement of AI innovation. There may be excessive red tape or security, or too many legacy systems and disparate data sources to integrate. Maybe internal leaders simply don’t see the value of working so hard to stay ahead in an area where ROI is hard fought and can take a long time to prove out.
The only way for businesses to keep up and move fast enough is with open source.
Open source in intelligence-first applications
AI has moved into its next era. Foundational models have gone multi-modal. They can be large or small, open, and composable, and, most recently, they’ve become more agentic, showing steady progress in weighing ideas, planning, and approximating human reasoning. Yet their rapid growth means users have to adjust quickly to move successfully into this next era. That means working with trusted models and collaborating across teams to align on business goals.
As these models progress, we’re witnessing the birth of intelligence-first applications. Insight Partners defines these apps as those that position AI within applications as a true collaborator: intelligence-first amplifies human reasoning rather than trying to mimic or replace it.
Intelligence-first apps are paving the way for the next stack, an evolution that includes foundational multi-modal models, ML/LLM ops, modern data fusion, and more. Insight Partners notes a handful of archetypes underneath the intelligence-first umbrella:
- Deep AI apps solve the more complex, domain-specific problems. Think Profluent’s OpenCrispr, which has trained an LLM on proteins and RNA to develop an open-source AI gene editor. This solution can help address healthcare challenges and improve research and development opportunities.
- Co-pilots function like their namesakes on a plane. These apps are designed to offer support and aid decision-making while the primary pilot—in this case, the user—remains in control. NormAI is an example of a co-pilot, automating compliance analysis through AI agents. Regulatory compliance can be a tricky subject with dense language, so having a co-pilot to help navigate it is valuable.
- Autopilots are designed to work entirely independently. And it’s not just the basics. Autonomous intelligence in this setting might pull out and analyze key information from omnichannel customer support conversations or even call customers without picking up a phone.
- AI coworkers work alongside humans; this is the first glimpse of actual collaborative intelligence. These AI coworkers have the ability to reason and have cognitive outputs as they learn from the world around them. An example of this application is a virtual accountant that can organize all financial data asynchronously and autonomously.
- AI + human work fabric is the next frontier, and it will redefine how computing integrates and maintains human and AI collaboration. Maybe you’ve seen Matthew McConaughey in a variety of Agentforce ads—that’s one instance of this new work fabric where AI will interact and collaborate with human teams. This level will likely include a shift in Software-as-a-Service models, as we reevaluate how to store, manage, and analyze data and records as humans and AI work more closely together.
With how rapidly these shifts are happening, open source is the only way to stay ahead. Nothing else offers the same speed and flexibility, or the same ability to iterate and experiment. It removes the hurdles that often come with lengthy purchase orders or negotiations. The open-source community simply wants products to work effectively, and its collaboration delivers quick, impactful results. While foundation models provide a great baseline, open innovation will help put them over the top, benefiting all parties involved.
Putting AI to work for you
Last year, only 10% of generative AI models were domain-specific, relating to a particular industry or business function. Per Gartner, that number will rise to more than 60% by 2028. Similarly, we’ll move from 5% of virtual assistants using a domain-specialized language model to 95% of VAs doing the same in 2030.
That growth is a strong indicator of AI’s highest value. These tools are at their best when designed to accomplish specific tasks, actions, or goals.
When implementing new projects or initiatives, focus your efforts on these key areas to bring open source into your AI tech stack without creating chaos. For instance, in my role as CPTO, we’re using AI to drive efficiencies across every function. It’s helping enhance workflows in JIRA, writing product requirements documents, and aiding in research. We’ve set up clear tasks for our tools to accomplish. We’ll double down on what works and remove what distracts us from our business goals.
Have a clear use case of what you’re solving for
AI requires a lot of experimentation. Just as many organizations underestimate how long planning and design take, many believe an AI project can be stood up relatively quickly, and that’s typically not true.
However, having a clear use case of what you’re solving for can help. What do you hope to accomplish, and why is AI the best tool for that?
In many situations, AI can help bring disparate customer data or disjointed services together to deliver stronger impact. I’ve also seen teams utilize it for tooling and iterating. A team member might use GitHub Copilot to say what interface they want and build an application from that. Data scientists shouldn’t also have to be engineers, and AI is making it easier to improve internal work, as well as external outputs like customer engagement.
Heathrow Airport is an excellent example of AI services streamlining customer engagement. With 14 websites and 45 back-end systems, managing all the airport’s data was tremendously difficult. Heathrow moved its systems into one platform while offering multiple touchpoints for customers, from online forums to OpenAI chatbots. The bots addressed thousands of extra questions per month, dropping employee call time by 27%.
If that sounds like a larger undertaking than you’re ready for, look for quicker wins in implementing AI. For example, try simplifying more complicated company language for sales enablement tools or use AI to inform management decision-making by classifying employee metrics and security tool data.
Internally aligning on the end goal of any AI implementation makes measuring success that much easier, as well. Being able to clearly show the results of an investment can lead to more internal buy-in and innovation.
Realize there’s no perfect organizational structure
Say it out loud with me: “No org structure is perfect.” Doesn’t that feel good to admit?
The most innovative AI ideas often get stuck at the intersection of organizational misalignment. If AI teams are experimenting in their own silos and product teams are marching to a different roadmap, there will be regular traffic jams (and the ensuing frustration that comes with them).
I believe there has to be some form of centralized AI within the company. However, the technicalities behind how you shape that process are less important than the alignment across teams. The key to any successful organizational structure is collaboration and communication.
It’s no coincidence that those are two of the foundations of open-source platforms.
Develop checks and balances
Having guardrails helps ensure these tools are approved and trustworthy. Companies are hiring AI specialists in increasingly niche areas, such as AI ethics and AI compliance.
An organization should be aligned on the approved open-source tools or platforms and make sure employees understand its policies across the board. Know the data that’s going into a model and any additional sources it might be using. Too often, a company might just focus on the end result when the entire supply chain is relevant.
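Knowing what goes into a model extends to the artifacts themselves. As a minimal sketch of supply-chain hygiene, the snippet below verifies a downloaded package or model file against a pinned SHA-256 digest before it is used; the file path and expected digest here are assumptions standing in for whatever your governance process records.

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Stream a file and compare its SHA-256 digest to the pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

A mismatch is a signal to stop and trace where the artifact actually came from, not merely a failed download to retry.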
We also regularly update each other on model performance. If a tool isn’t serving a team well, revisit how you approach it (or sometimes, choose another solution entirely).
Without these checks and balances in place, a company might have developers building on their local machines only to discover, when they try to push to production or runtime, that they’re using open-source software and packages that aren’t allowed.
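One way to catch this before deploy day is to gate dependencies in CI. The sketch below assumes approved packages are tracked in a simple allowlist (the names here are hypothetical); a real policy would likely live in a curated internal repository rather than in code.

```python
# Hypothetical allowlist; in practice this would come from your
# organization's curated internal repository or governance policy.
APPROVED = {"numpy", "pandas", "requests"}

def audit(requirements: list[str]) -> list[str]:
    """Return the packages in a requirements-style list that are not approved."""
    violations = []
    for line in requirements:
        # Drop environment markers ("; python_version < ...") and version
        # pins ("==1.2", ">=2.0") to isolate the bare package name.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
            name = name.split(sep)[0]
        name = name.strip().lower()
        if name and name not in APPROVED:
            violations.append(name)
    return violations
```

Run as part of CI so a disallowed dependency fails the build early, rather than surfacing as a blocked release.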
Knowing who to trust
As of this writing, there are 1.7 million models on Hugging Face. Going through all of those to find the perfect fit for your use case can feel a bit like trying to recover your favorite pair of sunglasses from the bottom of a lake. You might discover some interesting things along the way, but it’s a scary adventure when you’re blindly feeling around in the water.
Security is the most common concern around open-source tools, and it’s where IT leaders will find the most internal pushback. Those concerns aren’t entirely unfounded. There are certainly products out there that can cause more harm than good, whether intentionally or by accident.
For example, the malicious package “dbgpkg” on the Python Package Index (PyPI) hid a stealthy backdoor under the guise of a debugging tool. And a California man was caught stealing over a terabyte of confidential data after hacking into a Disney employee’s personal computer. The culprit posted a computer program that purported to create AI-generated art—it was actually a malicious file that granted access to people’s computers when they downloaded the program.
Finding trusted platforms among these malicious actors will be even more critical, with an ongoing need for curated model repositories that can wade through all the available options. Anaconda believes that simplifying and streamlining are the best ways to accelerate AI initiatives. That means a unified platform that combines trusted distribution, simplified workflows, real-time insights, and governance controls. And the results speak for themselves: Forrester’s Total Economic Impact report found the security and governance controls in the Anaconda AI Platform offer a 60% reduced risk of breaches from addressable attacks.
When evaluating your enterprise platform options, look for both fundamental capabilities and tools for collaboration. How well does the platform align with your business goals? It should help improve operational efficiency and optimize your decision-making process.
You’ll also want a platform with features such as data visualization, machine learning algorithms, and easy access from multiple programming languages. Finally, the right platform should be interoperable with your existing toolchains, follow security best practices like user access control and encryption of data at rest and in transit, and scale and evolve with shifting data volumes and needs.
Framework for accelerated AI value
The path to successful AI implementation follows a clear pattern we’ve observed across thousands of organizations:
- Establish the foundation with trusted, validated packages and artifacts
- Implement governance controls that balance security with innovation
- Build streamlined workflows with intuitive paths for practitioners
- Leverage actionable insights to continuously optimize your AI ecosystem
This framework ensures organizations can deploy with confidence, anywhere and everywhere, while achieving measurable ROI from their AI initiatives. By simplifying complexity and providing performance-optimized solutions for various workloads, enterprises can accelerate their AI journey without sacrificing security or reliability.
The best businesses don’t succeed with just one person. They thrive on innovation and great minds iterating with each other. That’s the crux of what open source was founded on—and it’s the recipe that makes it essential for the next tech stack.
Laura Sellers is the Chief Product and Technology Officer at Anaconda, where she leads the company’s product strategy and technological innovation. With over 25 years of experience in the technology industry, Laura has established herself as a visionary leader with exceptional expertise in scaling product and engineering teams.