It’s the problem that everyone with a passing knowledge of generative artificial intelligence (GenAI) worries about: how to stop the technology from going rogue and delivering bogus results. Enkrypt AI hopes it has an answer. The Boston-based start-up, which is today announcing the completion of a $2.35 million funding round, aims to stamp out bad behaviour by AI chatbots – swearing, making things up or even recommending competitors.
“There are so many enterprises out there that are excited by the potential of GenAI and large language models (LLMs) for their business but worried about going live because of how it will increase their exposure to risk,” says Sahil Agarwal, co-founder and CEO of Enkrypt.
Those anxieties continue to grow as new – and very public – horror stories of GenAI going wrong surface. Air Canada, for example, has spent recent weeks defending a lawsuit from a customer whom its chatbot had wrongly told would be entitled to a cheap fare following a bereavement; it lost. Chevrolet, meanwhile, is just getting over the embarrassment of its chatbot being gamed into recommending vehicles from Tesla.
“We’re seeing this happen over and over again,” says Agarwal, who co-founded Enkrypt in 2022 with fellow Yale PhD Prashanth Harshangi. The two men have been working with LLM technologies for a number of years and point out that problems such as hallucinations – when chatbots produce completely erroneous answers – are not a new thing. “It’s just that the huge wave of interest in GenAI is really bringing these issues to the fore,” Agarwal says.
“As the benefits of AI become ever more tangible, so do the risks,” adds Harshangi, now the company’s chief technology officer. “Our platform does more than just detect vulnerabilities; it equips developers with a comprehensive toolkit to fortify their AI solutions against both current and future threats.”
Enkrypt’s approach has been to develop a piece of software that sits between a company’s LLM and its users – whether that’s customers, employees or another constituency. The software constantly checks the interaction between the LLM and the user with the aim of identifying problematic behaviours. The interaction can then be shut down before any harm is done.
The technology is based on “detectors” – tests that Enkrypt has designed into its software to home in on particular issues. With hallucinations, for example, the software compares the LLM’s answer to a given question with the answers it has given when similar questions were asked in the past; where discrepancies appear, this suggests there is a problem. “Compulsive liars never give the same answer to a question twice,” as Agarwal puts it.
Importantly, moreover, the tests check interactions in both directions. That enables the software to monitor user behaviours too, seeking to identify attempts to breach security and other threats.
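The consistency check described above can be illustrated with a minimal sketch. This is not Enkrypt’s implementation – the class name, thresholds and use of crude lexical similarity are all assumptions for illustration – but it captures the idea: cache past question–answer pairs, and flag a new answer that diverges from what the model said when asked something similar before.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two strings, in [0.0, 1.0].
    A production system would use semantic (embedding-based) similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


class ConsistencyDetector:
    """Hypothetical hallucination detector: flags an LLM answer as suspect
    when it diverges from answers previously given to similar questions —
    the 'compulsive liars never give the same answer twice' heuristic."""

    def __init__(self, question_threshold: float = 0.8,
                 answer_threshold: float = 0.6) -> None:
        # Thresholds are illustrative guesses, not tuned values.
        self.history: list[tuple[str, str]] = []
        self.question_threshold = question_threshold
        self.answer_threshold = answer_threshold

    def check(self, question: str, answer: str) -> bool:
        """Return True if the answer is consistent with past answers
        to similar questions; False if a discrepancy is detected."""
        consistent = True
        for past_q, past_a in self.history:
            if similarity(question, past_q) >= self.question_threshold:
                # Similar question asked before: answers should broadly agree.
                if similarity(answer, past_a) < self.answer_threshold:
                    consistent = False
        self.history.append((question, answer))
        return consistent
```

A gateway sitting between the LLM and the user could call `check()` on each response and shut the interaction down when it returns `False` – mirroring the intercept-and-block flow the article describes.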
“Enkrypt AI is building solutions to address growing needs around AI compliance, privacy, security and metering,” explained analysts at Red Hat.
One early Enkrypt customer, the CEO of a technology company building enterprise LLMs, makes the same point. “What’s most interesting for us is the security that our customers are concerned about and to be able to give them the ability to say, you know, we created a large language model,” he says. “We just like to audit its use and who has access.”
Enkrypt is at an early stage in its development but will shortly publish research showing that it can reduce problems with LLMs by a factor of 10. It hopes to build on these findings with a series of partnership projects at enterprises putting GenAI projects into the field, further developing the product in the process. While the company is currently pre-revenue, Agarwal believes it can reach an annual run-rate of $1 million in the next six months.
Not surprisingly, this is a field where plenty of people are focused on providing solutions, but Agarwal says Enkrypt’s approach will solve a much broader set of GenAI issues than the products that others are developing. “Enterprises tell us they don’t want a series of point solutions,” he says. “They want a single solution that will give them the confidence to deploy their GenAI applications and begin taking advantage of the undoubted opportunities.”
Investors in the business are hopeful Enkrypt can steal a march. Today’s round is led by BoldCap, with participation from Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, Builders Fund and a number of angel investors from the AI, healthcare and enterprise sectors.
“Enterprise security is non-negotiable,” says Sathya Nellore Sampat, general partner at BoldCap. “With the explosive growth of GenAI and LLM usage within companies, the attack surface has dramatically increased. Enkrypt is the command centre to control, monitor and have visibility across GenAI initiatives.”