Ohad Elhelo, Ori Cohen, Co-Founders
AI agents can work either on behalf of users or on behalf of entities. The two groups are materially different. Users want their problems solved; entities care how. Users appreciate creativity; entities want consistency. Users are okay with "please avoid…"; companies would rather launch nothing than beg a prompt to behave.
The two groups of agents require two different technologies. Agents operating on behalf of users are perfectly fine powered by an LLM alone. Agents operating on behalf of companies need to couple natural language with deterministic constraints. They require a neuro-symbolic approach.
When this insight isn’t clear, the sad path looks like this:
A team wraps an LLM in a prompt, connects it to a knowledge base, adds some retrieval logic, and launches a prototype. It falls apart in production because the underlying architecture was never designed to actually do things on behalf of a business: it cannot adhere to policies absolutely, nor can it update the system of record reliably enough. Then the team must choose between two bad options: routing every query through predesigned workflows (losing flexibility), or adding more exclamation marks to the LLM system prompt (losing control).
When you’re resolving a billing dispute or processing a cancellation, the last thing you need is more bullet points in your system instructions. Alternatively, manually building a workflow pipeline is the surest way to lose your faith in the machines, surrendering your architecture to the questionably intelligent cognitive core of a state machine with ten modes and a few symbols.
The solution, of course, comes from neuro-symbolic computation that accomplishes both deterministic constraints and natural flexibility. You need the neural for language: fluency, ambiguity, the mess of real conversation. And you need the symbolic for guarantees: state, rules, enforcement. That’s the thesis behind Apollo-1. That’s what we’ve spent eight years building. That’s the third approach.
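To make the division of labor concrete, here is a minimal sketch of the pattern in Python. This is not Apollo-1's actual architecture; all names (`SymbolicState`, `guard`, `neural_propose`) are illustrative, and the "neural" layer is a stub standing in for an LLM. The point is the shape: the neural side turns messy language into a structured action, and a symbolic side holds state and deterministically enforces policy before anything touches the system of record.

```python
# Conceptual sketch only: a deterministic policy layer gating LLM-proposed
# actions. All names are hypothetical, not from Apollo-1 or Quack.
from dataclasses import dataclass

@dataclass
class SymbolicState:
    """Symbolic side: explicit, inspectable state the rules run against."""
    refund_issued: bool = False
    refund_cap: float = 50.0

# Each rule: (predicate over (state, action), rejection reason).
# Rules are code, so enforcement is absolute, not "please avoid…".
RULES = [
    (lambda s, a: not (a["type"] == "refund" and a["amount"] > s.refund_cap),
     "refund exceeds cap"),
    (lambda s, a: not (a["type"] == "refund" and s.refund_issued),
     "refund already issued"),
]

def guard(state: SymbolicState, action: dict) -> tuple[bool, str]:
    """Deterministic gate: every rule must pass, or the action is refused."""
    for ok, reason in RULES:
        if not ok(state, action):
            return False, reason
    return True, "approved"

def neural_propose(utterance: str) -> dict:
    """Stand-in for the neural layer: maps free-form language to a
    structured action. In a real system this is where the LLM lives."""
    if "refund" in utterance.lower():
        return {"type": "refund", "amount": 120.0}
    return {"type": "noop", "amount": 0.0}

state = SymbolicState()
action = neural_propose("This arrived broken, I'd like a refund please")
allowed, reason = guard(state, action)
print(allowed, reason)  # the guard refuses: 120.0 exceeds the 50.0 cap
```

The design choice worth noticing: the LLM never writes to the system of record directly. It only proposes; the symbolic layer disposes, and its refusals are reproducible and auditable in a way prompt instructions never are.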
Most (all?) of the companies in the AI agent space are variations on the first two themes. Which is why, when we came across Quack AI, their thinking and approach were refreshing.
Quack AI took a different path to the same conclusion.
Nadav Kemper and Aviram Roisman founded Quack in 2023. Fifteen excellent people. They raised $7 million. And in about two years, they got to dozens of global paying customers across industries — Artlist, Yotpo, WalkMe, Hologram, others — with AI agents running in production, handling real customer service at scale.
They did it by building something non-obvious. Quack built a symbolic representation — in code — of the context their agents needed in order to handle complex, specialized topics, the kind of edge cases that make most systems fall over.
They took it as far as you can take it without a complete neuro-symbolic computation. Now they’re joining forces with us to power this for everyone, with Apollo-1 underneath.
We really liked the technical team, and their leader Aviram. The unorthodox approach; the instinct to look at the dominant paradigm, say "this isn't sufficient," and then actually ship something better. We liked that when we showed them Apollo-1, they immediately understood where it all goes from here.
Today we announced that AUI has acquired Quack AI. The entire Quack team is joining us in full force, doubling our Israeli R&D site in Tel Aviv. Quack’s existing customers will continue to receive full, uninterrupted service, now backed by AUI’s infrastructure, resources, and technology.
To the Quack Team — welcome to AUI. It may be a matter of weeks, or it could take a few months. The neuro-symbolic way is the only way, and the deep-learning-only mafia can stack soul.md files as high as they want. The robots don't need a soul; they need a control panel.