AI That Acts Must Also Be Able to Decide Traceably
Artificial intelligence has made enormous progress in recent years. Large language models (LLMs) produce text that is barely distinguishable from human writing, and agent systems automate even complex processes. But with growing autonomy comes growing responsibility: systems that execute actions must ensure that their decisions are permissible, appropriate, and compliant with the applicable rules. This is precisely where the challenge, and the opportunity, lies: the transition from model-centric to system-centric thinking.
So far, the discussion around AI safety has often focused on the model itself: Is it robust? Does it hallucinate? Was it trained fairly? These questions are important, but they are insufficient for systems that trigger concrete actions, because there is a qualitative difference between a generated text and a real-world action. A text can merely mislead; an executed action has real consequences, legally, economically, and ethically. Anyone who wants to use AI to control processes therefore needs more than intelligence in the sense of text generation. They need decision-making capability: the ability to decide while taking rules, norms, and contextual knowledge into account.
Why LLMs and Agents alone do not guarantee secure action capability
Language models like GPT, Claude, or Mistral are impressive. Yet their foundation remains stochastic: they operate based on statistical probabilities—not on the basis of structured logic or methodical judgment. They cannot reliably check whether a certain action is permissible in a given context. They can suggest, imitate, extrapolate—but not decide normatively.
Agent-based systems controlled by LLMs exhibit a similar problem: they operate in loops, observing their environment and planning next steps. Yet these steps are still based on stochastic probabilities, not on a structured review of facts and constraints. The result can be drifting behavior: even minor deviations in the input, context, or model state can produce unpredictable or undesirable actions. In safety-critical contexts, such as public administration, industry, medicine, or energy supply, this is unacceptable.
In short: what is missing is a dedicated control layer that explicitly checks actions for permissibility before they are executed.
Decision-AI as the Control Unit for Action-AI
The solution lies in separating, and at the same time strategically connecting, two AI functions: deciding and acting. Action-capable AI, for example in the form of agents, should not decide on its own what it is allowed to do. Instead, it requires a dedicated Decision-AI that acts as an upstream control layer. Only actions that this layer deems permissible may be executed.
With e1, we introduce such a Decision-AI: it has deductive reasoning capabilities and can systematically apply rules, regulations, and contextual information to reach decisions that are verifiable and secure. Unlike LLMs, e1 does not rely on probabilities but on logically structured reasoning, even under incomplete knowledge. This makes it the central control unit in agentic systems.
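To make the idea of a central control unit concrete, here is a minimal Python sketch of the general pattern: the agent proposes an action, the decision layer rules on it, and only permitted actions are executed. The names `ActionProposal`, `Decision`, and `guarded_execute` are hypothetical and serve illustration only; they do not reflect e1's actual interface.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Callable, Protocol


class Outcome(Enum):
    PERMIT = "permit"      # action may be executed
    DENY = "deny"          # action is blocked
    ESCALATE = "escalate"  # action is routed to a human reviewer


@dataclass
class ActionProposal:
    """An action the agent would like to perform, plus its context."""
    name: str                                        # e.g. "approve_invoice"
    parameters: dict[str, Any]                       # e.g. {"amount": 12_500}
    context: dict[str, Any] = field(default_factory=dict)


@dataclass
class Decision:
    outcome: Outcome
    reasons: list[str]  # which rules determined the outcome, for traceability


class DecisionLayer(Protocol):
    """Upstream control unit: rules on proposals before they run."""
    def decide(self, proposal: ActionProposal) -> Decision: ...


def guarded_execute(proposal: ActionProposal,
                    decision_layer: DecisionLayer,
                    execute: Callable[[ActionProposal], Any],
                    escalate: Callable[[ActionProposal, Decision], Any]):
    """Execute the proposal only if the decision layer permits it."""
    decision = decision_layer.decide(proposal)
    if decision.outcome is Outcome.PERMIT:
        return execute(proposal)
    if decision.outcome is Outcome.ESCALATE:
        return escalate(proposal, decision)
    return decision  # denied: nothing is executed
```

The crucial design choice is that the agent never calls `execute` directly; every proposal passes through the decision layer first.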
How e1 enables secure action capability
e1 checks intended actions not just syntactically or semantically, but logically and normatively. For example: an AI agent wants to approve an invoice. e1 checks against predefined rules (e.g., budget limits, the four-eyes principle, supplier status) whether this action is permissible in the specific situation. Only if all conditions are met does e1 grant approval; otherwise, the action is denied or escalated to a human reviewer.
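Continuing the illustrative types from the sketch above, such a rule check could look roughly like this. The thresholds, field names, and rule wording are invented for illustration and are not e1's rule format.

```python
FOUR_EYES_THRESHOLD = 10_000  # hypothetical limit above which a second approver is required


class InvoiceApprovalRules:
    """Illustrative decision layer for an 'approve_invoice' proposal."""

    def decide(self, proposal: ActionProposal) -> Decision:
        p = proposal.parameters
        reasons: list[str] = []

        # Hard constraints: violating any of them blocks the action outright.
        if p["amount"] > p.get("remaining_budget", 0):
            reasons.append("amount exceeds the remaining budget")
        if p.get("supplier_status") != "approved":
            reasons.append("supplier is not an approved vendor")
        if reasons:
            return Decision(Outcome.DENY, reasons)

        # Four-eyes principle: large amounts require an independent second approver.
        if p["amount"] >= FOUR_EYES_THRESHOLD and not p.get("second_approver"):
            return Decision(Outcome.ESCALATE,
                            ["four-eyes principle: second approver required"])

        return Decision(Outcome.PERMIT, ["all approval conditions are met"])
```

Passing `InvoiceApprovalRules()` as the decision layer to `guarded_execute` would then block or escalate a non-compliant approval before the agent ever executes it.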
This architecture has several advantages:
- Prevention over Correction: Actions are checked before execution, not just evaluated in hindsight.
- Rule Transparency: Decisions are traceable because they are derived from explicit rules.
- Systematic Safeguards: Even with changing models or contexts, normative control remains constant.
- Auditability: Every decision can be documented and later reviewed, a crucial factor for compliance and governance (a minimal audit record is sketched after this list).
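How such documentation might look in practice is sketched below, again using the illustrative Python types from the earlier sketches; the log format and field names are assumptions, not e1's audit mechanism.

```python
import json
from datetime import datetime, timezone


def audit_record(proposal: ActionProposal, decision: Decision) -> str:
    """Serialize one ruling as a JSON line for an append-only audit log."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": proposal.name,
        "parameters": proposal.parameters,
        "outcome": decision.outcome.value,
        "reasons": decision.reasons,  # the rules that determined the outcome
    })


# Example: append every ruling to a log file for later compliance review.
# with open("decision_audit.log", "a") as log:
#     log.write(audit_record(proposal, decision) + "\n")
```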
Thus, e1 provides the logical control layer that keeps agent behavior on a normative track. The result is an AI system that is not only intelligent but also controllable and trustworthy.
The Path to Trustworthy Autonomy
Anyone who lets AI systems act autonomously without integrating upstream decision logic risks unpredictability. Language models and agents are powerful, but they are not normatively reliable. The key to trustworthy, secure AI deployment lies in the structured combination of decision-making and action capabilities.
e1 represents a paradigm shift: from probabilistic output generation to normatively verified action. Not every conceivable action is permitted, only those that can be deductively derived from rules, context, and objectives.
In an era where AI is being embedded in social, legal, and economic processes, this controllability is not optional. It is a necessity, both strategically and operationally.