The application layer contains our own low-code/no-code Agentic Automation Platform, which is used and configured via the UI. There, you can set up specific use cases and applications, test them directly, and, if desired, make them available via API for integration into your workflow tools.
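As an illustration of that API route, the following minimal sketch shows how a configured use case could be invoked from an external workflow tool. The endpoint path, payload fields, and authentication scheme are hypothetical placeholders, not the platform's actual interface.

```python
import requests

# Hypothetical endpoint of a use case configured in the Agentic Automation Platform.
# Path, payload schema, and auth scheme are illustrative placeholders only.
API_URL = "https://platform.example.com/api/v1/use-cases/invoice-triage/run"
API_TOKEN = "<token>"  # issued via the platform's authorization concept

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"input": {"document_id": "INV-2024-0815"}},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. the structured result produced by the configured agent
```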
The trust layer of the stack includes an enterprise-grade authorization concept that allows the objects created in the foundation layer to be managed and assigned accordingly. Logging, compliance gateways, and our inference engines are also located there as core components.
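To make the authorization concept more tangible, here is a deliberately simplified sketch of how access to platform objects might be checked against assigned roles. The role names, object types, and permission model are illustrative assumptions and do not reflect the platform's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative role-based access check; roles, actions, and object types are assumptions.
@dataclass
class Role:
    name: str
    permissions: set[str] = field(default_factory=set)  # e.g. {"knowledge_base:read"}

@dataclass
class User:
    name: str
    roles: list[Role] = field(default_factory=list)

def is_authorized(user: User, action: str, obj_type: str) -> bool:
    """Return True if any of the user's roles grants `obj_type:action`."""
    required = f"{obj_type}:{action}"
    return any(required in role.permissions for role in user.roles)

analyst = User("analyst", [Role("kb_reader", {"knowledge_base:read"})])
assert is_authorized(analyst, "read", "knowledge_base")
assert not is_authorized(analyst, "delete", "knowledge_base")
```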
The stack is built directly at the infrastructure level on Kubernetes and object storage, and is supplemented by PaaS components. These include the processing, management, and configuration of proprietary knowledge databases, connections to third-party systems, and the integration of language models.
We possess all the skills and foundational technologies to develop Cognitive Intelligence without any third-party dependencies. Our Cognitive Control Unit forms the structural backbone that transforms the raw thoughts of language models into powerful and controllable AI systems. While we generally prefer to work with standard models, we can, when necessary, generate our own refined training data with SynthIOS and apply it to open-weight foundation models using LoRA fine-tuning.
Aside from creating our own foundation models (an endeavor for which we see no strategic necessity, given the availability of sufficiently powerful open-weight models with permissive licenses), we thus possess all the essential capabilities and tools to independently advance Cognitive Intelligence as the global scaling path for Powerful AI.
To make thinking and action-oriented AI strategically controllable, we have reimagined its fundamental architecture—not in models, but in systems. We achieve this by placing our proprietary Cognitive Control Unit alongside language models as a “thinking center” that deconstructs, analyzes, and verifies the model’s hypotheses. The result is logically traceable and well-reasoned thought paths.
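Conceptually, the interplay between the language model and the Cognitive Control Unit can be pictured as a generate-decompose-verify loop. The following sketch illustrates that control flow only; the function names and verification logic are hypothetical and do not represent the actual Cognitive Control Unit.

```python
from dataclasses import dataclass

@dataclass
class Step:
    claim: str
    verified: bool
    evidence: str

def control_loop(task: str, llm, verifier) -> list[Step]:
    """Illustrative control flow: the LLM proposes a hypothesis, the control unit
    deconstructs it into individual claims and verifies each one, producing a
    traceable thought path. `llm` and `verifier` are assumed callables."""
    hypothesis = llm(f"Propose a solution for: {task}")
    claims = [c.strip() for c in hypothesis.split(".") if c.strip()]  # naive decomposition
    path = []
    for claim in claims:
        ok, evidence = verifier(claim)  # e.g. rule check, retrieval, or tool call
        path.append(Step(claim, ok, evidence))
    if not all(step.verified for step in path):
        # a real system would re-prompt, repair, or escalate; here we just flag it
        raise ValueError("Hypothesis contains unverified claims")
    return path
```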
The quality of training data has a decisive influence on the behavior of AI systems. Instead of relying on external sources, we generate our own high-quality training data with our open-source pipeline, SynthIOS. This ensures that the underlying models operate on the best and most relevant knowledge base possible and are free from unwanted biases. We are continuously advancing our data pipelines.
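The following sketch shows the general shape of such a synthetic-data pipeline: seed topics are expanded by a teacher model, and the results are filtered before they become training examples. It is a generic illustration using assumed helper functions, not the actual SynthIOS interface.

```python
import json

def generate_training_data(seed_topics, teacher_llm, quality_filter, out_path):
    """Illustrative synthetic-data loop: expand seed topics into question/answer
    pairs with a teacher model and keep only pairs that pass a quality filter.
    `teacher_llm` and `quality_filter` are assumed callables, not SynthIOS APIs."""
    examples = []
    for topic in seed_topics:
        question = teacher_llm(f"Write a challenging domain question about: {topic}")
        answer = teacher_llm(f"Answer step by step, showing the reasoning: {question}")
        example = {"instruction": question, "response": answer}
        if quality_filter(example):  # e.g. deduplication, length, factuality heuristics
            examples.append(example)
    with open(out_path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")  # JSONL for fine-tuning
    return examples
```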
These three pillars—a revolutionary architecture, sovereign data generation, and the capability for precise and efficient finetuning—are more than just a collection of individual components: they merge into a seamless value chain for the future scaling of AI. Full access to these strategic tools puts us in a position to independently build AI systems that are not only powerful but also transparent, secure, and tailored to your needs from the ground up—without any technological dependencies on third parties. In geopolitically turbulent times, this is a critical aspect of sovereignty.
To further advance our mission, we are focused on optimizing the central levers of our technology. In doing so, we are pushing the boundaries of what is possible, without ever losing sight of security and reliability. At our subsidiary ACSL, we conduct fundamental research to evolve today’s model-centered AI architectures into architecturally grounded “thinking machines.” This effort is based on our innovative Leibniz-von Neumann architecture—a specific design for composite system structures that leverages the strengths of language model technology while overcoming its structural limitations with respect to safety, reliability, and human control.
A controllable architecture deserves precise language. To ensure that our systems communicate not only with logical correctness but also with excellence in their specific domain, we adapt open-weight language models using LoRA (Low-Rank Adaptation). In addition, we are researching innovative methods to significantly extend the context window (by a factor of 2, 4, or even 8). In doing so, we are deliberately enhancing the capabilities of EU AI Act–compliant open-source models, making them suitable and adaptable for business applications.
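As a pointer to how such an adaptation looks in practice, the sketch below attaches LoRA adapters to an open-weight model using the Hugging Face peft library. The base model and hyperparameters are placeholder assumptions; the concrete choices depend on the domain.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder open-weight base model; the concrete choice depends on the domain.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# LoRA injects small trainable low-rank matrices into selected projection layers,
# so only a fraction of the parameters is updated during domain adaptation.
lora_config = LoraConfig(
    r=16,                 # rank of the low-rank update matrices
    lora_alpha=32,        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```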
Our research agenda pursues a clear, strategic goal: we are making controllable AI more intelligent, more accessible, and more powerful. By deepening the reasoning capability of the architecture, maximizing its efficiency on any hardware, and specifically enhancing the abilities of compliant open-source models, we are actively shaping the next generation of sovereign AI systems.
This is how we ensure that our technology is not only leading today but will also set the standards for trustworthy artificial intelligence tomorrow.