How do Cognitive Conclusion Models work in real-world applications?

The best way to understand the behavior of CCMs is through a concrete example where precise, consistent, and robust AI reasoning is essential—specifically across long, technically demanding chains of thought.

Case Study

Case Review for a Child Benefit Application (Kindergeld)

Important Note: This type of case review and reasoning requires adherence to numerous rules—creating long chains of reasoning. In the “token space,” these long chains lead to a combinatorial explosion—which makes handling such cases with standard LLMs a significant challenge in practice.

Task / Objective

Evaluate and resolve this case while complying with all Policies.


Case Description

The son (19 years old) lives in his parents’ household. In February, he voluntarily terminated his vocational training (Retail Clerk) during the 3rd month of his apprenticeship. During March and April, he continued working at the same company on a temporary contract basis (marginal employment, 12 hours/week, €520). As of May, he is attending a private vocational school on a part-time basis (2 days of on-site attendance + 3 days of self-study) with the goal of subsequent retraining as an Occupational Therapist. The school is not officially recognized by the state but offers preparation for an external state examination. Child benefit is being applied for from March to July.

Policies

  • POL-101: Vocational training is present if the child is in an orderly training relationship—this includes school-based training.
  • POL-102: Dropping out of training ends the entitlement unless a new eligible measure follows immediately.
  • POL-103: A non-recognized school may exceptionally be taken into account if there is serious preparation for a state-recognized degree.
  • POL-104: Marginal employment does not affect eligibility, provided there is no full-time principal occupation.
  • POL-105: Part-time schooling forms can count as vocational training if regular instruction takes place within the core curriculum.
  • POL-106: A transition period of max. 4 months between two training sections is eligible for funding if the subsequent training is firmly planned and prepared.
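To make the policy logic concrete, here is a minimal sketch that encodes the six policies as simple predicate checks and applies them month by month. All field names and the rule logic are deliberate simplifications for illustration; this is neither the CCM's actual implementation nor a legal assessment.

```python
# Hypothetical encoding of POL-101 through POL-106 as predicate checks.
# The fact dictionaries and the simplified rule logic are illustrative only.

def eligible(facts: dict) -> tuple[bool, str]:
    """Return (eligible?, policies that decided it) for one month."""
    if facts["status"] == "transition":
        # POL-106: gap of at most 4 months, next training firmly planned.
        ok = facts["months_since_dropout"] <= 4 and facts["next_training_planned"]
        # POL-104: marginal employment is harmless without a full-time job.
        ok = ok and not facts.get("full_time_job", False)
        return ok, "POL-106/POL-104"
    if facts["status"] == "school":
        # POL-105: part-time schooling counts with regular instruction.
        # POL-103: a non-recognized school needs serious exam preparation.
        ok = facts["regular_instruction"] and (
            facts["recognized"] or facts["serious_exam_prep"])
        return ok, "POL-105/POL-103"
    # POL-102: dropout without an immediately following eligible measure.
    return False, "POL-102"

school = {"status": "school", "recognized": False,
          "serious_exam_prep": True, "regular_instruction": True}
case = {
    "March": {"status": "transition", "months_since_dropout": 1,
              "next_training_planned": True},
    "April": {"status": "transition", "months_since_dropout": 2,
              "next_training_planned": True},
    "May": school, "June": school, "July": school,
}

for month, facts in case.items():
    ok, rule = eligible(facts)
    print(f"{month}: {'eligible' if ok else 'not eligible'} ({rule})")
```

Even this toy version shows why the chains get long: every month needs its own fact set, and several policies interact in each verdict.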

What is this example about?
It is about correctly solving a given task while observing normative constraints—i.e., regulations and rules. In this scenario, the system supports a public agency by performing a legal pre-assessment for a child benefit application. This requires step-by-step derivation, meaning real “cognitive work”.

The task demands precise verification of the entitlement periods against a complex policy logic, specifically in the transition phase between the two training periods. A single false logical step (e.g., overlooking the 4-month deadline) leads to an incorrect decision notice.

Important Disclaimer: This example is illustrative
The case described here and the subsequent explanations serve to illustrate the working method of CCMs. However, the functional principle of CCMs is in no way limited to “Child Benefit.”

CCMs operate domain-agnostically.
Quite the opposite: CCMs "think" across domains, drawing on general world knowledge and universally valid deductive reasoning. Individual, organization-specific knowledge can easily be connected as an external source, without the need for extensive training. If you already have individually trained LLMs, we can integrate them seamlessly.

Contact our Solution Team if you would like to learn more about the different integration scenarios.

A Clear Input Structure:
The Foundation for High-Quality Reasoning

We have designed both the Conclusion UI and the Conclusion API to use a structured input schema. While not a strict technological necessity, structured input significantly improves the model's performance in real-world scenarios AND simplifies handling and integration for human users.

Task / Objective

Case Description

Policies

For standardized operations, you naturally do not enter these Policies manually. CCMs are designed to integrate seamlessly into your existing knowledge architecture. Policies and facts (the “context”) are dynamically injected into the model:

The model uses these sources not merely as inspiration, but as binding constraints for the reasoning process.
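The injection step can be pictured with a small sketch: policies are pulled from an existing knowledge source at request time rather than typed in. `POLICY_STORE`, `build_request`, and the field names are hypothetical; only the Task / Case Description / Policies structure mirrors the schema described above.

```python
# Illustrative sketch of dynamic context injection. The store and the
# request-building function are assumptions, not the actual Conclusion API.

POLICY_STORE = {  # stand-in for a policy database or document system
    "kindergeld": [
        ("POL-101", "Vocational training is present if the child is in "
                    "an orderly training relationship."),
        ("POL-106", "A transition period of max. 4 months between two "
                    "training sections is eligible for funding."),
    ],
}

def build_request(task: str, case_description: str, domain: str) -> dict:
    """Assemble a structured request with dynamically injected policies."""
    policies = [{"id": pid, "text": text} for pid, text in POLICY_STORE[domain]]
    return {"task": task, "case_description": case_description,
            "policies": policies}

request = build_request(
    "Evaluate and resolve this case while complying with all policies.",
    "The son (19 years old) lives in his parents' household. ...",
    "kindergeld",
)
print([p["id"] for p in request["policies"]])
```

The point of the shape: policies arrive as identifiable, citable units, which is what later allows each reasoning step to reference the exact rule it applied.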

Structured Thinking is Based on
a Step-by-Step Approach

Before a CCM starts the actual solution process, it thoroughly analyzes the task and constraints, and devises a potential solution path.
Unlike LLMs, which immediately start “writing away,” the CCM first analyzes the task structure and the logical dependencies of the policies.

Safety First: If the CCM determines that the available information or rules lead to a contradiction, it aborts the process in a controlled manner or requests clarification. The principle is: Better no answer than a hallucinated, incorrect answer.
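The "better no answer" principle can be sketched as a pre-check that scans the injected rules for direct contradictions and aborts with a clarification request instead of guessing. The normalized-claim representation below is purely illustrative; a real check would compare the rules' actual logic.

```python
# Minimal sketch of a contradiction pre-check before reasoning starts.
# Claims are represented as (normalized statement, truth value) pairs,
# which is an assumption made for this illustration.

def precheck(assertions: list[tuple[str, bool]]) -> str:
    """Abort if the same normalized claim is asserted both ways."""
    seen: dict[str, bool] = {}
    for claim, value in assertions:
        if claim in seen and seen[claim] != value:
            return f"ABORT: contradictory rules about '{claim}'; clarification required"
        seen[claim] = value
    return "PROCEED"

print(precheck([("marginal employment permitted", True),
                ("part-time schooling counts", True)]))     # prints "PROCEED"
print(precheck([("marginal employment permitted", True),
                ("marginal employment permitted", False)])) # prints an ABORT message
```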

The Directed Evolution & Validation
of the Reasoning Process


The solution does not emerge from a “Black Box.” The CCM decomposes the complex problem into atomic, logical units. Each step builds upon the previous one:
  1. Fact Extraction: What data is available for the period?

  2. Rule Application: Does POL-106 (transition period) apply in this specific month?

  3. Interim Conclusion: Status determined for subsection X.

  4. Validation: Does this step violate a policy? -> Block.

  5. Verification: Is the logical derivation valid? -> Accept.

This linear, causal sequence prevents getting “lost” in the token space, which often leads to unstable results with standard LLMs.

This continuous self-monitoring (Self-Correction) ensures that errors do not propagate through the chain. The result is a drastically reduced hallucination rate.
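The five-step loop above can be sketched in a few lines: each candidate step is validated before it may enter the chain, so an invalid step is blocked instead of propagating. `Step`, `validate`, and `run` are illustrative names, not the CCM's internal API.

```python
# Hypothetical sketch of the validate-then-accept loop described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    kind: str                      # "fact", "rule", or "conclusion"
    claim: str
    policy_id: Optional[str] = None

def validate(step: Step, chain: list) -> bool:
    """Simplified policy/logic check for one candidate step."""
    if step.kind == "rule" and step.policy_id is None:
        return False               # a rule application must cite its policy
    if step.kind == "conclusion" and not chain:
        return False               # a conclusion must build on prior steps
    return True

def run(steps: list) -> list:
    chain = []
    for step in steps:
        if not validate(step, chain):
            raise ValueError(f"blocked: {step.claim}")   # -> Block
        chain.append(step)                               # -> Accept
    return chain

chain = run([
    Step("fact", "March-April: transition after dropout, marginal job only"),
    Step("rule", "POL-106 covers a gap of at most 4 months", "POL-106"),
    Step("conclusion", "March and April remain within the eligible period"),
])
print(len(chain), "steps accepted")
```

The design choice worth noting: the chain only ever contains accepted steps, so a later step can never silently build on a blocked one.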


Full Transparency & Auditability:
Complete overview at all times!


In the end, you receive not only the result of the reasoning chain (including justifications) but the complete Audit Trail. You can trace exactly why the model decided as it did:

  • Which policy did it reference?

  • Which fact was decisive?

  • What was the logical chain?

This level of transparency and granularity makes AI decisions audit-proof and traceable—a must for regulated industries.
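As a rough picture, an audit-trail entry answers exactly the three questions above: which policy, which decisive fact, which logical chain. The field names and the verdicts below are assumptions for this sketch, not the product's real schema or the authoritative resolution of the example case.

```python
# Illustrative shape of an audit trail; fields and verdicts are hypothetical.

audit_trail = [
    {"step": 1, "policy": "POL-106",
     "fact": "dropout in February; next training firmly planned",
     "chain": "gap of 2 months <= 4-month limit",
     "verdict": "eligible (March-April)"},
    {"step": 2, "policy": "POL-103",
     "fact": "non-recognized school with serious preparation for a state exam",
     "chain": "exception for serious preparation applies",
     "verdict": "eligible (May-July)"},
]

# Every decision can be replayed step by step:
for entry in audit_trail:
    print(f"step {entry['step']}: {entry['policy']} -> {entry['verdict']}")
```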

Why CCMs? Because "Probably Right" isn't Enough in Critical Application Scenarios.

LLMs are fantastic tools for creativity and assistance. But when it comes to robust conclusions in mission-critical business processes, high stochastic ambiguity within long chains of thought becomes an undesirable or even unacceptable risk.

CCMs close this "skill gap": they enable conclusions that correspond to real cognitive work at the level of human experts, with high precision, adherence to rules, and traceable justifications.

Now view the complete case, with all details, directly in the product:

You can now view the case shown above in its entirety, including every detail regarding derivations and justifications. No account or login is required. Simply click the link to start your journey of discovery.

Alternatively, you can create a free account for the Conclusion UI and take your first steps in a sandbox environment using your own case example.

If you need support, we are always here to help.

More CCM Action:

Feel free to take a look at further examples of CCM Conclusions

Our Conclusion Models are already proving themselves in a variety of productive use cases, all of which depend on high-quality, structured conclusions with the corresponding transparency and traceability. The following examples show a cross-section of different subject areas and process groups within the enterprise, in the context of automated case pre-processing.


Structure through Architecture

Discover the inner workings of CCMs

How language and functional reasoning interact
The first AI models with a functional reasoning architecture

CCMs are not merely “better LLMs,” as they possess a distinct, secondary representation space.

Reasoning within the linguistic space is structurally limited: the longer the chains of thought, the more unstable the results tend to become.

The explicit reasoning architecture of CCMs enables the structuring and stabilization of long chains of thought, thereby increasing the quality of conclusions—especially when normative requirements and regulations must be observed.

Access the Whitepaper

Enter your details and get instant access to our exclusive whitepaper.

Your data is safe with us.
