
Hengeler Mueller News

Argus Eyes – The Blog on Internal Investigations, Crisis Management and Compliance

On the responsibility of all organs of the constitutional state to use AI, using internal investigations as an example

How AI is fundamentally changing compliance practice

The capabilities and practical success of large language models, other foundation models, and further concepts of generative AI give rise to a rule-of-law responsibility of all institutions and stakeholders of the constitutional state to use these models. Applied correctly, artificial intelligence enables a better, faster, and more cost-effective determination or reconstruction of the facts to be legally assessed, an understanding of the abstract legal position, and, at least in the medium term, the subsumption of those facts under the relevant legal rules, thereby strengthening the enforcement and resilience of the rule of law. This is particularly evident in fields of law characterized by unknown, complex facts and by unclear, contradictory (and often cross-border) legal situations.

One such field is compliance and internal investigations: artificial intelligence has already significantly changed the expectations that the institutions of the constitutional state place on companies and their compliance departments. Where keyword lists and manual document review once dominated, AI-enabled tools now analyze enormous volumes of data more comprehensively, quickly, accurately, and cost-effectively. The spectrum ranges from retrospective analyses such as forensic accounting, SAP evaluations, or fraud and corruption detection to preventive compliance monitoring, vendor and partner due diligence, web screenings, and M&A and joint-venture screenings. At the same time, companies face mounting regulatory pressure, and liability risks are increasing, for example in acquisitions, exports, and technical cooperation. From today's perspective, traditional manual compliance reviews are too incomplete, too slow, too unsystematic, and not scalable. This development is accompanied by a shift in the expectations of external stakeholders within the rule-of-law framework: authorities, courts, policymakers, and business partners no longer view compliance AI merely as a technical add-on but increasingly as a standard instrument of a professional and effective compliance organization. This will lead to a de facto obligation to use appropriate AI tools within the bounds of what is legally permissible. In high-risk areas, it is to be expected that fields previously interpreted restrictively (such as data protection and labor protection) will in the future be handled with fewer restrictions, without crossing the boundaries of what is permitted.

This affects retrospective compliance reviews just as much as preventive compliance measures. A central challenge at present lies in safeguarding companies’ capacity for innovation while simultaneously introducing appropriate, risk‑adequate compliance screenings. Companies also face the task of making the use of AI tools for compliance not only more efficient but also auditable, legally sound, and transparently documented. Conversely, risk‑based and targeted AI screenings can reduce the typical paralysis risks of classical large‑scale compliance (such as overblown rulebooks and prohibitions).
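To illustrate what "auditable, legally sound, and transparently documented" use of AI tools might mean in practice, the following is a minimal sketch of an audit record for a single screening run. All field names, values, and the JSON-based archiving are assumptions for illustration, not a legal or technical standard:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Illustrative sketch: a minimal record of one AI-supported screening run,
# capturing the details a later audit would need to reconstruct what was
# done, on which data, by which model, and who validated the results.
@dataclass
class ScreeningRunRecord:
    tool_name: str        # which AI tool was used (hypothetical name below)
    model_version: str    # exact model/version, for reproducibility
    data_scope: str       # description of the data sources reviewed
    question: str         # the precise, reproducible review question
    legal_basis: str      # e.g. the data-protection basis relied on
    reviewer: str         # human expert who validated the results
    findings_count: int = 0
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_log(self) -> str:
        # Serialize to JSON so the run can be archived and reviewed later.
        return json.dumps(asdict(self), sort_keys=True)

record = ScreeningRunRecord(
    tool_name="doc-screening",        # hypothetical tool name
    model_version="model-v1.2",       # hypothetical version tag
    data_scope="2023 vendor invoices",
    question="Do any invoices show round-sum payments to new vendors?",
    legal_basis="legitimate interest (illustrative)",
    reviewer="compliance counsel",
    findings_count=3,
)
print(record.to_audit_log())
```

The design point is that the question, the model version, and the responsible human reviewer are recorded together, so each run remains traceable long after the fact.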

AI does not become effective for compliance purposes by itself. What matters is connecting the right, and in particular the relevant proprietary, raw data sources with a software tool suited to the specific use case, on the basis of precise, reproducible questions embedded in sensible workflows. Effective use of AI for preventive compliance requires both a fundamental understanding of AI and, above all, domain ownership in the form of deep compliance and subject-matter expertise, together with knowledge of the causes of non-compliance across the different compliance areas. A keyword-based (first-level) compliance review will gradually become dispensable.
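The combination described above (relevant raw data, a suitable tool, and a precise, reproducible question within a workflow) can be sketched in a few lines. Here a simple deterministic rule stands in for the AI model that would answer the question in practice; the document contents, question identifier, and rule are all hypothetical:

```python
from typing import Callable

# Sketch of a screening workflow: raw data, a reproducible question, and a
# "tool" that answers the question per document are combined in one step,
# and every hit is recorded together with the question that produced it.
def screening_workflow(
    documents: dict[str, str],          # raw data: document id -> text
    question_id: str,                   # identifies the reproducible question
    answer: Callable[[str], bool],      # the tool: answers the question per document
) -> list[tuple[str, str]]:
    # Returning (question, document) pairs keeps each finding traceable.
    return [
        (question_id, doc_id)
        for doc_id, text in sorted(documents.items())
        if answer(text)
    ]

docs = {
    "inv-001": "Payment approved without second signature.",
    "inv-002": "Routine invoice, dual control observed.",
}
# A keyword rule stands in here for an AI classifier (illustrative only).
hits = screening_workflow(
    docs,
    "Q1-missing-dual-control",
    lambda text: "without second signature" in text,
)
print(hits)  # [('Q1-missing-dual-control', 'inv-001')]
```

Because the question is fixed and named, the same run can be repeated on new data and its results compared, which is what makes the workflow reproducible rather than ad hoc.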

What will not become dispensable are power users who can combine tools, workflows, and raw data with meaningful questions. In addition, the results of AI-supported reviews must continue to be validated by highly qualified subject-matter experts. Classical reviewers and junior staff play a supporting role here (AI as the "first layer," humans as the "second layer"). Only a combined power-user team of compliance experts and IT specialists can translate compliance requirements into tailored workflows. By doing so, organizations not only generate efficiency gains but also meet the rising rule-of-law expectations for responsible, traceable, and effective compliance.
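The two-layer setup (AI as the "first layer," human experts as the "second layer") can be sketched as a simple triage step. The scores and the threshold are illustrative assumptions, not outputs of any real tool:

```python
# Sketch of two-layer review: an AI first layer scores items, and everything
# at or above a risk threshold is routed to a human expert second layer for
# validation, while the rest is cleared.
def triage(items: dict[str, float], threshold: float) -> tuple[list[str], list[str]]:
    """Split AI-scored items into an expert review queue and a cleared list."""
    expert_queue = sorted(i for i, score in items.items() if score >= threshold)
    cleared = sorted(i for i, score in items.items() if score < threshold)
    return expert_queue, cleared

# Hypothetical model scores per document.
ai_scores = {"doc-a": 0.92, "doc-b": 0.10, "doc-c": 0.65}
queue, cleared = triage(ai_scores, threshold=0.6)
print(queue)    # ['doc-a', 'doc-c'] go to human validation
print(cleared)  # ['doc-b'] cleared by the first layer
```

The human second layer does not disappear from the loop; the first layer only narrows the set of items that demands expert attention.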