SLB: Semantic causality and causal-AI, enabling discourse for trust in real-time root-cause analysis
Counterfactuals enable the exploration of solutions in causal-AI. In addition to counterfactual exploration of a causal-AI network, we introduce search-and-summary to support a rich discourse between the engineer and the AI system. Working on the operational risk of non-productive time (NPT), we begin from semantic cause-effect chains derived from root-cause analysis. These are interwoven into a primarily knowledge-driven causal-AI structure, and a large subset of the initial conditional probability tables is populated from physics models. That partially trained structure is then refined on scenarios from historical operations and on expert-driven hypothetical situations. In deployment, a behaviour-tree architecture enables a neurosymbolic combination of human and IoT inputs to establish the right risk model for the right context. The results are analyzed to identify the key paths through the causal network, recovering the specific semantic cause-effect chains that correspond to root causes. These are used in two ways: first, they can be passed to LLMs for summarization, where we use roundtrip verification and access to the underlying human-verified text; second, the causal-AI result drives a skyline multicriterion search across the historical and hypothetical cases. We present the details of the complete system and highlight the role that causal-AI combined with the semantic web can play in enabling discourse on operational risk.
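The abstract does not publish the network itself; as a minimal sketch of the kind of structure it describes, the following builds a toy cause-effect chain for an NPT scenario (the variables MudWeight, Kick, and StuckPipe are illustrative assumptions, not SLB's model), seeds the conditional probability tables with hand-set values standing in for physics-model output, and runs a what-if query with pgmpy. A full counterfactual would use a twin-network construction; a simple evidence-based query on the root cause stands in for it here.

```python
# Minimal sketch of a knowledge-driven causal network with physics-seeded
# CPTs; variable names and probabilities are illustrative, not SLB's model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Semantic cause-effect chain: mud weight -> kick -> stuck pipe (NPT event).
model = BayesianNetwork([("MudWeight", "Kick"), ("Kick", "StuckPipe")])

# CPT values are placeholders for numbers a physics model would supply.
cpd_mud = TabularCPD("MudWeight", 2, [[0.7], [0.3]])  # 0=adequate, 1=low
cpd_kick = TabularCPD(
    "Kick", 2,
    [[0.95, 0.60],   # P(Kick=0 | MudWeight=0), P(Kick=0 | MudWeight=1)
     [0.05, 0.40]],  # P(Kick=1 | MudWeight=0), P(Kick=1 | MudWeight=1)
    evidence=["MudWeight"], evidence_card=[2],
)
cpd_stuck = TabularCPD(
    "StuckPipe", 2,
    [[0.90, 0.50],
     [0.10, 0.50]],
    evidence=["Kick"], evidence_card=[2],
)
model.add_cpds(cpd_mud, cpd_kick, cpd_stuck)
assert model.check_model()

# What-if exploration: compare NPT risk under observed vs. hypothetical
# mud weight, the kind of query an engineer would pose in discourse.
infer = VariableElimination(model)
for mw in (0, 1):
    q = infer.query(variables=["StuckPipe"], evidence={"MudWeight": mw})
    print(f"MudWeight={mw}: P(StuckPipe=1) = {q.values[1]:.3f}")
```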
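The behaviour-tree architecture is only named in the abstract; a minimal sketch, assuming a priority-ordered selector over context conditions fed by human and IoT signals, might look like the following (the contexts, signals, and model names are hypothetical).

```python
# Minimal behaviour-tree sketch: a selector ticks condition/action pairs in
# priority order until one succeeds, choosing a risk model for the context.
# Contexts, signals, and model names are hypothetical, not SLB's.
from dataclasses import dataclass
from typing import Callable, Mapping

@dataclass
class Rule:
    condition: Callable[[Mapping[str, object]], bool]  # human/IoT context test
    risk_model: str                                    # model to activate

def tick_selector(rules: list[Rule], context: Mapping[str, object]) -> str:
    """Return the first risk model whose context condition holds."""
    for rule in rules:
        if rule.condition(context):
            return rule.risk_model
    return "baseline_npt_model"  # fallback leaf

rules = [
    Rule(lambda c: c.get("operation") == "tripping", "stuck_pipe_model"),
    Rule(lambda c: c.get("engineer_flag") == "losses", "lost_circulation_model"),
    Rule(lambda c: float(c.get("flow_delta", 0.0)) > 0.1, "kick_detection_model"),
]

# Neurosymbolic combination: IoT readings plus a human-entered flag.
print(tick_selector(rules, {"operation": "drilling", "flow_delta": 0.15}))
```

The selector pattern makes the human input and the sensor stream interchangeable leaves, which is one straightforward way to read the "right risk model for the right context" claim.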
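Roundtrip verification of LLM summaries against human-verified text is named but not specified. One plausible minimal form, assuming an `llm` callable and a grounding check (both placeholders), is to summarize a recovered cause-effect chain, extract the claims back out, and reject summaries whose claims are not found in the verified source.

```python
# Hypothetical roundtrip check: summarize, extract claims back, and keep the
# summary only if every claim is grounded in human-verified source text.
# `llm` is a placeholder for any completion function; the substring test is
# a naive stand-in for a real entailment model.
from typing import Callable

def roundtrip_verified_summary(
    chain: list[str],                      # semantic cause-effect chain
    verified_text: str,                    # underlying human-verified text
    llm: Callable[[str], str],
) -> str | None:
    summary = llm("Summarize this cause-effect chain:\n" + " -> ".join(chain))
    claims = llm("List each factual claim in this summary, one per line:\n"
                 + summary).splitlines()
    grounded = all(claim.strip().lower() in verified_text.lower()
                   for claim in claims if claim.strip())
    return summary if grounded else None   # reject ungrounded summaries
```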
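Finally, a skyline (Pareto-front) search keeps every case that no other case dominates on all criteria. A minimal sketch over hypothetical case records, scored on criteria such as NPT hours and distance from the live causal-AI result (both field names are illustrative assumptions), follows.

```python
# Skyline (Pareto) filter: keep cases no other case dominates on all criteria.
# Field names and the 'lower is better' criteria are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    npt_hours: float        # cost of the non-productive-time event
    causal_distance: float  # mismatch vs. the live causal-AI result

def dominates(a: Case, b: Case) -> bool:
    """a dominates b if it is no worse on both criteria and better on one."""
    no_worse = (a.npt_hours <= b.npt_hours
                and a.causal_distance <= b.causal_distance)
    better = a.npt_hours < b.npt_hours or a.causal_distance < b.causal_distance
    return no_worse and better

def skyline(cases: list[Case]) -> list[Case]:
    return [c for c in cases
            if not any(dominates(other, c) for other in cases)]

# Mixed historical and hypothetical cases; W-088 is dominated and dropped.
cases = [Case("W-103", 12.0, 0.2), Case("W-217", 4.0, 0.6),
         Case("H-004", 6.0, 0.3), Case("W-088", 14.0, 0.7)]
print([c.case_id for c in skyline(cases)])
```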
-
Michael Williams, Principal AI Research Scientist, SLB