TAO TrueClaim™
Business
Total Claims (15d)
847
In period
Flagged (model)
8.2%
Reviewed
0
By adjusters
Confirmed fraud (reviews)
0
From reviewed only
Claim ID Created Date Risk Score Risk Band Human Label Assigned To Status
Dashboard
Monitor fraud detection performance and operational health.
📅 Filter by Created Date:
General Overview
Triage & Operations
TAO Trees Model Performance
Confirmed by claim adjuster
Human-verified fraud
Fraud count
0
Automatic by model
Model-flagged (RED) fraud
RED-flagged count
0
Daily high-risk flags (model) vs confirmed fraud (adjuster)
Total claims reviewed
0
Reviewed red-flagged
0
Reviewed amber
0
Reviewed green
0
Claims Reviewed by Claim Adjuster
HITL agreement by model band (reviewed vs agreed)
Average Review Time by Adjuster
Model performance vs ground truth (from claim adjuster reviews)
Based on reviewed claims only. Binary: model RED = fraud, model GREEN/AMBER = legit.
Precision (Hit Rate)
TP / (TP + FP)
Recall (Sensitivity)
TP / (TP + FN)
True positives (TP)
0
HITL Fraud, model RED
False negatives (FN)
0
HITL Fraud, model not RED
Confusion matrix
             Actual Fraud   Actual Legit
Pred RED     0 (TP)         0 (FP)
Pred not RED 0 (FN)         0 (TN)
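The band-to-binary mapping used for these metrics (model RED = fraud, GREEN/AMBER = legit) can be sketched as follows; the `(band, label)` pair format is an assumption for illustration:

```python
# Map adjuster-reviewed claims into the binary confusion matrix:
# model RED = predicted fraud, GREEN/AMBER = predicted legit.
def confusion_counts(reviewed):
    """reviewed: iterable of (model_band, human_label) pairs."""
    tp = fp = fn = tn = 0
    for band, label in reviewed:
        pred_fraud = band == "RED"
        is_fraud = label == "Fraud"
        if pred_fraud and is_fraud:
            tp += 1          # HITL Fraud, model RED
        elif pred_fraud:
            fp += 1          # HITL Legit, model RED
        elif is_fraud:
            fn += 1          # HITL Fraud, model not RED
        else:
            tn += 1          # HITL Legit, model not RED
    return tp, fp, fn, tn

print(confusion_counts([("RED", "Fraud"), ("AMBER", "Fraud"), ("GREEN", "Legit")]))
# → (1, 0, 1, 1)
```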
Unreviewed claims: model triage
Pending human review, by model band. When none are pending, “0 pending reviews” is still displayed.
Unreviewed by model band (count)
Counts by band
Loading…
📋 Claim Summary
🟢 GREEN — Low risk
🔍 Explainability Decision tree path · leaf peer analysis
🟢 GREEN — Low risk
-
🌳 Decision Path — each card below is one split node the claim passed through
🕸️ Leaf Peer Radar — all features analyzed, each line = one claim reaching this leaf
Fraudulent claim
Non-fraudulent claim
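A decision-path view of this kind can be reproduced with scikit-learn's `decision_path`; this is a sketch on synthetic data, and the platform's actual tree format may differ:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

claim = X[:1]                      # one claim's feature vector
path = tree.decision_path(claim)   # sparse matrix of visited nodes
leaf = tree.apply(claim)[0]        # leaf node the claim lands in

# Each visited split node corresponds to one card in the Decision Path view;
# the leaf's class counts correspond to the Leaf Peer analysis.
for node in path.indices:
    if node == leaf:
        print(f"leaf {node}: class counts {tree.tree_.value[node]}")
    else:
        f = tree.tree_.feature[node]
        thr = tree.tree_.threshold[node]
        side = "<=" if claim[0, f] <= thr else ">"
        print(f"node {node}: feature_{f} {side} {thr:.3f}")
```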
✏️ Human-in-the-loop correction

Provide the correct label and a short justification. This will be used for model retraining.

🔗 Similar claims (same node)

Claims that followed the same path and reached the same leaf node.

Select Dataset
Dataset Overview

Select a dataset to view its information

Feature Schema
Feature Name Missing (Train) Missing (Prod) Example Values
Current Fraud Rate
8.2%
Fraud Rate Trend
Model Performance Metrics
Data Drift (MVP)
Status: — Last run: —
Rows analyzed
Warnings
Critical
Distribution (reference vs current)
Top drifting features
Feature Layer Metric Value Status
Run a drift check to see results.
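A drift metric of the kind reported here can be sketched with a Population Stability Index over binned distributions; this is an illustration only, since the platform's actual drift metric is not specified:

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 warning, > 0.25 critical."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor each bucket to avoid log(0) on empty bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
psi_same = psi(rng.normal(0, 1, 5000), rng.normal(0, 1, 5000))
psi_shift = psi(rng.normal(0, 1, 5000), rng.normal(0.5, 1, 5000))
print(psi_same, psi_shift)   # small value vs clearly larger value
```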
Registered Models
Test Model with Custom Inputs

Same features as in the claims system. Fill the fields and click Score This Claim.

📋 Identification
💰 Financial
📉 Loss & Liability
🗂️ Policy & Category
🚗 Vehicle
👤 Claimant
📍 Location
Recent Activity
Rule Catalogue (Deployed Models)
User Access Management
User Role Last Active Status
Admin Admin 2025-03-26 14:32 Active
System Configuration
Using backend threshold settings.
System
Healthy
API + workers OK
Active Deployment
prod-motor-01
Last Dataset
run_0192
Artifacts Store

MinIO

Quick Actions
Upload Dataset
Both a CSV file and a YAML transformation config are required.
Dataset Registry
dataset_id name row_count feature_count created_at actions
Create Preprocessing Version
All Datasets & Preprocessing Counts
dataset_id name preprocessing_versions_count latest_preprocessing_id latest_status latest_created_at
ds_0f21 motor_claims_saudi_v1 2 pp_120a COMPLETED 2026-01-18
ds_77a8 motor_claims_es_v2 1 pp_88b2 COMPLETED 2026-01-06
Preprocessing Registry
preprocessing_id dataset_id strategy status output_feature_count created_at actions
pp_120a ds_0f21 robust COMPLETED 312 2026-01-18
pp_0c77 ds_0f21 baseline COMPLETED 280 2026-01-10
Launch a Training Run
⚙️ Advanced Configuration (Hyperparameter Grid YAML)
Leave empty to use the server default (train_coc.yaml). Paste a full YAML config below to override for this run only. Must contain license_key and hyperparams.
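An illustrative override of the kind described above — only the two required top-level keys (`license_key` and `hyperparams`) come from the text; the inner hyperparameter names are placeholders for your own grid:

```yaml
license_key: "YOUR-LICENSE-KEY"
hyperparams:
  max_depth: [3, 5, 8]
  min_samples_leaf: [10, 50]
  criterion: ["gini", "entropy"]
```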
Run Status & Progress
Status
Select run
Progress
jobs done / total
Current stage
run stage
best_model_id
Runs by Dataset
training_run_id run_name status best_model_id created_at actions
All Runs
run_id run_name status progress best_model_id best_f1 started actions
No runs yet. Queue a training run to see results here.
Single-Model Test
⚠️ Upload raw, untransformed data
Upload the CSV in the same format as your original training dataset — with the same column names and raw values. The system will automatically re-apply the fitted preprocessing pipeline (imputation, encoding, scaling) before running evaluation. Do not pre-scale or pre-encode the file.
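The raw-data requirement follows from how a fitted pipeline works: transformations learned on the training data are re-applied at evaluation time. A minimal scikit-learn sketch, assuming imputation and scaling steps like those named above:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Fit imputation + scaling on raw TRAINING data only.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])
X_train = np.array([[1.0, 10.0], [2.0, 20.0], [np.nan, 30.0]])
pipe.fit(X_train)

# At evaluation time the SAME fitted pipeline is applied to the raw
# upload — so the CSV must contain raw values, not pre-scaled ones.
X_eval_raw = np.array([[1.5, np.nan]])
print(pipe.transform(X_eval_raw))
# → [[0. 0.]] — median-imputed, then scaled with the training statistics
```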
Multi-Model Comparison
⚠️ Upload raw, untransformed data
Each model carries its own fitted preprocessing pipeline. Upload the raw CSV (same format as training). The system re-applies each model’s pipeline independently before scoring.
Results
             Predicted +   Predicted −
Actual +     TP 412        FN 41
Actual −     FP 89         TN 3,458
Accuracy
0.97
Precision
0.82
Recall
0.91
F1
0.86
FP rate
2.5%
FN rate
9.1%
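Each of these metrics can be recomputed directly from the matrix counts; a quick arithmetic check:

```python
tp, fn, fp, tn = 412, 41, 89, 3458
total = tp + fn + fp + tn

accuracy  = (tp + tn) / total                       # correct / all
precision = tp / (tp + fp)                          # hit rate on RED flags
recall    = tp / (tp + fn)                          # sensitivity to fraud
f1        = 2 * precision * recall / (precision + recall)
fp_rate   = fp / (fp + tn)                          # share of legit claims flagged
fn_rate   = fn / (fn + tp)                          # share of fraud claims missed

print(f"acc={accuracy:.2f} prec={precision:.2f} rec={recall:.2f} "
      f"f1={f1:.2f} fpr={fp_rate:.1%} fnr={fn_rate:.1%}")
```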
LLM Activation
Controls whether new claims generate and store AI-assisted narratives at inference time (default OFF). Existing claims are never re-evaluated.
Promote & Deploy
Rollback
Deployment History
event_id deployment_name event_type env model_ids preprocessing_id reason
▶ Run Inference
📦 Raw Features
🔀 Transformed Features
🔍 Explainability
✏️ Corrections (Relabel)
Create Inference
TrueClaim Raw Features
transaction_id claim_id stage received_at actions
Click Refresh or run an inference first.
TrueClaim Transformed Features
transaction_id claim_id stage feature_contract_hash created_at actions
Click Refresh or run an inference first.
TrueClaim Explainability
transaction_id claim_id pred pred_proba leaf_no deployment_id model_id preprocessing_id created_at actions
Click Refresh or run an inference first.
TrueClaim Relabel — Human-in-the-loop Correction
TrueClaim Corrections — Stored Human Corrections
claim_id transaction_id correct_label stage justification labeled_by labeled_at
Click Refresh or submit a correction first.
Retraining Configuration
When a new standardization is selected, the training mode is forced to New Training.
Recent Retraining Runs
Run ID Run Name Mode Standardization Status Created Actions
Loading…
Training Runs (MLflow-linked)
Candidate Models (MLflow-linked)
Registered Models
Open Alerts
Retraining Recommendation
Loading monitoring context…
NO DATA
Run monitoring to compute recommendation.
Model Registry Control
Select a model and register to MLflow Model Registry.
Candidate Metrics Comparison
Drift Alert Trend (Recent Monitoring Runs)
Production Performance Trend
MLflow-Linked Model Registry
model_id training_run_id mlflow_run_id registered_name version
Loading…
Open Monitoring Alerts
severity alert_type message created_at action
Loading…
Trace a Model
Model → training run → preprocessing → dataset version.
Example lineage:
model_id v3.3.0
run_id run_0192
preprocessing_id pp_120a
dataset_id ds_0f21 · version v5
artifacts leaderboards.json · coc_plots/ · rules.json · config.yaml
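Lineage like the example above can be resolved as a simple chain walk; the in-memory registry below is a hypothetical stand-in for the platform's metadata store:

```python
# Hypothetical registry mirroring the lineage example above.
REGISTRY = {
    "model":         {"v3.3.0":   {"run_id": "run_0192"}},
    "training_run":  {"run_0192": {"preprocessing_id": "pp_120a"}},
    "preprocessing": {"pp_120a":  {"dataset_id": "ds_0f21"}},
    "dataset":       {"ds_0f21":  {"version": "v5"}},
}

def trace(model_id):
    """Walk model → training run → preprocessing → dataset version."""
    run = REGISTRY["model"][model_id]["run_id"]
    pp = REGISTRY["training_run"][run]["preprocessing_id"]
    ds = REGISTRY["preprocessing"][pp]["dataset_id"]
    ver = REGISTRY["dataset"][ds]["version"]
    return {"model_id": model_id, "run_id": run,
            "preprocessing_id": pp, "dataset_id": ds, "version": ver}

print(trace("v3.3.0"))
```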
Artifact Downloads
All artifacts are immutable and checksum-addressed.
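Checksum addressing can be verified client-side; a sketch using SHA-256 (the platform's actual hash algorithm is an assumption):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_digest):
    """An immutable artifact is valid iff its digest matches its address."""
    return sha256_of(path) == expected_digest

# Demo: write a small artifact and check it against its own digest.
with open("demo_artifact.bin", "wb") as f:
    f.write(b"rules.json contents")
digest = sha256_of("demo_artifact.bin")
print(verify("demo_artifact.bin", digest))  # → True
```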
Create API Key
Keys are scoped to role and environment. Rotate regularly.
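Key generation of this kind is commonly done with a cryptographically secure token; a sketch using the standard library, with a purely illustrative key format:

```python
import secrets

def new_api_key(role, env):
    """Generate a scoped, URL-safe API key (illustrative prefix format)."""
    token = secrets.token_urlsafe(32)   # 256 bits of randomness
    return f"tc_{env}_{role}_{token}"

print(new_api_key("business", "prod"))
```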
Users & Roles
Business users see operational screens; technical users manage lifecycle operations.
user role last_login actions
claims_manager business 2026-01-21
mlops technical 2026-01-21
Heads up. These actions are irreversible. Deleting a dataset cascades to every standardization, training run, model, leaderboard and artifact attached to it. Use the Force delete toggle only when you intentionally want to wipe active deployments, inference audit rows and retraining links that reference the target.
Datasets
Deleting a dataset wipes its raw CSV, every standardization, every training run, every transformation, and the MinIO prefix for that dataset.
dataset_id name rows standardizations disk size blockers actions
Loading…
Standardizations (Preprocessings)
Deleting a standardization removes its pipeline artifacts and every training run that depends on it.
preprocessing_id dataset_id strategy status training runs disk size blockers actions
Loading…
Training Runs
Deleting a training run removes its models, leaderboard, COC data and all other artifacts under the run directory.
training_run_id dataset_id preprocessing_id name status models disk size blockers actions
Loading…