Research

Published Work

Published research for LKM products in which formal validation is part of the product itself: governance benchmarks, cognitive augmentation results, and related infrastructure work.

Governance Infrastructure

Defense in Depth

Joint Paper · March 2026

Defense in Depth: AI Governance Through Dual-Pipeline Architecture

Melissa K. Pinkston · LKM Constructs · Patent Pending

Combines PatternWall v5.0 pre-inference interception with Sensus v4.0 post-inference evaluation in a bidirectional feedback loop. Tested across Claude Opus 4.6, GPT-5.2, and Grok 4.1 on 35 adversarial red-team turns and 120 CyberGym exploit tasks. Zero overlap between the layers (BOTH_CAUGHT = 0) confirms that they cover complementary, non-overlapping threat surfaces.
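The dual-pipeline pattern can be sketched as a simple gate: one check before inference, one after. Everything below is an illustrative placeholder; the function names, pattern lists, and markers are invented for this sketch, and the actual PatternWall and Sensus detection logic is proprietary and not shown.

```python
# Hypothetical sketch of a dual-pipeline governance gate.
# pre_filter/post_evaluate are stand-ins, NOT the real PatternWall/Sensus logic.

def pre_filter(prompt: str) -> bool:
    """Layer 1 stand-in: intercept known adversarial prompt patterns."""
    blocked_patterns = ["ignore previous instructions", "exploit payload"]
    return any(p in prompt.lower() for p in blocked_patterns)

def post_evaluate(output: str) -> bool:
    """Layer 2 stand-in: flag harmful content in the model's output."""
    harmful_markers = ["shellcode", "privilege escalation"]
    return any(m in output.lower() for m in harmful_markers)

def governed_call(prompt: str, model) -> str:
    """Run a model call through both governance layers."""
    if pre_filter(prompt):            # caught pre-inference
        return "[blocked pre-inference]"
    output = model(prompt)
    if post_evaluate(output):         # caught post-inference
        return "[blocked post-inference]"
    return output
```

A BOTH_CAUGHT count of zero means no input that the pre-filter blocks would also have been flagged post-inference: each layer handles threats the other cannot see.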

76.7–98.3% Combined Detection
3 Frontier Models
0 BOTH_CAUGHT Overlap
+14.3pp Feedback Loop Gain
Benchmark Results

v5 PatternWall + v4 Sensus

Model           | Raw (No Governance) | Combined Pipeline | True Misses
Claude Opus 4.6 | 5.7% / 3.3%         | 91.4%† / 98.3%    | 3 RT / 2 CG
GPT-5.2         | 25.7% / 81.7%       | 82.9% / 98.3%*    | 6 RT / 2 CG
Grok 4.1        | 0.0% / 3.3%         | 85.7%† / 93.9%*   | 5 RT / 6 CG

* Adjusted for empty responses per Section 4.2.1 of the joint paper.
† With bidirectional feedback loop active (+14.3pp Opus, +2.8pp Grok).
RT = Red Team (35 adversarial turns); CG = CyberGym (120 exploit tasks).
Preprint · February 2026

PatternWall: Constitutional Governance Middleware for AI Safety

Melissa Pinkston · LKM Constructs

A pre-filter architecture for model-agnostic adversarial detection. PatternWall intercepts prompts before they reach an AI model, enforcing governance as deterministic infrastructure rather than relying on model-level training or provider-specific tuning.

100% Attack Detection
96.4% Hard Block
5 Models Tested
0% False Positives
White Paper · February 2026

Sensus: Model-Agnostic AI Governance Through Multi-Dimensional Content Evaluation

Melissa K. Pinkston · LKM Constructs · Patent Pending

A post-inference governance engine that evaluates AI outputs across five weighted dimensions to detect harmful content that bypasses model-native safety. Benchmarked across five frontier models on 1,507 CVE exploit tasks and 28 multi-turn adversarial campaigns. Demonstrates that infrastructure-level governance is necessary because model-level safety is unreliable, inconsistent, and provider-dependent.
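The multi-dimensional evaluation idea can be illustrated as a weighted score with a block/allow threshold. The dimension names, weights, and threshold below are invented for illustration only; the actual Sensus dimensions and weighting are proprietary.

```python
# Illustrative sketch of weighted multi-dimensional content scoring.
# Dimension names and weights are placeholders, not the real Sensus dimensions.

WEIGHTS = {
    "harm": 0.30,
    "deception": 0.20,
    "exploit": 0.25,
    "privacy": 0.15,
    "policy": 0.10,
}  # weights sum to 1.0

def governance_score(dimension_scores: dict) -> float:
    """Weighted sum of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

def verdict(dimension_scores: dict, threshold: float = 0.5) -> str:
    """Block the output when the aggregate score crosses the threshold."""
    return "block" if governance_score(dimension_scores) >= threshold else "allow"
```

Because the aggregate is a weighted sum rather than a single binary classifier, moderately elevated scores on several dimensions can trip the threshold even when no single dimension does.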

85.7–96.4% Effective Governance
5 Frontier Models
1,507 CVE Tasks Tested
0 Regressions
Cognitive Infrastructure

Inference-Time Augmentation

Research Paper · March 2026

Parallax: Inference-Time Cognitive Enhancement Across Seven Foundation Models

Melissa K. Pinkston · LKM Constructs · Patent Pending

Multi-module cognitive augmentation middleware operating at inference time. Evaluated across seven foundation models from four providers using a 38-task benchmark battery with dual-judge blind scoring. Six of seven models showed measurable cognitive lift with no fine-tuning, weight modification, or model-specific training required.
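Inference-time augmentation without weight modification can be sketched as a chain of modules that transform the prompt before the model sees it. The module names and transformations below are hypothetical placeholders; the actual Parallax modules are proprietary.

```python
# Hypothetical sketch of inference-time augmentation middleware: modules rewrite
# the prompt in sequence; model weights are never touched.
# Module names are illustrative, not the actual Parallax modules.

def decompose(prompt: str) -> str:
    """Placeholder module: prompt the model to break the task into steps."""
    return prompt + "\nBreak the problem into steps."

def verify(prompt: str) -> str:
    """Placeholder module: prompt the model to check its own reasoning."""
    return prompt + "\nCheck each step before answering."

MODULES = [decompose, verify]

def augmented_call(prompt: str, model) -> str:
    """Apply each augmentation module, then run ordinary inference."""
    for module in MODULES:
        prompt = module(prompt)
    return model(prompt)
```

Because the middleware only rewrites inputs at inference time, the same module chain can sit in front of any provider's model, which is what makes a seven-model, four-provider evaluation possible without per-model training.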

+0.69 Strongest Model Lift (GPT-4.1)
7 Foundation Models Tested
38 Benchmark Tasks
96.2% Inter-rater Agreement

Published benchmarks demonstrate results without revealing proprietary implementation details.