AI Systems
When AI Systems “Dream”: A Failure of Architecture, Not Models
Many AI failures labeled as hallucinations are actually the output of coherent systems operating without sufficient grounding or constraint. The fix is architectural, not model-based.
Agents Are Actors With Intent, Not Guarantees
AI agents are not a new category of system. They are actors with intent operating inside existing architectures, reintroducing familiar distributed-systems constraints under probabilistic behavior.
Ghost in the Machine: Adversarial Priors in AI Systems
Large language models learn the statistical structure of the text they are trained on. Because written language overrepresents conflict, persuasion, and strategic reasoning, the prior embedded in modern AI systems is not neutral.