SIGIL: A System Forged in Friction, by Conviction

fn main() {
    println!("Hello, world!");
}

I'm David Tofflemire. I'm a system builder. I'm also a felon, a neurodivergent adult learner, and someone who spent too many years trying to convince the world I was worth the second chance I was already giving myself.

This is where that stops.

Sigil isn't just a project. It's a message encoded in system architecture: that trust isn't something granted by legacy institutions—it’s something that can be enforced, measured, and proven. That redemption shouldn't be buried under bureaucracy. A past doesn't get to overrule a future when the work is already happening.

Originally, I set out to build a way for GPT-based models to run a D&D game. Something modular. Ethical. Fun. But like most things forged in focus, Sigil became something else entirely.

Sigil is a modular AI runtime. A framework for real-time logic enforcement, memory-bound reasoning, cryptographic policy validation, and canonical structure. It’s a system designed not to replicate the black-box thinking of modern AI—but to break it open and make every outcome accountable.

It doesn’t just log what it does. It explains why. It doesn't just allow access—it validates who is asking, how, and what level of authority they hold.

That’s not a pipe dream. That’s the code I’ve written.
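To make the shape of that concrete, here's a rough sketch in Rust. It is not Sigil source (none of the real code is public yet; see the comments below), and every name in it (Authority, Request, Decision, validate) is invented for illustration. It just shows the pattern: every request names its actor and claimed authority, and every decision carries its reason alongside its verdict.

// Hypothetical sketch only; none of these types exist in the Sigil repo.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum Authority {
    Observer,
    Operator,
    Admin,
}

struct Request<'a> {
    actor: &'a str,
    action: &'a str,
    authority: Authority,
}

struct Decision {
    allowed: bool,
    reason: String, // the "why" travels with the outcome
}

// Check a request against the minimum authority its action demands,
// returning a decision that explains itself either way.
fn validate(req: &Request, required: Authority) -> Decision {
    let allowed = req.authority >= required;
    Decision {
        allowed,
        reason: format!(
            "{} holds {:?}; action '{}' requires {:?}",
            req.actor, req.authority, req.action, required
        ),
    }
}

fn main() {
    let req = Request {
        actor: "dave",
        action: "rewrite_canon",
        authority: Authority::Operator,
    };
    let decision = validate(&req, Authority::Admin);
    // The audit trail records the reasoning, not just the verdict.
    println!("allowed={} because {}", decision.allowed, decision.reason);
}

The real runtime does far more than compare an enum, but the contract is the point: no silent denials, no unexplained approvals.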

And I didn’t do it in a university lab. I did it after anxiety attacks. After PTSD flashbacks. After scraping for hours of quiet between unstable housing, locked doors, and a justice system that wanted to freeze-frame me at 18.

But I’m not 18 anymore. I’ll be 40 this year. And Sigil is how I reclaim every year in between. Every lost job. Every gapped resume. Every time I was told, "You don't belong in this field."

You’re damn right I don’t.

I’m building a new one.

Sigil is an AI enforcement framework—but more than that, it’s an ideological platform. A declaration that public trust should be enforceable, transparent, and provable. That modular systems should serve humans, not corporate comfort zones. That AI systems don’t need to be black boxes to be powerful.

If you believe in redemption, in public ethics, in open systems that serve people—not gatekeepers—then Sigil isn’t just a tool.

It’s the beginning of something you might want to be part of.

I'm laying the groundwork now. Next comes funding. Community. Transparency. If you're reading this and thinking, "Hell yes," or even, "Wait—how?" then you're the kind of person I want near this system.

More to come.

- Dave

P.S.: I'm not asking for a seat at the table; in all honesty, I doubt one would ever be extended, anyway. Instead, I'm building a new table. Come, sit. Everyone willing to participate and contribute is welcome.
Labels: Sigil AI Runtime, Rule Zero, Codex Nexus, MMF Module, ethical AI, IRL reasoning layer, David Tofflemire, Rust LLM enforcement, audit trail logic, trust-based inference

Comments

  1. So what exactly does this mean? I mean, anyone can say they're building an AI, but you're just some dude. Your GitHub is almost empty, and nothing you've posted so far comes close to reflecting what you say you're building. I think it's just hype.

  2. Well, that's the thing. The public repo is indeed fairly empty. The code that's there is old and doesn't represent the current stage of development, either.
    I get it. If you want to see code, stick around; hell, sign the CLA and email it to me. Do that, and I'll know you won't leak what I have.
