We build open source infrastructure for securing AI agents and the software supply chain. From kernel-level sandboxes to cryptographic provenance.
OS-level sandbox for AI agents. Uses kernel-level enforcement (Seatbelt on macOS, Landlock on Linux) to provide default-deny file, network, and process access control. There is no escape mechanism: unauthorized operations are structurally impossible. Agent-agnostic: works with Claude Code, OpenCode, Cursor, Aider, or any CLI tool.
brew install nono
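For illustration only, here is a minimal Python sketch of the default-deny idea using macOS's built-in sandbox-exec and an ad-hoc Seatbelt profile. The profile, paths, and helper are placeholders, not nono's actual policy or interface, and a working profile would need a few extra allowances for the dynamic loader.

```python
# Illustrative only: a default-deny Seatbelt profile applied with the
# built-in macOS sandbox-exec tool. Not nono's actual policy; the allowed
# paths below are placeholders.
import subprocess

PROFILE = """
(version 1)
(deny default)
(allow process-fork)
(allow process-exec)
(allow file-read* (subpath "/bin") (subpath "/usr/lib") (subpath "/System"))
(allow file-read* (subpath "/tmp/agent-workspace"))  ; hypothetical project dir
"""

def run_sandboxed(cmd: list[str]) -> int:
    """Run cmd under the profile: anything not explicitly allowed above is
    rejected by the kernel, regardless of what the command tries to do."""
    return subprocess.run(["sandbox-exec", "-p", PROFILE, *cmd]).returncode

if __name__ == "__main__":
    run_sandboxed(["ls", "/tmp/agent-workspace"])        # read access allowed
    run_sandboxed(["touch", "/tmp/agent-workspace/x"])   # denied: no write rule
```

The point of the default-deny shape is that the allowed set, not the denied set, is what you maintain: everything outside it is blocked by the kernel.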
Open-source project for securing the software supply chain through cryptographic signing, verification, and transparency. Enables keyless signing using short-lived certificates and maintains public transparency logs. Now being extended to AI agent provenance via sigstore-a2a for agent-to-agent communication.
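For a sense of the keyless workflow, here is a hedged sketch driving the Sigstore Python client's CLI (pip install sigstore) from Python; the artifact name, identity, and OIDC issuer are placeholders.

```python
# Sketch of keyless signing and verification with the sigstore Python CLI.
# Signing opens an OIDC flow, a short-lived certificate is issued for that
# identity, and the result is recorded in the public transparency log.
import subprocess

ARTIFACT = "release.tar.gz"  # hypothetical artifact

# Sign: no long-lived private key is generated or stored anywhere.
subprocess.run(["sigstore", "sign", ARTIFACT], check=True)

# Verify: check the signature and that it was produced by the expected
# identity through the expected OIDC issuer.
subprocess.run([
    "sigstore", "verify", "identity", ARTIFACT,
    "--cert-identity", "dev@example.com",           # placeholder identity
    "--cert-oidc-issuer", "https://accounts.google.com",
], check=True)
```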
Specialised dataset generation and model fine-tuning framework designed for training small language models (SLMs) to become capable agents. Combines reasoning traces with tool calling patterns and structured outputs for efficient multi-step workflows.
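As a hypothetical example of what a single training sample might look like, the field names and tool schema below are illustrative assumptions, not the framework's actual format.

```python
# Hypothetical training sample combining a reasoning trace, a tool call,
# and a structured final output. All field names here are illustrative.
import json

sample = {
    "messages": [
        {"role": "user", "content": "What is the weather in Berlin tomorrow?"},
        {
            "role": "assistant",
            # Reasoning trace kept separate from the user-visible answer
            "reasoning": "Need a forecast, so call the weather tool with city and date.",
            "tool_calls": [
                {"name": "get_forecast", "arguments": {"city": "Berlin", "date": "tomorrow"}}
            ],
        },
        {"role": "tool", "name": "get_forecast", "content": {"high_c": 21, "rain": False}},
        {
            "role": "assistant",
            # Structured output the SLM is trained to emit after the tool result
            "content": {"summary": "Mild and dry", "high_c": 21, "rain_expected": False},
        },
    ]
}

# Samples like this are typically serialized one-per-line (JSONL) for fine-tuning.
print(json.dumps(sample))
```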
Prototype for secure and verifiable interactions between AI agents. Establishes time-bound, capability-based access control backed by decentralized identity and verifiable credentials, so autonomous agent operations are cryptographically auditable.
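To illustrate just the time-bound capability check (the actual prototype uses decentralized identifiers and verifiable credentials, not a shared HMAC key), here is a standard-library sketch with hypothetical names.

```python
# Time-bound capability sketch: an HMAC over a JSON payload stands in for
# the real credential signature. All names and keys are placeholders.
import hashlib, hmac, json, time

SECRET = b"shared-demo-key"  # placeholder for the issuer's signing key

def issue_capability(agent_id: str, actions: list[str], ttl_s: int) -> dict:
    """Grant a set of actions to an agent for a limited time window."""
    payload = {"agent": agent_id, "actions": actions, "exp": time.time() + ttl_s}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def allows(cap: dict, action: str) -> bool:
    """Accept only if the signature checks out, the capability has not
    expired, and the requested action was explicitly granted."""
    body = json.dumps(cap["payload"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(cap["sig"], hmac.new(SECRET, body, hashlib.sha256).hexdigest())
    return ok_sig and time.time() < cap["payload"]["exp"] and action in cap["payload"]["actions"]

cap = issue_capability("did:example:agent-a", ["read:calendar"], ttl_s=300)
print(allows(cap, "read:calendar"))   # True until the capability expires
print(allows(cap, "send:email"))      # False: action was never granted
```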
Python security analysis tool that identifies common vulnerabilities in Python codebases through static analysis of the source code, and integrates into CI/CD pipelines.
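As an illustration of the static-analysis approach (these are not the tool's actual rules), a minimal AST-based checker that flags two well-known risky patterns and exits non-zero so a CI job fails:

```python
# Minimal static-analysis illustration: walk the AST of some Python source
# and flag two risky patterns. Not the tool's actual rule set.
import ast
import sys

RISKY_SOURCE = """
import subprocess
subprocess.run(user_cmd, shell=True)
result = eval(user_input)
"""

def find_issues(source: str, filename: str = "<example>") -> list[str]:
    issues = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            # eval() on untrusted input allows arbitrary code execution
            if isinstance(node.func, ast.Name) and node.func.id == "eval":
                issues.append(f"{filename}:{node.lineno}: use of eval()")
            # shell=True routes the command through the shell (injection risk)
            for kw in node.keywords:
                if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                    issues.append(f"{filename}:{node.lineno}: call with shell=True")
    return issues

if __name__ == "__main__":
    findings = find_issues(RISKY_SOURCE)
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # non-zero exit fails the CI step
```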
We believe in the power of open source. Check out our GitHub organization to see all our projects and find opportunities to contribute.
Visit our GitHub