Security monitoring for Linux servers via AI
A tool that lets you run security audits on your Linux server just by asking. You talk to Claude, Claude talks to the tool, and you get a full security report in seconds.
MCP (Model Context Protocol) is a way for AI assistants like Claude to use external tools. Think of it like giving Claude hands: instead of just answering questions, it can actually do things on your system.
With MCP, you install a "server" (a small program) on your machine. Claude connects to it and can call specific functions. In this case: security checks on your Linux server.
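Concretely, registering an MCP server is a small config entry in the client. A hypothetical entry in Claude Desktop's `claude_desktop_config.json` might look like this (the server name and binary path are illustrative, not the project's actual values):

```json
{
  "mcpServers": {
    "linux-security": {
      "command": "/usr/local/bin/security-mcp",
      "args": ["--stdio"]
    }
  }
}
```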
You ask Claude "run a security audit", Claude calls the tool, the tool scans your server, and you get the results. All through a normal conversation.
Checks firewall, SSH config, fail2ban, Docker security, kernel hardening, SSL certificates, and more. Returns a score with actionable recommendations.
Scans open ports, wildcard bindings, exposed services. Can check both local configuration and external attack surface.
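The wildcard-binding part of that scan boils down to a simple address check. A sketch, assuming listen addresses collected from something like `ss -tln` (again illustrative, not the tool's real code):

```go
package main

import (
	"fmt"
	"strings"
)

// isWildcardBinding reports whether a listen address exposes a
// service on every interface (IPv4 0.0.0.0, IPv6 ::, or *).
func isWildcardBinding(addr string) bool {
	host := addr
	if i := strings.LastIndex(addr, ":"); i >= 0 {
		host = addr[:i]
	}
	host = strings.Trim(host, "[]")
	return host == "0.0.0.0" || host == "::" || host == "*"
}

func main() {
	// Sample listen addresses in the format ss -tln reports them.
	listeners := []string{"127.0.0.1:5432", "0.0.0.0:22", "[::]:80"}
	for _, l := range listeners {
		if isWildcardBinding(l) {
			fmt.Printf("%s is exposed on all interfaces\n", l)
		}
	}
}
```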
Auto-detects tech stacks (Node.js, Python, Go, Rust, PHP...) and checks for dependency vulnerabilities, hardcoded secrets, insecure configs.
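Secret detection in such scanners is usually pattern-based. A minimal sketch with two illustrative rules (a real scanner ships many more patterns plus entropy heuristics):

```go
package main

import (
	"fmt"
	"regexp"
)

// secretPatterns is a tiny illustrative subset of detection rules.
var secretPatterns = map[string]*regexp.Regexp{
	"AWS access key":     regexp.MustCompile(`AKIA[0-9A-Z]{16}`),
	"hardcoded password": regexp.MustCompile(`(?i)password\s*=\s*"[^"]+"`),
}

// scanForSecrets returns the names of the rules that match
// the given source text.
func scanForSecrets(source string) []string {
	var hits []string
	for name, re := range secretPatterns {
		if re.MatchString(source) {
			hits = append(hits, name)
		}
	}
	return hits
}

func main() {
	code := `db_password = "hunter2"`
	fmt.Println(scanForSecrets(code))
}
```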
Background daemon that watches for changes: new ports, firewall modifications, SSH config changes, attack spikes. Alerts when anomalies are detected.
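The core of a watcher like this reduces to snapshot diffing: rescan on an interval, compare against the previous state, alert on the difference. A sketch of the port-diff piece (assumed structure, not the actual implementation):

```go
package main

import (
	"fmt"
	"time"
)

// diffPorts compares two snapshots of open ports and returns the
// ports that appeared since the previous scan.
func diffPorts(prev, curr []int) []int {
	seen := make(map[int]bool, len(prev))
	for _, p := range prev {
		seen[p] = true
	}
	var added []int
	for _, p := range curr {
		if !seen[p] {
			added = append(added, p)
		}
	}
	return added
}

func main() {
	prev := []int{22, 80}
	// The daemon would loop forever; one iteration shown here.
	curr := []int{22, 80, 4444} // pretend a new listener appeared
	if newPorts := diffPorts(prev, curr); len(newPorts) > 0 {
		fmt.Println("alert: new open ports:", newPorts)
	}
	time.Sleep(10 * time.Millisecond) // real daemon: minutes between scans
}
```

The same diffing idea applies to firewall rules and SSH config: hash or snapshot the state, compare on each tick.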
Checks MySQL, PostgreSQL, MongoDB, Redis for weak passwords, remote access, dangerous configurations, missing authentication.
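Taking Redis as an example, a check like this mostly means parsing the config for known-dangerous settings. A hedged sketch (the real checks are more thorough and also probe the live server):

```go
package main

import (
	"fmt"
	"strings"
)

// auditRedisConfig flags three classic Redis misconfigurations:
// no password, listening on all interfaces, and protected mode off.
func auditRedisConfig(conf string) []string {
	var findings []string
	hasAuth := false
	for _, raw := range strings.Split(conf, "\n") {
		line := strings.TrimSpace(raw)
		switch {
		case strings.HasPrefix(line, "requirepass "):
			hasAuth = true
		case strings.HasPrefix(line, "bind ") && strings.Contains(line, "0.0.0.0"):
			findings = append(findings, "redis bound to all interfaces")
		case line == "protected-mode no":
			findings = append(findings, "protected mode disabled")
		}
	}
	if !hasAuth {
		findings = append(findings, "no requirepass set")
	}
	return findings
}

func main() {
	conf := "bind 0.0.0.0\nprotected-mode no\n"
	for _, f := range auditRedisConfig(conf) {
		fmt.Println(f)
	}
}
```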
Cross-references CVEs against CISA KEV and NVD databases. Identifies which vulnerabilities are actively exploited in the wild.
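The cross-referencing step is essentially a set lookup against the KEV catalog. A sketch with a hardcoded catalog (in practice this would be fetched from CISA's feed; the CVE IDs below are illustrative sample input):

```go
package main

import "fmt"

// kevCatalog stands in for CISA's Known Exploited Vulnerabilities
// feed; hardcoded here for illustration.
var kevCatalog = map[string]bool{
	"CVE-2021-44228": true, // Log4Shell
	"CVE-2014-0160":  true, // Heartbleed
}

// prioritize splits detected CVEs into actively exploited vs. the rest.
func prioritize(cves []string) (exploited, other []string) {
	for _, id := range cves {
		if kevCatalog[id] {
			exploited = append(exploited, id)
		} else {
			other = append(other, id)
		}
	}
	return
}

func main() {
	exp, rest := prioritize([]string{"CVE-2021-44228", "CVE-2023-9999"})
	fmt.Println("actively exploited:", exp)
	fmt.Println("lower priority:", rest)
}
```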
When you ask Claude to run an audit, you get clean, readable output: an overall score, per-check findings, and actionable recommendations.
Claude also provides context and recommendations based on the results. You can ask follow-up questions, request deeper scans on specific areas, or start continuous monitoring.
The first version was written in Python as a proof of concept. It worked, but wasn't production-ready.
The current version is written in Go. Single binary, no dependencies, fast startup. Most of it was built using spec-driven development: I wrote detailed specifications, then used AI assistance to implement them. Some parts were written by me directly, others by collaborators who joined the project.
The spec-driven approach worked well for this kind of tool: clear inputs, clear outputs, well-defined security checks. The specs became both documentation and implementation guide.
This is an open project and there's room to grow. The core audit features work well, but there's more to build: better anomaly detection, more database checks, container security, cloud integrations.
Looking for contributors who care about security tooling and want to build something useful. Whether it's code, testing, documentation, or ideas, all contributions are welcome.
The codebase is clean, well-documented, and easy to extend. If you're interested in security, Go, or AI tooling, check out the repo.
View on GitHub →