Not being attacked? Your network must be down.

Someone recently asked me how often an enterprise might expect to be attacked.

Attacks are no longer something that happens now and then; they are constant. An hour without an attack is an hour your network connection was down. This is sometimes known as the "Advanced Persistent Threat" (APT). Shortly after APT was declassified, someone gave a lecture about it at USENIX LISA. You can watch it here. (Note: I found some of what he revealed to be disturbing.)

I think the person meant how often an enterprise might expect a successful attack.

That's an entirely different matter.

Knowing about APT is one thing. What does it mean to you and me? To me it means that the following things are no longer "would be nice" but are required:

  1. virus scanners on all machines (even servers)
  2. virus scanners must update automatically and silently, with no user confirmation
  3. a way to verify that virus scanners haven't been disabled, and to flag any machines that haven't updated their definitions in X days (a sketch of such a check follows this list)
  4. OS patches must be automated and, for critical security fixes, performed without user confirmation. (If you admin Mac OS X, try Munki)
  5. email filters (anti-virus, anti-spam) centralized; you can't trust each individual machine to filter on its own. Do it on the server or before mail reaches your server.
  6. firewalls in front of external servers, not just in front of the enterprise
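For item 3, here's a minimal sketch of what such a check might look like. It assumes a hypothetical inventory file where each line is "hostname" followed by an ISO-8601 timestamp of that host's last definition update; adapt the loader to whatever your AV console actually exports.

```python
#!/usr/bin/env python3
"""Flag machines whose antivirus definitions are stale (item 3).

A minimal sketch, not a product: assumes a hypothetical inventory
file of "hostname <ISO-8601 last-update timestamp>" lines.
"""
from datetime import datetime, timedelta, timezone
import sys

MAX_AGE = timedelta(days=3)  # the "X days" threshold; tune for your site

def stale_hosts(inventory_path):
    now = datetime.now(timezone.utc)
    with open(inventory_path) as fh:
        for line in fh:
            host, _, stamp = line.strip().partition(" ")
            if not stamp:
                yield host, "no update ever recorded"
                continue
            last = datetime.fromisoformat(stamp)
            if last.tzinfo is None:  # treat naive timestamps as UTC
                last = last.replace(tzinfo=timezone.utc)
            if now - last > MAX_AGE:
                yield host, f"last update {last:%Y-%m-%d}"

if __name__ == "__main__":
    for host, reason in stale_hosts(sys.argv[1]):
        print(f"STALE: {host} ({reason})")
```

Run something like this from cron and feed the STALE lines into your monitoring system so a disabled or wedged scanner pages someone instead of going unnoticed.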

Lastly, the belief that "I won't be attacked; I don't have anything valuable" has to come to an end (and has been ending for a while). The fact that you have CPU cycles to exploit or bandwidth to consume is valuable to attackers.

My 6-point list seems long but I bet it isn't long enough. What would you add?

Posted by Tom Limoncelli in Security


8 Comments

A few thoughts:

1) By "virus" I presume you mean "malware" in general.

2) On servers too? All flavors? Using "on access" scanning? Seems, for many, a high cost for low benefit.

3) A firewall in front of every server is also, IMHO, overkill; if the server is properly configured, only desired traffic will be allowed anyway. Or perhaps you're referring to some sort of adaptive measure that can dynamically adjust what's allowed based on the current situation (e.g., user X is allowed, but has done something to trigger a shutoff).

Suggestions of additional items:

- Configuration management/tracking. :)
- "Default deny" for *outgoing* traffic. (Sad, I know.)
- Monitoring of outgoing traffic patterns. (A rough sketch follows this list.)
- Logging (intelligent, selective and secure).
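A rough sketch of the outgoing-traffic monitoring idea, using the psutil library to list established connections and flag any remote address outside an allowlist. The networks below are made-up examples, and a real deployment would watch flows at the border rather than per-host sockets; this just shows the shape of the check.

```python
#!/usr/bin/env python3
"""Rough sketch of "monitor outgoing traffic": flag established
connections to destinations not on an allowlist. Requires psutil
(pip install psutil); ALLOWED_NETS is a made-up example allowlist."""
import ipaddress
import psutil

ALLOWED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("::1/128"),         # IPv6 loopback
    ipaddress.ip_network("10.0.0.0/8"),      # example: internal nets
    ipaddress.ip_network("203.0.113.0/24"),  # example: your proxy/mail relay
]

def unexpected_connections():
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        remote = ipaddress.ip_address(conn.raddr.ip)
        if not any(remote in net for net in ALLOWED_NETS):
            yield conn

for conn in unexpected_connections():
    try:
        name = psutil.Process(conn.pid).name() if conn.pid else "?"
    except psutil.NoSuchProcess:
        name = "(gone)"
    print(f"UNEXPECTED: {name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")
```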

We also need more centralization of protection mechanisms; we need to be able to use *all* the info together - and adjust individual mechanisms based on aggregate info.

I believe there should be a firewall, period, and traffic going out and coming in needs to be tracked. Pretty much every enterprise/home network I have been in ends up being a hard crunchy outside with a soft creamy inside.

And because of the general complexity of systems these days, assuming everything is configured correctly is the same as thinking that attacks only happen to other people. One must assume that things aren't properly configured, and never can be, and then design from there.

When using a bridging firewall this isn't overkill. Give each VLAN its own rules and protect your internal VLANs against each other. Trust me, that has saved us a few times.

On the user aspect, managing auto patching/upgrades of apps is also becoming a must.

On a related note, GE uses FreeBSD since it's more 'unique' and it helps in making a more heterogeneous environment, and a lot of what the Raytheon guy says (in the LISA talk) is basically the same as what's happening at GE:

http://www.youtube.com/watch?v=UM4ZrsOjmNQ

No one runs "day to day" as admin/root.

SELinux. Seriously.

People like to pooh-pooh it, but without SELinux (or something like it), root is still root, and a successful attacker has all the tools s/he needs to make your host-based deterrents useless. If an attacker has root, your antivirus, your host-based intrusion detection systems (e.g., Tripwire, AIDE), your configuration management and audit, even your logs are all worthless. What good is an antivirus scanner that can only be trusted if there aren't any viruses?

And I'm not talking about the bog-standard "application container"-style targeted SELinux implementation. In order to trust the vast array of host-based deterrents we've come to rely on, we need to see serious MLS policies implemented. And that's what's so scary to me: Everyone knows antivirus is necessary in at least some capacity, and could have an AV scanner up on any given box in no time. But even simple SELinux implementations are serious undertakings, and the sort of MLS that's needed to really protect yourself against an attack the complexity of Stuxnet isn't even on most sysadmins' radar. To too many admins "SELinux" is a punchline at best, but the sort of power separation it enforces is rapidly becoming necessary.
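Even a simple first step helps, though: verify that enforcement hasn't been quietly switched off. A tiny sketch, assuming selinuxfs is mounted at the modern /sys/fs/selinux path:

```python
#!/usr/bin/env python3
"""Tiny check that SELinux is actually enforcing, so monitoring can
flag hosts where it was quietly set to permissive or disabled.
Assumes selinuxfs is mounted at the modern /sys/fs/selinux path."""
from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")

def selinux_state():
    if not ENFORCE.exists():
        return "disabled"  # selinuxfs not mounted: SELinux is off
    return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

state = selinux_state()
if state != "enforcing":
    print(f"WARNING: SELinux is {state} on this host")
```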

I think these are great points; however, I would note a few things. With the attack on RSA, updated virus scanners, firewalls, and current patches are becoming irrelevant. Why? We now know that RSA was attacked via a zero-day in combination with spear phishing. The weakest link in security will always be humans.

We must operate under the assumption that the perimeter has been compromised. Deployment of SIEM, log management, and IDS/IPS is a start. Organizations should focus on incident response. Everyone gets hacked; it's how fast you respond that makes the difference. Within the hour, or within a year?

As Jimmy said, the user is the weak link. Awareness and education are important. If users can be trained to be more suspicious then spear phishing and similar attacks are less worrisome. I know that sounds naive, but having dealt with successful spear phishing incidents, it is a lot better to keep the bad guys out even when you have lots of defense-in-depth technical controls in place.

As for antivirus/malware detection: scan engines based on signatures only catch a small percentage of malware. Using multiple products helps catch more variants.

Lastly, make sure clueful humans are monitoring the plethora of alerts from all of these detection systems (and have some way of separating the noise; otherwise people stop paying attention).
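One cheap way to separate the noise: collapse repeated alerts into counts before a human ever sees them. A toy sketch, with a made-up (source, rule, host) alert format standing in for whatever your detection systems emit:

```python
#!/usr/bin/env python3
"""Sketch of "separating the noise": collapse repeated alerts into one
summary line per (source, rule, host) so humans see new problems
instead of floods. The alert tuple format is a made-up example."""
from collections import Counter

def summarize(alerts):
    """alerts: iterable of (source, rule, host) tuples."""
    counts = Counter(alerts)
    # Rarest first: a rule that fired once is usually more interesting
    # than one that fired 10,000 times (which is probably a noisy rule).
    for (source, rule, host), n in sorted(counts.items(), key=lambda kv: kv[1]):
        print(f"{n:6d}x  {source:8s} {rule:20s} {host}")

summarize([
    ("ids", "port-scan", "dmz-web1"),
    ("ids", "port-scan", "dmz-web1"),
    ("av", "eicar-test", "desktop-42"),
])
```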
