Recently in Security Category

Tavis Ormandy, a Google security expert, is getting press for criticizing Meaningless Antivirus Excellence Awards. This is a good opportunity to mention some thoughts I've had about anti-malware software.

I believe that enterprise security defense software (anti-virus, anti-malware, host-based firewall, etc.) should have these qualities:

  • Silent Updating: The software should update silently. It does not need to pop up a window to ask if the new antivirus blacklist should be downloaded and installed. That decision is made by system administrators centrally, not by the user.
  • Hidden from view: The user should be able to determine that the software is activated, but it doesn't need an animated spinning status ball, nor popup windows to announce that updates were done. Such gimmicks slow down the machine and annoy users.
  • Negligible performance impact: Anti-malware software can have a significant impact on the performance of the machine. Examining every block read from disk, or snooping every network packet received, can use a lot of CPU, RAM, and other resources. When selecting anti-malware software, benchmark various products to determine the resource impact.
  • Centralized Control: Security defense software should be configured and controlled from a central point. The user may be able to make some adjustments but not disable the software.
  • Centralized Reporting: There should be a central dashboard that reports the status of all machines. This might include what viruses have been detected, when each machine last received its antivirus policy update, and so on. Knowing when a machine last checked in is key to knowing whether the software was disabled.
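That last point, flagging machines that have stopped checking in, is easy to automate. Here is a minimal sketch, assuming the dashboard can export hostname/last-check-in pairs and that GNU date is available (both the export format and the tooling are my assumptions, not a feature of any particular product):

```shell
#!/bin/sh
# Sketch: flag machines whose last check-in is older than a freshness window.
# Input format ("hostname YYYY-MM-DD") and GNU date are assumptions; adapt
# the parsing to whatever report your management console can export.
stale_hosts() {  # $1 = max age in days; reads "host date" pairs on stdin
  cutoff=$(date -d "$1 days ago" +%s)
  while read -r host last; do
    if [ "$(date -d "$last" +%s)" -lt "$cutoff" ]; then
      echo "STALE: $host (last seen $last)"
    fi
  done
}

# Demo against a tiny inline report: only the long-silent host is flagged.
stale_hosts 7 <<EOF
web01 2010-01-01
web02 $(date +%Y-%m-%d)
EOF
```

Run something like this daily from cron and mail yourself the output; a machine that stops appearing in the "fresh" column is a machine whose agent may have been disabled.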

Obviously "consumer" products can drop the last two requirements.

However "consumer" products also tend to violate the other items too! "Consumer" anti-malware products tend to be flashy and noisy. Why is this?

I have a theory.

Anti-malware software sold to the consumer needs to be visible enough that the user feels like they're getting their money's worth. Imagine if the product ran silently, protecting the user flawlessly, only popping up once a year to ask the user to renew for the low, low price of $39.99? The company would go out of business. Nobody would renew, because the product would appear to have done nothing for the last 12 months.

Profitable anti-malware companies make sure their customers are constantly reminded that they're being protected. Each week their software pops up to say a new update has been downloaded and asks the user to click "protect me" to activate it. Firewall products constantly ask users to acknowledge that a network attack has been defended, even if it was just an errant ping packet. Yes, the animated desktop tray icon consumes CPU cycles and drains laptop batteries, but that spinning 3D ball reassures the user and validates their purchase decision.

Would it have been less programming to simply do the update, drop the harmful packet, and not display any popups? Certainly. But it would have reduced brand recognition.

All of this works because there is an information deficit. Bruce Schneier's blog post, "A Security Marketplace for Lemons," explains that a typical consumer cannot judge the quality of a complex product such as security software, and is therefore vulnerable to these shenanigans.

However, you are a system administrator with technical knowledge and experience. It is your job to evaluate the software and determine what works best for your organization. You know that it should rapidly update itself and be as unobtrusive to the users as possible. Whether or not you renew the software will be based on the actionable data made visible by the dashboard, not on the excitement generated by spinning animations.

Posted by Tom Limoncelli in Security

Lately there has been a renewed debate over the use of encrypted communication. Terrorists could be using encryption to hide their communication. Everyone knows this. The problem is that encryption is required for ecommerce and just about everything on the web.

Should encryption be banned? regulated? controlled?

Lately there have been a number of proposals, good and bad, for how to deal with this. Luckily I have a solution that solves all the problems!

My solution: (which is obvious and solves all problems)

My solution is quite simple: Every time a website asks you to create or change a password, it would send a copy to the government. The government would protect this password database from bad people and promise only to use it when they really really really really really need to. Everyone can still use encryption, but if law enforcement needs access to our data, they can access it.


I've received a number of questions about my proposal. Here are my replies:

Q: Tom, which government?

Duh. THE government.

Q: Tom, what about websites outside the U.S.?

Ha! Silly boy. The internet doesn't exist outside of the U.S. Does it? Ok, I guess we need a plan in case countries figure out how to make webpages.

For example, if someone in Geneva had the nerve to create a website, they'd just turn the passwords over to their government, which would have an arrangement with the U.S. government to share passwords. This would work because all governments agree about what constitutes "terrorism", "due process", and "jurisprudence". Alternatively these Genevans could just turn the passwords over to the U.S. directly. They trust us. Right?

Q: Tom, what if the government misuses these passwords?

That won't happen and let me explain why: There would be a policy that forbids that kind of thing.

If they have a written policy that employees may not view the passwords or use them inappropriately, it won't happen. I believe this because in the past few years I've seen CEOs make statements like that, and I always trust CEOs. I believe in capitalism because I'm no dirty commie hippy like yourself.

Q: Tom, how do we define when the government can use the database?

Dude. What part of "really really really really really" didn't you understand? They can't just really really really really need to use one of those passwords. They have to really really really really really (5 reallys!) need to use it!

Q: Tom, what if someone steals the government's database?

Look, the government has top, top people who could protect the database. It would be as simple as protecting the codes that launch nuclear missiles.

Q: Tom, doesn't the OPM database leak prove this is unworkable?

What? Why would the government name a database after one of the best Danny DeVito movies ever? Look, that movie was fictional. If you aren't going to take this debate seriously, stay out of it. Ok?

Q: Tom, wouldn't this encourage terrorists to make their own online systems?

Dude, you aren't paying attention. They'd be required to turn their passwords over to the government just like everyone else! If they don't, we know they are terrorists!


Hi. Thank you for reading this far.

Obviously the above proposal is not something I support. It is an analogy to help you understand what the FBI and other law enforcement organizations are proposing. When you hear about "law enforcement backdoor" legislation or requirements that phones be "court unlockable", this is what they mean.

The proposed plans aren't about passwords but "encryption keys". Encryption keys are "the technology behind the technology" that enables passwords to be transmitted across the internet securely. If you have a company's encryption keys you can, essentially, wiretap the company and decode all their private communication.

Under the proposal, every device would have a password (or key) that could be used to gain access to the encryption keys. The government would promise not to use the password (key) unless they had a warrant. We'd just have to hope that nobody steals their list of passwords.

Obviously neither of these proposals is workable.

This debate is not new. 20 years ago FBI and NSA officials went to the IETF meetings (where the Internet protocols are ratified) and proposed these ideas. In 1993-1995 this debate was huge and nearly tore the IETF apart. Finally cooler heads prevailed and rejected the proposals. It turned out that the FBI's predictions were just scare tactics. None of their dire predictions came true. "Indeed, in 1992, the FBI's Advanced Telephony Unit warned that within three years Title III wiretaps would be useless: no more than 40% would be intelligible and in the worst case all might be rendered useless. The world did not "go dark." On the contrary, law enforcement has much better and more effective surveillance capabilities now than it did then." (citation)

We must reject these proposals just like we did in the early 1990s. Back then most Americans didn't even know what "the internet" was. The proposals were rejected in the 1990s because of a few dedicated computer scientists. Today the call to reject these proposals should come from everyone: sysadmins, moms and dads, old and young, regardless of political party or affiliation.

All the encryption lingo is overwhelmingly confusing and technical. Just remember that when you hear these proposals, all they're really saying is: The FBI/NSA want easy access to anything behind your password.

Warning! Upgrade now! There is a security hole in the git client.

UNTIL YOU UPGRADE: Do not "git clone" or "git pull" from untrusted sources.

AFTER YOU UPGRADE: Do not "git clone" or "git pull" from untrusted sources. THE CODE YOU JUST DOWNLOADED IS UNTRUSTED AND SHOULD NOT BE RUN, YOU FOOL!
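If you are not sure whether your git is new enough, a quick check is to compare the installed version against the minimum fixed release. The sketch below uses 2.2.1 as the floor, which is my reading of the advisory; the fix was also backported to several older point releases, so treat this as a rough screen and trust your vendor's advisory for the authoritative list:

```shell
#!/bin/sh
# Sketch: warn if the installed git predates a minimum fixed version.
# MIN_FIXED is my reading of the advisory (2.2.1; backported point
# releases such as 1.8.5.6 were also fixed, and would be flagged here
# as a false positive). sort -V (version sort) is GNU coreutils.
MIN_FIXED="2.2.1"

version_lt() {
  # true (exit 0) if $1 sorts strictly before $2 in version order
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

CUR="$(git --version 2>/dev/null | awk '{print $3}')"
if [ -n "$CUR" ] && version_lt "$CUR" "$MIN_FIXED"; then
  echo "git $CUR may predate the fix - upgrade before touching untrusted repos."
else
  echo "git $CUR is not older than $MIN_FIXED (or git was not found)."
fi
```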

Posted by Tom Limoncelli in Security

I'm really sick and tired of Slashdot doing posts like this, but it isn't Slashdot's fault. It's our industry's fault.

Here's the question:

"I am a senior engineer and software architect at a fortune 500 company and manage a brand (website + mobile apps) that is a household name for anyone with kids. This year we migrated to a new technology platform including server hosting and application framework. I was brought in towards the end of the migration and overall it's been a smooth transition from the users' perspective. However it's a security nightmare for sysadmins (which is all outsourced) and a ripe target for any hacker with minimal skills. We do weekly and oftentimes daily releases that contain and build upon the same security vulnerabilities. Frequently I do not have control over the code that is deployed; it's simply given to my team by the marketing department. I inform my direct manager and colleagues about security issues before they are deployed and the response is always, 'we need to meet deadlines, we can fix security issues at a later point.' I'm at a loss at what I should do. Should I go over my manager's head and inform her boss? Approach legal and tell them about our many violations of COPPA? Should I refuse to deploy code until these issues are fixed? Should I look for a new job? What would you do in my situation?"

I guess I'm getting a bit passive-aggressive in my old age because here's my reaction:

Well it sounds like you've done the responsible thing and tried to raise the issue. That's important because that is what I'd recommend. You need to make at least 3 attempts at warning your employer before you give up. Each time make sure you explain it in terms of the business impact, not in geeky technical jargon. In other words, "The system could be penetrated and credit card numbers could be stolen" is business impact. "There's a buffer overflow in the PHP framework being used" is geeky technical jargon. Explain it without sounding alarmist, but be firm. File a bug for each issue so that there is visibility and a record of when the issue was first raised.

However it seems like you've already done that. Your question isn't "what should I do?" but "what should I do now that warnings have failed?"

I guess I'm getting a bit passive-aggressive but the answer is, "Let them fail."

You've done your job. You have a technical position and your role is to raise technical issues. You aren't in management. Management's job is to set priorities. They've set the priorities: security is low priority.

Management won't give security any higher priority until a few "household names for anyone with kids" have catastrophic outages and security issues that are "New York Times Front Page" stories. For some executives the only motivation is fear of public embarrassment.

Once that happens management will finally take action.

What action? If they are smart they'll change their technical strategy and fix the problems: put into place controls and procedures to fix problems before they happen. In that case, good for them. If they are dumb they'll hire a slick "snake-oil salesperson" that will charge a lot of money but not actually improve things. In that case, they'll go out of business (or the executives will be fired) when there are more problems or a company better at technology is more successful.

Isn't it about time that dumb companies go out of business? Isn't it better for dumb executives to get fired?

Yes, it is.

So why are you helping them stay in business? Why are you sheltering a dumb person from the effects of their ignorance?

Does any company think they are so unaffected by the "computerization of everything" that they can hire technologically illiterate executives?

When AT&T Wireless went out of business and sold their name (and their customer list) to SBC, didn't it improve the world?

Of course, the most ethical thing to do would be to educate them and help them change their ways. However that was not your question. Your question was "what now that I have failed?"

Oh, that reminds me. One of the most important parts of working in IT is being able to communicate effectively to executives the business impact of these things. My definition of "effective" is that they decide to make the changes required to fix the problems you are concerned about.

Failure of communication is a two-way street. The information sender has to succeed and the information receiver has to succeed. If either fails, both fail.

So that's the real bad news here. You are just as much a failure in communicating to them as they are a failure in receiving the information.

So, if you have failed, doesn't that mean we need to get you out of there for the same reason we need to get failed executives out of companies (or failing companies out of the market)?

Yeah. That.

So if you leave maybe your replacement will be better at "one of the most important parts of working in IT": communication. Or maybe you can step back and completely change your tactics.

If you are going to leave, read my So your management fails at IT, huh? blog post. It will help you feel better about leaving such a messed-up company.

If you are going to change your ways, let me recommend The Phoenix Project. It will open your eyes to an entirely different way of communicating and interacting with executive management about IT.

I hope you pick the latter. It is probably the better thing to do for your sanity, your stress level, and your career.

Posted by Tom Limoncelli in Rants, Security

...that I got caught in a "spear phishing attack". (A malware attack where they send an email specifically crafted to one or two people.) The email was a receipt from a hotel that I stay at occasionally but it listed the address as being in South Carolina instead of San Francisco. I clicked on the PDF to read it and then realized I was being phished because I haven't been to South Carolina in ages and the invoice mentioned a coworker that I've never traveled with. I started shutting down my computer and made plans to wipe the disks; glad I have good backups but not wanting to go through the pain of being without my laptop until I could do this.

That's when I woke up.

Yes, it was a dream.

I have a friend who only clicks on web links if they are on a ChromeOS machine. They use many machines, but if they get a link that is outside their domain they move it to a ChromeOS box to click on it. That's an interesting discipline to develop. I wonder how soon more people will do that.

It used to be that there was a small group of people who were extremely paranoid about giving out their social security number or credit card numbers. At the time people called them "paranoid". Now there is this thing called "identity theft" and those people are considered to be "forward thinkers".

I wonder what paranoid behavior today will be normal in the future.

I'll be speaking at LOPSA-New Jersey on Thursday. This will be a repeat of the keynote I did in North Carolina last November. While it says "security" in the title, it will make sense whether you work in security or not. All are invited! (no charge to attend)

Topic: You Suck At Time Management (but it isn't your fault!) Date: Thursday, January 5 2012 Time: 7:00pm (social), 7:30pm (presentation)

Pizza and Soda being brought to you by: INetU Managed Hosting

If you are planning on coming please RSVP so we have a good count for the pizza.

Location: Lawrence Headquarters Branch of the Mercer County Library
2751 US Highway 1
Lawrenceville, 08648-4132

So much to do! So little time! Security people are pulled in so many directions it is impressive anything gets done at all. The bad news is that if you work in security then good time management is basically impossible. The good news is that it isn't your fault. Tom will explore many of the causes and will offer solutions based on his book, "Time Management for System Administrators" (now translated into 5 languages).


Mac Malware

Some people laughed when I tweeted, but now look at this just 8 days later!

This might be a good time to relink to my post called Yes, malware scanners on your servers too!

Posted by Tom Limoncelli in Security

I recently posted my "6-point list of security minimums" for the enterprise. That is, 6 things that may have been "would be nice" in the past but are now absolutely required as far as I'm concerned. Most sites do not do all 6, and I think it is time that such sites got with the program 'cause you are making the rest of us look bad.

I got a number of comments asking if I was serious about malware scanners on all computers.... did I really mean servers too?


If the machine is a file server then the files being stored should be scanned. It prevents this server from being the unintentional transmitter of infected files. [As a bonus it is an interesting way to detect which users are not protecting themselves. Notice that a large fraction of the infected files are in a certain person's network home directory? Yeah, better check to see if they've disabled their malware detector.]

Web servers, email servers, and "shell servers" all have the same issue. One of my personal servers is a FreeBSD box on which I permit friends to have shell accounts. I recently ran a popular commercial virus scanner on all the files. I found 3 infected files: one in my home directory (I had backed up a laptop to the server ages ago) and two in the home directories of users who were just as shocked as I was. Fixing those infected files prevented those users from passing the malware along.
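You can reproduce this kind of sweep with free tools, too. For example, the open-source ClamAV scanner (my illustrative choice here, not the commercial product I ran) prints one "path: Signature FOUND" line per infected file when invoked as clamscan -r -i /home, and a short filter turns that output into the per-user tally of infections mentioned above:

```shell
#!/bin/sh
# Sketch: tally infected files per home directory from scanner output.
# Assumes ClamAV-style report lines: "/home/alice/x.exe: Some.Sig FOUND"
tally_by_user() {
  grep ' FOUND$' \
    | sed 's|^/home/\([^/]*\)/.*|\1|' \
    | sort | uniq -c | sort -rn      # counts per user, largest first
}

# Demo with canned lines; in real life: clamscan -r -i /home | tally_by_user
tally_by_user <<'EOF'
/home/alice/dl/bad.exe: Eicar-Test-Signature FOUND
/home/alice/dl/worse.scr: Win.Trojan.Agent FOUND
/home/bob/mail/attach.pdf: Pdf.Exploit.Gen FOUND
EOF
```

A user who dominates this tally is a good candidate for the "have they disabled their malware detector?" conversation.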

Server performance won't suffer very much. Modern malware scanners are much better behaved. Operating systems have more efficient hooks to let them do their work. Plus, the better vendors are trying to be the lightest burden on system resources. Competition will help that.

I also run a malware scanner on my personal Mac even though Macs are known to not have a lot of virus problems... at this time. Most of the infected files it has detected are Windows viruses, which wouldn't harm me, but scanning is still worth it because it means I haven't propagated the files to my Windows-using friends.

The problem, however, is that in this era of APT it has become very common to find malware written specifically to seek out a particular person or company. The anti-malware vendors are less likely to discover such junk, and if they do there isn't as much financial incentive for them to publish signatures for such things. However, doing this kind of scanning is still important just like people with strong teeth should still brush their teeth. It is good hygiene.

There are still threats from XSS, weak passwords, social engineering and so on and so on. However not doing these 6 basic things is irresponsible bordering on professional negligence.

Posted by Tom Limoncelli in Security

Someone recently asked me how often an enterprise might expect to be attacked.

Attacks are no longer something that happens now and then, they are constant. An hour without an attack is an hour your network connection was down. This is sometimes known as the "Advanced Persistent Threat". Shortly after APT was declassified someone gave a lecture about it at Usenix LISA. You can watch it here. (Note: I found some of what he revealed to be disturbing).

I think the person meant how often an enterprise might expect a successful attack.

That's an entirely different matter.

Knowing about APT is one thing. What does it mean to you and me? To me it means that the following things are no longer "would be nice" but are required:

  1. virus scanners on all machines (even servers)
  2. virus scanners must automatically, silently, update. No user confirmation.
  3. a way to verify that virus scanners aren't disabled and/or flag any machines that haven't updated in X days.
  4. OS patches must be automated and, for critical security fixes, performed without user confirmation. (If you admin Mac OS X, try Munki)
  5. email filters (anti-virus, anti-spam) centralized; you can't trust each individual machine to filter on their own. Do it on the server or before it gets to your server.
  6. firewalls in front of external servers, not just in front of the enterprise
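As one concrete illustration of item 4, Debian-family systems can install security patches silently via the unattended-upgrades package. A minimal sketch of the enabling configuration (file path and option names as shipped on typical Debian/Ubuntu releases; verify against your distribution's documentation):

```
// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh package lists and run unattended-upgrades once a day.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which package origins get pulled (e.g. security updates only) is controlled in a companion file (50unattended-upgrades); the point is that, as with the virus scanners, no user ever sees a confirmation dialog.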

Lastly, the belief that "I won't be attacked, I don't have anything valuable" has to come to an end (and has been ending for a while). The fact that you have CPU to exploit or bandwidth to consume is valuable to attackers.

My 6-point list seems long but I bet it isn't long enough. What would you add?

Posted by Tom Limoncelli in Security

My apologies for flogging my employer's product, but enough people have asked me "how can I protect my gmail account" that I feel this is worth it.

Google has enabled 2-factor authentication for GMail. I highly recommend you enable this. Attacks on gmail accounts (and all accounts) are increasing in frequency.

Posted by Tom Limoncelli in Security
