Awesome Conferences

LISA papers about robots that correct broken configuration files.

Ok, not actual robots but hear me out. I'm sitting in a session at LISA 2010 right now (Wednesday, 2pm Papers session) where all 3 papers are about systems that analyse some kind of configuration file and, given some tests, can find a problem and fix it. The three papers cover (in order): router configuration, Role Based Access Control database, and firewall rule set.

The third paper (the one about firewall rules, which happens to have won "Best Student Paper") describes a number of ways to manipulate rules in an effort to fix a broken configuration: swap two rules, delete a rule, change an IP address in a rule, etc. The authors invented a novel way to search all the possible combinations of these changes so as to quickly find their way to a modified version of the ruleset that works. Pretty cool.
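To make the idea concrete, here is a minimal sketch of those mutation operators. This is not the paper's code: the rule representation (a dict with `action`, `src`, and `dst` fields) and the `neighbors` generator are my own illustrative assumptions; a real search would also score each candidate against the tests and prune intelligently.

```python
import copy

# Hypothetical firewall rule: an action plus simple src/dst match fields.
# This representation is illustrative, not taken from the paper.
def make_rule(action, src, dst):
    return {"action": action, "src": src, "dst": dst}

def swap_rules(ruleset, i, j):
    """Swap two rules (order matters in a first-match firewall)."""
    new = copy.deepcopy(ruleset)
    new[i], new[j] = new[j], new[i]
    return new

def delete_rule(ruleset, i):
    """Remove rule i entirely."""
    new = copy.deepcopy(ruleset)
    del new[i]
    return new

def change_field(ruleset, i, field, value):
    """Replace one address field in rule i (e.g. change an IP)."""
    new = copy.deepcopy(ruleset)
    new[i][field] = value
    return new

def neighbors(ruleset, candidate_values):
    """Yield every ruleset reachable by a single edit. A search over
    these one-step mutations is the space the paper explores."""
    for i in range(len(ruleset)):
        yield delete_rule(ruleset, i)
        for j in range(i + 1, len(ruleset)):
            yield swap_rules(ruleset, i, j)
        for field in ("src", "dst"):
            for v in candidate_values:
                if v != ruleset[i][field]:
                    yield change_field(ruleset, i, field, v)
```

Even for a small ruleset the neighborhood grows quickly, which is why an efficient search strategy is the interesting part.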

But wait, how do you know if a ruleset "works"? Well, you generate a bunch of tests and run them through the system. If they all pass, the ruleset "works". A test might be "a packet with this src address and port 80 should NOT get through to port ANY". You can have a human write them. But they went further: they invented an algorithm to generate a nearly-optimal number of test patterns (for example, if you have a /24, you can deduce cases where you don't need to test all 256 addresses in the /24). A human still needs to say whether each test should permit or reject the packet, but at least the generation is automated.
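The /24 observation can be sketched very roughly. This is a simplified stand-in for the paper's algorithm, under my own assumption that rules match on CIDR prefixes: every address inside the same block is treated identically by a prefix rule, so one representative per block stands in for all of them.

```python
import ipaddress

def sample_addresses(cidrs):
    """One representative address per CIDR block. Two samples here can
    stand in for the 256 addresses of a /24 plus the ~16M of a /8,
    because a prefix-match rule cannot distinguish addresses within a
    block. (A toy version of the paper's test-generation idea.)"""
    reps = []
    for c in cidrs:
        net = ipaddress.ip_network(c)
        reps.append(net.network_address)  # any address in the block works
    return reps
```

The real algorithm has to handle overlapping blocks, ports, and protocols, but the payoff is the same: a near-minimal test set instead of an exhaustive one.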

And how do they run the tests? They invented an algorithm to efficiently apply all the tests to a ruleset at the same time: the ruleset is turned into a decision tree, and the tests can then be processed very quickly.
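A toy version of that compile-then-lookup idea, using the same illustrative rule shape as above (my assumption, not the paper's format): precompute a small lookup table so each test costs two dictionary lookups instead of a scan over every rule. The real decision tree handles ports, protocols, and address ranges.

```python
def evaluate(ruleset, packet):
    """First-match semantics: the first rule whose fields match decides;
    default deny if nothing matches. This linear scan is what the
    precomputed tree below avoids repeating for every test."""
    for rule in ruleset:
        if all(rule[f] in ("any", packet[f]) for f in ("src", "dst")):
            return rule["action"]
    return "deny"

def compile_tree(ruleset):
    """Build a two-level table keyed on src then dst, with an "<other>"
    branch for values no rule mentions explicitly."""
    srcs = {r["src"] for r in ruleset} | {"<other>"}
    dsts = {r["dst"] for r in ruleset} | {"<other>"}
    return {s: {d: evaluate(ruleset, {"src": s, "dst": d}) for d in dsts}
            for s in srcs}

def lookup(tree, packet):
    """Decide a packet with two lookups instead of a rule scan."""
    branch = tree.get(packet["src"], tree["<other>"])
    return branch.get(packet["dst"], branch["<other>"])
```

The compilation pays off because the same tree answers every test in the suite, which is exactly when you want to re-check a mutated ruleset against all tests at once.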

You'll have to read the paper for all the details.


But the point of this blog post is not to explain these papers in detail. What I really want to say is that I find it very interesting that people are applying AI techniques to finding and fixing configuration problems. 3 papers in one conference? It could be a trend.

And lastly: if we can write a lot of "unit tests" and automatically fix a firewall ruleset so that it comes into compliance with them, then my question is: at what point do we just write a heck of a lot of tests and let some AI robot write the firewall rules from scratch?

I, for one, welcome our new AI firewall-rule-writing overlords!

Posted by Tom Limoncelli in Conferences
