Tip of the Day: Snort rule management

Published: 2006-08-11. Last Updated: 2006-08-11 19:35:03 UTC
by Adrien de Beaupre (Version: 1)

Tip1
We maintained a central CVS repository where each analyst had an account.  The repository contained the Snort configuration for each sensor (in separate subdirectories) and the Snort rules from Sourcefire, along with tuned rules, custom local rules, and some third-party rules.  I wrote some Python scripts to filter out "good" bleeding-snort rules, for example.
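
The filter scripts themselves aren't included in the diary. A minimal sketch of the same idea, done here in shell rather than Python, might keep only the rules whose classtype is on a local allow list; the classtypes, file names, and paths below are purely illustrative:

#!/bin/sh
# filter-bleeding.sh (hypothetical) -- keep only bleeding-snort rules whose
# classtype is on a local allow list; everything else is dropped.
ALLOW='trojan-activity|attempted-admin|policy-violation'
grep -E "classtype:(${ALLOW});" /etc/snort/rules/bleeding-all.rules \
  > /etc/snort/rules/bleeding-filtered.rules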

Every N hours, each Snort sensor would update its rules and configs from CVS and reload itself.
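
A sensor-side cron entry for that update cycle could be as simple as the following; the interval, the working-copy path, and the snortd init script name are assumptions, not taken from the diary:

# /etc/crontab entry (illustrative): every 4 hours, refresh the working copy
# from CVS and restart Snort so it picks up the new rules and config.
0 */4 * * * root cd /etc/snort && cvs -q update -dP && /etc/init.d/snortd restart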

On a daily basis, a cron job would pull down the latest rules from Sourcefire, diff them against the previous set, and email that diff to all the analysts.  It would then automatically add the new changes to a branch in CVS that would be merged after 24 hours unless an analyst who had seen the diff decided otherwise.
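
The diary doesn't include that job, but a rough shell sketch of it might look like this; the download URL, addresses, directories, and branch setup are all placeholders:

#!/bin/sh
# Nightly rule import (sketch).  Assumes /var/tmp/rulework/rules is a CVS
# working copy already checked out on the staging branch.
set -e
cd /var/tmp/rulework
cp -r rules rules.prev                  # keep yesterday's set for the diff
# Replace the URL with the real Sourcefire download (plus oinkcode); assumes
# the archive unpacks rule files directly into rules/.
wget -q -O - 'http://example.org/snortrules-current.tar.gz' | tar xzf - -C rules
diff -ruN rules.prev rules | mail -s "Snort rule changes $(date +%F)" analysts@example.org
( cd rules && cvs -q commit -m "automated nightly rule import" )
rm -rf rules.prev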

Any time a rules change was committed, the CVS server would run the config files and rules through snort -T to validate the syntax and would reject the commit if validation failed, so the CVS repository always contained at least syntactically valid configuration files.
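
The hook itself isn't reproduced in the diary. Setting aside the CVS plumbing that makes the to-be-committed files available to the server, the core of such a check is just Snort's test mode, and a validator invoked from CVSROOT/commitinfo could be as small as this sketch (paths are placeholders):

#!/bin/sh
# validate-snort.sh (hypothetical) -- run Snort in test mode against the
# candidate configuration; a parse error makes snort exit non-zero, which
# a CVSROOT/commitinfo entry can use to reject the commit.
CONF=/path/to/staged/checkout/snort.conf    # placeholder: wherever the hook stages the files
if ! /usr/sbin/snort -T -c "$CONF" >/dev/null 2>&1; then
  echo "snort -T failed: commit rejected" >&2
  exit 1
fi
exit 0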

Whenever an analyst committed a change to anything in CVS, a diff was taken and emailed to all the other analysts letting them know what had happened.
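
In CVS terms this is typically a CVSROOT/loginfo entry: CVS pipes the commit's log message to the command's standard input, and producing a full diff usually takes a small helper script (the log_accum script that historically shipped in CVS's contrib directory does roughly this). A minimal version that just mails the commit message might be:

# CVSROOT/loginfo (illustrative) -- mail every commit's log message to the team
DEFAULT mail -s "Snort CVS commit" analysts@example.org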

If a sensor ever blew up, replacing it was trivial, as was reverting the rules or config to an earlier version, thanks to CVS.  In addition, every change was tracked (who did what, and when), so troubleshooting problems became easier as well.

Tip2

For updating and managing Snort rules use Oinkmaster (http://oinkmaster.sourceforge.net/).

However, when it comes to implementing rules, don't just assume they are going to be perfect and without flaws. The process I use is (a rough command-line sketch of the first three steps follows the list):

1. Check whether there are any new rules and notify me, but don't install them.
2. After reviewing the rules, install them.
3. Run a taint check against the rules. If there is a problem, revert to the old set (you did make a backup, right?) and notify the rule author.
4. Activate the new rules and monitor for false positives.
5. If false positives are found, report them to the rule author and, if possible, help test the corrected rules.
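
For illustration, steps 1-3 could be scripted with Oinkmaster and snort -T roughly as below. If I recall Oinkmaster's options correctly, -c is its careful (check-only) mode and -b names a backup directory, but verify both against your version; paths and the mail address are placeholders.

#!/bin/sh
# Steps 1-3 from the list above, sketched with Oinkmaster.

# 1. Check for new or changed rules without installing them; mail the report.
oinkmaster.pl -C /etc/oinkmaster.conf -c -o /etc/snort/rules 2>&1 \
  | mail -s "pending Snort rule changes" analyst@example.org

# 2. After review, install the rules, keeping a backup of the current set.
oinkmaster.pl -C /etc/oinkmaster.conf -b /var/backups/snort-rules -o /etc/snort/rules

# 3. Test the new set; if Snort rejects it, restore the backup and notify the author.
if ! snort -T -c /etc/snort/snort.conf >/dev/null 2>&1; then
  echo "new rules failed snort -T -- restore from /var/backups/snort-rules" >&2
  exit 1
fi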

- KenM

Tip3

I work for a major healthcare organization, and we have multiple Snort boxes deployed at multiple aggregation points within the network.  The architecture follows a standard Snort deployment: multiple sensors send alert data to a MySQL database, and an IDS correlation web application fronts the database for viewing event data.  Although the IDS correlation web application can manage Snort rules, that functionality did not meet our technical needs, so we designated two Snort sensors to serve as the rule management systems using Oinkmaster.  One system sits on our link out to the Internet, while the other is at another aggregation point.  The two systems are fully redundant with respect to the Oinkmaster configuration for pulling down rules; however, the sensor on the outbound link has a different rules directory, because that is the only link where we see traffic heading out to the Internet, and to avoid duplicate alerts in the IDS console, rules matching HOME_NET to EXTERNAL_NET traffic are only useful at that location.  The secondary sensor does the opposite and triggers on rules for traffic not heading out to the public Internet (HOME_NET to HOME_NET, and so on).

The Snort box on the outbound link is configured to automatically poll updates from bleeding-snort and snort.org using Oinkmaster.  As these rules are downloaded and installed, I receive an email listing the rules that were added and any modifications to existing rules.  Once this is complete, a script syncs the /rules directory to all other Snort sensors and restarts snortd; it uses PKI (key-based logins) to automate authentication to each remote sensor.  The script is also intelligent enough to sync other important Snort files, such as snort.conf and the other configuration files.  As for local rules, we administer them only on the outbound-link sensor, and the secondary master sensor runs the same script to sync local rules to the remaining sensors.
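
The sync script isn't included here; a stripped-down sketch of that push, assuming key-based SSH logins are already in place and with host names, paths, and the init script name as placeholders, could be:

#!/bin/sh
# Push rules and config from the master sensor to the other sensors, then restart Snort.
SENSORS="sensor2 sensor3 sensor4"
for host in $SENSORS; do
  rsync -az -e ssh --delete /etc/snort/rules/ "root@$host:/etc/snort/rules/"
  rsync -az -e ssh /etc/snort/snort.conf "root@$host:/etc/snort/snort.conf"
  ssh "root@$host" '/etc/init.d/snortd restart'
done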

As for snort.conf, we define all the variables, such as HOME_NET, EXTERNAL_NET, DNS servers, and so on, as this is crucial to mitigating false positives.  Also, if we create local rules that need additional variables to make it easier to group ports or IP addresses, we add new variables to snort.conf so each rule can reference them.
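
For illustration (the values are invented, not the poster's), the relevant snort.conf fragment and a local rule referencing the variables might look like:

# snort.conf fragment -- define networks and ports once, up front
var HOME_NET [10.0.0.0/8,192.168.0.0/16]
var EXTERNAL_NET !$HOME_NET
var DNS_SERVERS [10.1.1.53/32,10.1.2.53/32]
var WEB_PORTS 80

# local.rules -- rules reference the variables instead of literal addresses/ports
alert tcp $EXTERNAL_NET any -> $HOME_NET $WEB_PORTS \
  (msg:"LOCAL example - inbound web traffic"; flow:to_server,established; sid:1000001; rev:1;)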

- BenP.

Tip4

Having already extended my neck for the chopping block and been smacked accordingly ;-) ... I use the following to do quick changes and checks on my Snort installs on CentOS 4.3.
Ultimately, it's purely a convenience to type single-word commands, so I keep the following little scripts in my path, with chmod a+x applied.

For Bleeding-Edge rules, I prefer the single bleeding-all.rules file, so I use this to update it rather than Oinkmaster:

#bleedingpig
cd /etc/snort/rules/
rm -f bleeding-all.rules
wget http://www.bleedingsnort.com/bleeding-all.rules
-----------------------
To fire Oinkmaster manually rather than cron:
#oink
oinkmaster.pl -C /etc/oinkmaster.conf -C /etc/autodisable.conf -o /etc/snort/rules
-----------------------
To kill the daemon:
#killpig
killall snort
-----------------------
To confirm Snort process state:
#pigps
ps aux | grep snort
-----------------------
To confirm Snort running cleanly after config or rule changes:
#pigchk
/usr/local/bin/snort -c /etc/snort/snort.conf -i eth1 -v
-----------------------
To start the daemon:
#pigd
/usr/local/bin/snort -c /etc/snort/snort.conf -i eth1 -g snort -D

- RussM

Cheers,
Adrien
Keywords: ToD