Log files - are you reviewing yours?

Published: 2011-06-20. Last Updated: 2011-06-20 00:49:59 UTC
by Chris Mohan (Version: 1)

The media is full of security horror stories of company after company being breached by attackers, but very little information is actually forthcoming on the real details.

As an incident responder I attempt to understand what occurred and learn from these attacks, so I'm always looking for factual details of what actually happened, rather than conjecture, hearsay or pure guesswork.

Back in April, Barracuda Networks, a security solutions provider, was compromised and lost names and email addresses. They disclosed the breach, then took the admirable step of publishing how the breach took place, with screenshots of logs, and their lessons learnt from the attack [1].

I hope that those unfortunate enough to suffer future breaches are equally generous in sharing their logs and lessons learnt, for the rest of us to understand and adapt for our own systems. The attackers already share their tips and tricks, as anyone looking at the chat logs uploaded to public sites like pastebin can attest. We need the very smart folks looking after security at these attacked companies to step up and take the time to write up what really happened, making it accessible for the rest of us to learn from.

Seeing the events of an attack recorded in log files is a terrible, yet beautiful thing. To me it means we, as defenders, did one thing right, since detection is always a must. If the attack couldn't be or wasn't blocked, then being able to replay how a system was compromised is the only way forward to stop it from occurring again.

Log review should be an intrinsic routine performed by everyone, daily if possible. Whether it's a visual, line-by-line review* or done with grep, a simple batch script or a state-of-the-art security information and event management system, the aim is to parse the logs into a format that even a novice IT person can read, digest and understand. This should be part of the working-day process for all levels of support and security staff; drinking that morning coffee while flicking through the highlights of your systems should be part of the job description.
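
As a deliberately minimal example, the sketch below (in Python) greps one log file for a handful of suspect patterns and prints a quick summary suitable for that morning-coffee review. The log path and the pattern list are assumptions for illustration only; substitute whatever counts as "interesting" in your own environment:

    #!/usr/bin/env python3
    # Minimal daily log triage sketch: flag lines matching suspect patterns.
    # LOG_FILE and PATTERNS are assumptions; adapt both to your environment.
    import re
    from collections import Counter

    LOG_FILE = "/var/log/auth.log"
    PATTERNS = {
        "auth_failure": re.compile(r"authentication failure|failed password", re.I),
        "sudo_use":     re.compile(r"\bsudo\b.*COMMAND=", re.I),
        "new_user":     re.compile(r"useradd|new user", re.I),
    }

    counts = Counter()
    hits = []
    with open(LOG_FILE, errors="replace") as fh:
        for line in fh:
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    counts[name] += 1
                    hits.append((name, line.rstrip()))

    # Summary first, then the matching lines for context.
    for name, total in counts.most_common():
        print(f"{name}: {total}")
    print("-" * 40)
    for name, line in hits:
        print(f"[{name}] {line}")

Pipe the output to email or a ticket queue and it becomes part of the morning routine rather than yet another console to remember to check.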

Log files need to be easy to understand and to get information from. As someone who works with huge Windows IIS log files, automation is your friend here. Jason Fossen's Search_Text_Log.vbs script [2] is a great starting point for scripters, or, for a more dynamic analysis tool, Microsoft's Log Parser [3] is well worth taking the time to get to grips with. As an example of some of the information you can extract from IIS logs, have a read here [4] to see how easy it is to pull pertinent data, and this blog piece [5] has an excellent way to get visual trending of IIS data.
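
To give a flavour of the kind of summary those Log Parser queries produce, here is a rough sketch in plain Python that tallies HTTP status codes and the most requested URIs from a W3C-format IIS log. The file name is an assumption; the field names are read from the log's own #Fields: directive rather than hard-coded positions:

    #!/usr/bin/env python3
    # Sketch: summarise an IIS W3C extended log, tallying status codes and top URIs.
    # The log file name is an assumption; field names come from the #Fields: directive.
    from collections import Counter

    LOG_FILE = "u_ex110620.log"

    fields = []
    status_counts = Counter()
    uri_counts = Counter()

    with open(LOG_FILE, errors="replace") as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith("#Fields:"):
                fields = line.split()[1:]   # e.g. date, time, cs-uri-stem, sc-status, ...
                continue
            if not line or line.startswith("#") or not fields:
                continue
            row = dict(zip(fields, line.split()))
            status_counts[row.get("sc-status", "?")] += 1
            uri_counts[row.get("cs-uri-stem", "?")] += 1

    print("Requests by HTTP status:")
    for status, n in status_counts.most_common():
        print(f"  {status}: {n}")
    print("Top 10 requested URIs:")
    for uri, n in uri_counts.most_common(10):
        print(f"  {n:6d}  {uri}")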

If log analysis isn't something you do much of, then a marvellous way to get some practice in is this Honeynet.org challenge [6].

It's important to note that logging has to be enabled on your systems, set up correctly and reviewed to produce useful information. Multiple logging sources have to use the same time source to make correlation easy, so take the time to make sure your environment is configured and logging correctly before you need to review the logs for an incident.
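
As a small illustration of why the common time source matters, the sketch below converts two differently formatted timestamps (one assumed to be UTC already, one assumed to be local time with an offset) to UTC and merges them into a single sorted timeline. The record formats are invented for the example, not taken from any particular product:

    #!/usr/bin/env python3
    # Tiny timeline-merge sketch: normalise timestamps to UTC before correlating.
    # Both record formats below are invented for the example.
    from datetime import datetime, timezone

    # (timestamp, source, message) records from two assumed sources
    firewall = [("2011-06-20 00:41:12", "fw", "DENY tcp 203.0.113.9 -> 192.0.2.10:443")]
    webserver = [("20/Jun/2011:02:41:15 +0200", "web", "GET /admin.php 404")]

    def fw_to_utc(ts):
        # assumption: the firewall already logs in UTC
        return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

    def web_to_utc(ts):
        # assumption: the web server logs local time with a UTC offset suffix
        return datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z").astimezone(timezone.utc)

    timeline = [(fw_to_utc(ts), src, msg) for ts, src, msg in firewall]
    timeline += [(web_to_utc(ts), src, msg) for ts, src, msg in webserver]

    for when, src, msg in sorted(timeline):
        print(f"{when.isoformat()}  [{src}]  {msg}")

If every source already logs in UTC from the same internal time server, this whole normalisation step disappears, which is exactly the point.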

As always, if you have any suggestions, insights or tips please feel free to comment.

[1] http://blog.barracuda.com/pmblog/index.php/2011/04/26/anatomy-of-a-sql-injection-attack/

[2] http://www.isascripts.org/scripts.zip

[3] http://www.microsoft.com/downloads/en/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en

[4] http://blogs.iis.net/carlosag/archive/2010/03/25/analyze-your-iis-log-files-favorite-log-parser-queries.aspx

[5] http://blogs.msdn.com/b/mmcintyr/archive/2009/07/20/analyzing-iis-log-files-using-log-parser-part-1.aspx

[6] http://www.honeynet.org/challenges/2010_5_log_mysteries


* For your own time management, eyesight and, frankly, sanity, try to avoid this.

Chris Mohan --- Internet Storm Center Handler on Duty

Keywords: logs

Comments

The key mistake made at Barracuda was that someone switched their web application firewall into non-blocking mode during maintenance and never switched it back to blocking mode. Based on their experience, we added an alert to ours that emails several people when it switches mode.

Maybe we could begin compiling some tips as to WHAT to look for in logs, not just "do you look at your logs?" Here are a few to start with:

1. Know what traffic is permitted INTO your network. Set a firewall filter to display all accepted traffic from non-approved sources. This can alert you to anything from someone changing a rule without letting you know, to a rule having unintended side effects.

2. Have a very restrictive outbound rule set and then monitor all traffic trying to exit the network and getting dropped. This will usually be some misconfigured Windows system, but occasionally it can be a malware-infected system.

3. Make sure all of your systems sync to an internal time server, then restrict access to Internet time servers and monitor for attempts to sync time via the Internet from unknown sources. We've detected unauthorized consumer wireless access points this way: those home routers usually try to time-sync to the Internet, so that can be an indirect way of detecting them. More likely it's another misconfigured Windows system, though. (A rough sketch of filtering a firewall log for points 2 and 3 follows below.)
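
To make points 2 and 3 concrete, here is a minimal sketch in Python. The firewall log name and the line format matched by the regular expression are assumptions for illustration only; adjust them to whatever your firewall actually writes:

    #!/usr/bin/env python3
    # Sketch for points 2 and 3: pull dropped outbound traffic and Internet NTP
    # attempts out of a firewall log. The file name and line format are assumptions.
    import re
    from collections import Counter

    LOG_FILE = "firewall.log"
    # assumed line shape: "... DENY OUT proto=udp src=10.1.2.3 dst=198.51.100.7 dport=123"
    LINE = re.compile(r"DENY OUT proto=(?P<proto>\w+) src=(?P<src>\S+) "
                      r"dst=(?P<dst>\S+) dport=(?P<dport>\d+)")

    denied_by_host = Counter()
    ntp_attempts = []

    with open(LOG_FILE, errors="replace") as fh:
        for line in fh:
            m = LINE.search(line)
            if not m:
                continue
            denied_by_host[m.group("src")] += 1
            if m.group("proto") == "udp" and m.group("dport") == "123":
                ntp_attempts.append((m.group("src"), m.group("dst")))

    print("Dropped outbound traffic by internal host:")
    for host, n in denied_by_host.most_common():
        print(f"  {host}: {n}")
    print("Hosts trying to reach Internet time servers (possible rogue devices):")
    for src, dst in ntp_attempts:
        print(f"  {src} -> {dst}")
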
I'd just like to mention that log file monitoring doesn't have to cost an arm and a leg. One excellent freebie is OSSEC (http://www.ossec.net/), which monitors a large variety of logs and can be configured to report on "interesting" things in a GUI or via email.

I would recommend a log correlation application like LCE by Tenable. This application along with SecurityCenter provides highly configurable alerting and log monitoring across multiple platforms. www.tenable.com

I would hesitate to recommend Tenable's Security Center. For an enterprise of any size and/or complexity it is a very costly solution. Further, I know of a particular government deployment that is experiencing quite a bit of trouble conducting routine vulnerability scanning; granted, with a less than desirable deployment architecture, but the Security Center product is not living up to the hype.
one word: Splunk

There's a "free" limited version too.

The problem is a lot of support people have too much on their plates. Being a support person who handles the back end and the help desk, there is zero time to deal with security if you have 20 servers, 6 satellite offices and a long help desk queue, plus an administration that does not understand technology other than as an expense. I am sure I am not alone. I also suspect this is the reason many organizations get hacked.
@Overwhelmed - I know the feeling; the key is automation/scripting of log checking. Although you'll initially be trawling through screeds of legitimate logs, with a small amount of filtering it's fairly easy to get it down to the point where only significant logs are getting through to you (a tiny example of this approach is sketched after this comment). It's very easy to fall into the trap of filtering too broadly, though. In the grand scheme of things 20 servers don't really generate that many logs.
It's time well spent - after "expense", management also understands words like "breach" and "compromised" fairly well, and if you get put on the spot after an incident you'll feel like a complete idiot if you try to tell them you didn't have time to implement some rudimentary log checking. You'll also be a lot better armed to come back and say: we did everything in our power right and we still got owned, but here's what we can do better next time.
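
To put that filtering idea into something concrete, here is a tiny sketch that reads log lines on standard input and suppresses anything matching known-benign patterns, so only the leftover, "significant" lines reach a human. The ignore patterns shown are purely illustrative assumptions; build your own list gradually and review it so you don't filter too broadly:

    #!/usr/bin/env python3
    # Noise-filter sketch: drop lines matching known-benign patterns, pass the rest.
    # The ignore list is an assumption; build yours from events you've already vetted.
    import re
    import sys

    IGNORE = [
        re.compile(r"session opened for user backup\b"),
        re.compile(r"CRON\[\d+\]"),
    ]

    for line in sys.stdin:
        if any(p.search(line) for p in IGNORE):
            continue
        sys.stdout.write(line)   # anything not explicitly ignored is worth a look

Run it over a day's log, for example: python filter_noise.py < auth.log, and only the unvetted lines come out the other side.
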
Hi, if anyone is looking for a place to grab Jason Fossen's isascripts.org/scripts.zip file, a copy of this zip file has been posted at the site below with permission:

http://www.jigsolving.com/jigsovling/lost-vb-scripts-jason-fossens-isacripts-org-script-zip-file-can-be-found-here

Cheers
