Log files - are you reviewing yours?
The media is full of security horror stories of company after company being breached by attackers, but very little information is actually forthcoming on the real details.
As an incident responder I try to understand what occurred and learn from these attacks, so I'm always looking for factual details of what actually happened, rather than conjecture, hearsay or pure guesswork.
Back in April, Barracuda Networks, a security solution provider, was compromised and lost names and email addresses. They disclosed the breach, then took the admirable step of publishing how it took place, with screenshots of logs and their lessons learnt from the attack [1].
I hope that those unfortunate enough to suffer future breaches are equally generous in sharing their logs and lessons learnt, so the rest of us can understand the attacks and adapt our own systems. The attackers certainly share their tips and tricks, as anyone looking at the chat logs uploaded to public sites like pastebin can attest. We need the very smart folks looking after security at these attacked companies to step up and take the time to write up what really happened, making it accessible for the rest of us to learn from.
Seeing the events of an attack recorded in log files is a terrible, yet beautiful thing. To me it means we, as defenders, did one thing right, since detection is always a must. If the attack couldn't be, or wasn't, blocked, then being able to replay how a system was compromised is the only way forward to stopping it from occurring again.
Log review should be an intrinsic routine performed by everyone, daily if possible, whether it's a visual, line-by-line review* or done with grep, a simple batch script or a state-of-the-art security information and event management (SIEM) system that parses the logs into a format even a novice IT person can read, digest and understand. This should be part of the working day for all levels of support and security staff; drinking that morning coffee while flicking through the highlights of your systems should be part of the job description.
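To make that concrete, here's a minimal Python sketch of the grep-style approach, assuming plain-text logs; the keyword patterns and log path are placeholders you'd tune to your own environment:

#!/usr/bin/env python3
"""Minimal daily log triage sketch: flag lines matching suspicious keywords.

Assumptions: plain-text log files and a keyword list you would tune
to your own environment - the patterns below are only examples.
"""
import glob
import re

# Hypothetical patterns - adjust to whatever matters in your logs.
SUSPICIOUS = re.compile(r"failed login|denied|error|sqlmap|xp_cmdshell", re.IGNORECASE)

def review(path_glob: str) -> None:
    for path in glob.glob(path_glob):
        with open(path, errors="replace") as log:
            for lineno, line in enumerate(log, 1):
                if SUSPICIOUS.search(line):
                    print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    review("/var/log/*.log")  # example location; point at your own logs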
Log files need to be easy to understand and extract information from. For anyone who works with huge Windows IIS log files, automation is your friend here. Jason Fossen's Search_Text_Log.vbs script [2] is a great starting point for scripters, and for a more dynamic analysis tool, Microsoft's Log Parser [3] is well worth taking the time to get to grips with. As an example of the information you can extract from IIS logs, have a read here [4] to see how easy it is to pull pertinent data, and this blog piece [5] has an excellent way to get visual trending of IIS data.
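If Log Parser isn't an option, a short script can pull similar pertinent data. Here's a rough Python sketch that assumes the standard W3C extended log format, where a '#Fields:' directive names the space-separated columns; the field names and file name below are common IIS defaults, not taken from the linked articles:

#!/usr/bin/env python3
"""Sketch of pulling basic stats from an IIS W3C extended log.

Assumes the '#Fields:' directive defines the columns; check your own
log's '#Fields:' line for the exact names.
"""
from collections import Counter

def summarize(path: str) -> None:
    fields, statuses, uris = [], Counter(), Counter()
    with open(path) as log:
        for line in log:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # column names follow the directive
                continue
            if line.startswith("#") or not line.strip():
                continue  # skip other directives and blank lines
            row = dict(zip(fields, line.split()))
            statuses[row.get("sc-status", "?")] += 1
            uris[row.get("cs-uri-stem", "?")] += 1
    print("Status codes:", statuses.most_common())
    print("Top 10 URIs:", uris.most_common(10))

if __name__ == "__main__":
    summarize("ex110620.log")  # example file name; use your own log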
If log analysis isn't something you do much of, then a marvellous way to get some practice in is this Honeynet.org challenge [6].
It's important to note that logging has to be enabled on your systems, set up correctly and reviewed to produce useful information. Multiple logging sources have to use the same time source to make correlation easy, so take the time to make sure your environment is configured and logging correctly before you need to review the logs for an incident.
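As a quick illustration of why a common time source matters, here's a small Python sketch that merges two hypothetical log sources onto a single UTC timeline; the timestamp formats, offsets and sample events are invented for the example:

#!/usr/bin/env python3
"""Sketch: merge events from two log sources onto one UTC timeline.

Assumes you know each source's timestamp format and timezone offset -
exactly the configuration detail worth pinning down before an incident.
"""
from datetime import datetime, timedelta, timezone

def to_utc(stamp: str, fmt: str, utc_offset_hours: int) -> datetime:
    local = datetime.strptime(stamp, fmt)
    return (local - timedelta(hours=utc_offset_hours)).replace(tzinfo=timezone.utc)

# Hypothetical events from a web server (UTC) and a firewall (UTC+10).
web = [("2011-06-20 03:15:02", "GET /admin.php 500")]
fw = [("20/06/2011 13:14:59", "ACCEPT tcp 203.0.113.9 -> 10.0.0.5:80")]

timeline = sorted(
    [(to_utc(t, "%Y-%m-%d %H:%M:%S", 0), "web", m) for t, m in web] +
    [(to_utc(t, "%d/%m/%Y %H:%M:%S", 10), "fw", m) for t, m in fw]
)
for when, source, message in timeline:
    print(when.isoformat(), source, message)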
As always, if you have any suggestions, insights or tips please feel free to comment.
[1] http://blog.barracuda.com/pmblog/index.php/2011/04/26/anatomy-of-a-sql-injection-attack/
[2] http://www.isascripts.org/scripts.zip
[3] http://www.microsoft.com/downloads/en/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
[4] http://blogs.iis.net/carlosag/archive/2010/03/25/analyze-your-iis-log-files-favorite-log-parser-queries.aspx
[5] http://blogs.msdn.com/b/mmcintyr/archive/2009/07/20/analyzing-iis-log-files-using-log-parser-part-1.aspx
[6] http://www.honeynet.org/challenges/2010_5_log_mysteries
* For your own time management, eyesight and, frankly, sanity, try to avoid this.
Chris Mohan --- Internet Storm Center Handler on Duty
Comments
Maybe we could begin compiling some tips as to WHAT to look for in logs, not just "do you look at your logs?" Here are a few to start with:
1. Know what traffic is permitted INTO your network. Set a firewall filter to display all accepted traffic from non-approved sources. This can alert you to someone changing a rule without letting you know, or to a rule having unintended side effects.
2. Have a very restrictive outbound rule set and then monitor all traffic trying to exit the network and getting dropped. This will usually be some misconfigured Windows system, but occasionally it can be a malware-infected one (see the sketch after this list).
3. Make sure all of your systems sync to an internal time server. Then restrict access to Internet time servers. Monitor for attempts to sync time via the Internet from unknown sources. We've detected unauthorized consumer wireless access points this way. Those home routers usually try to time sync to the Internet, so that can be an indirect way of detecting them. More likely it's another misconfigured Windows system, though.
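To illustrate tip 2, here's a rough Python sketch that tallies dropped outbound traffic by internal source and destination port; the log line format in the regex is hypothetical, so adjust it to whatever your firewall actually exports:

#!/usr/bin/env python3
"""Sketch: review dropped outbound traffic in a firewall log.

Assumes a hypothetical syslog-style line format like
'DROP <proto> <src_ip>:<port> -> <dst_ip>:<port>'.
"""
import re
from collections import Counter

DROP_LINE = re.compile(
    r"DROP \w+ (?P<src>\d+\.\d+\.\d+\.\d+):\d+ -> (?P<dst>\d+\.\d+\.\d+\.\d+):(?P<dport>\d+)"
)

def noisy_internal_hosts(path: str, top: int = 10) -> None:
    talkers = Counter()
    with open(path) as log:
        for line in log:
            match = DROP_LINE.search(line)
            if match:
                talkers[(match["src"], match["dport"])] += 1
    # A misconfigured Windows box is the usual suspect; a host hammering
    # one odd port outbound is worth a closer look for malware.
    for (src, dport), count in talkers.most_common(top):
        print(f"{src} -> port {dport}: {count} drops")

if __name__ == "__main__":
    noisy_internal_hosts("firewall.log")  # example file name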
JJ
Jun 20th 2011
There's a "free" limited version too.
Michael
Jun 20th 2011
It's time well spent. After "expense", management also understands words like "breach" and "compromised" fairly well, and if you get put on the spot after an incident you'll feel like a complete idiot trying to explain that you didn't have time to implement some rudimentary log checking. Do it, and you'll be a lot better armed to come back and say: we did everything in our power right and we still got owned, but here's what we can do better next time.
Genima
Jun 20th 2011
http://www.jigsolving.com/jigsovling/lost-vb-scripts-jason-fossens-isacripts-org-script-zip-file-can-be-found-here
Cheers
Genima
Jun 10th 2017