
"Blocked" Does Not Mean "Forget It"

Published: 2018-05-24. Last Updated: 2018-05-24 07:16:52 UTC
by Xavier Mertens (Version: 1)

Today, organisations face regular waves of attacks, targeted or not. We deploy tons of security controls to block them as early as possible, before they reach their targets. Because of the amount of information generated daily, we usually stop caring about those attacks once they have been blocked. A perfect example is blocked emails. But “blocked” does not mean that we can forget them: that data still contains valuable information.

Tons of emails are blocked by your <name_your_best_product> solution and you feel safe. Sometimes one of them isn’t detected and is dropped in a user’s mailbox, but you have an incident handling process, or the user simply deletes it because he/she received security awareness training. Everybody is happy in this wonderful world.

What if your organization is targeted and spear phishing emails are received and (hopefully) blocked? A good idea is to review those blocked emails on a daily basis and to search for interesting keywords that could indicate a specifically crafted message targeting the organization.

Interesting keywords to search for could be (a quick review script is sketched after the list):

  • Your domain names

  • Your brands

  • Terms related to your business (health, finance, government, …)

  • ...
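To make this a daily routine, the review can be scripted. Here is a minimal Python sketch, assuming your mail gateway can export blocked messages as .eml files into a local directory; the path and the keyword list are illustrative, not tied to any particular product:

# A minimal sketch, assuming blocked messages are exported as .eml
# files into a local directory. Paths and keywords are illustrative;
# replace them with your own domains, brands and business terms.
import email
import email.policy
from pathlib import Path

KEYWORDS = ["example.com", "examplebrand", "invoice", "payroll"]
QUARANTINE_DIR = Path("/var/quarantine")  # hypothetical export location

def matched_keywords(msg):
    """Return the keywords found in the subject or body of a message."""
    subject = msg.get("Subject", "") or ""
    body = msg.get_body(preferencelist=("plain", "html"))
    text = (subject + " " + (body.get_content() if body else "")).lower()
    return [k for k in KEYWORDS if k.lower() in text]

for eml in QUARANTINE_DIR.glob("*.eml"):
    with eml.open("rb") as f:
        msg = email.message_from_binary_file(f, policy=email.policy.default)
    hits = matched_keywords(msg)
    if hits:
        print(f"{eml.name}: from={msg.get('From')} matched={hits}")

Run daily (from cron, for example), anything it prints is a candidate for a closer look.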

If such messages are detected, they can be a good indicator that something is brewing, and a cue to take appropriate actions, like raising your SOC DEFCON[1] level or proactively warning users that spear phishing campaigns are ongoing.

Stay safe!

[1] https://en.wikipedia.org/wiki/DEFCON

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key


Comments

Definitely! If your log server allows it, you can add rules for interesting things to look for. For instance, my own log server uses Elasticsearch and Kibana with a syslog daemon I wrote in nodejs. It allows me to write log analysis modules that look for sequences of events or specific events and log their own events. So I made a module to watch for email matching specific patterns from a phisher targeting our Japanese offices, along the lines of:
srcHost:*.google.com AND (subject:(*apple* *itunes* *icloud*) OR fromUser:(postmaster *apple*) OR fromDomain:*apple*)

You get the idea... So even though we were already blocking 99% of these phish using patterns in the message body (he always used URL shortening services), these log analysis rules meant I could also identify new fake-Apple domains he'd registered sooner rather than later, and those went right into our email and DNS filters. Occasionally, when one DID leak through, the odds were fair that we were already blocking the domain used in his chain of URL redirections.
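For readers who want to try the same idea, here is a minimal Python sketch (the commenter's setup is nodejs; the elasticsearch-py 8.x client, the endpoint, the index name "mail-logs" and the field names are all assumptions, not from the original setup) that runs such a pattern as a Lucene query_string search:

# A minimal sketch of the same idea; endpoint, index and field
# names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# query_string accepts the same Lucene syntax as the Kibana search bar.
lucene = ('srcHost:*.google.com AND (subject:(*apple* *itunes* *icloud*) '
          'OR fromUser:(postmaster *apple*) OR fromDomain:*apple*)')

resp = es.search(index="mail-logs",
                 query={"query_string": {"query": lucene}},
                 size=50)
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("fromDomain"), src.get("subject"))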
Thanks for sharing!
The last phish I ran across was a man-in-the-middle proxy (a web relay server, if you will) that was careful enough to mimic the target site exactly, rewriting the pages on the fly (preg_replace or sed style) as they passed through the MitM. It presented exact look-alike logins through the target site's own modals and even replicated the 2FA, with only one catch: I'm no idiot, and the phisher left his keys in the JS that saved the logins to his DB, along with the IP, port and protocol...

Is it wrong or illegal to log in to a phish site a million times a minute with a stored DB of generic usernames and passwords?
