Taking a Shot at Reverse Shell Attacks, CNC Phone Home and Data Exfil from Servers

Published: 2021-02-01. Last Updated: 2021-02-01 14:17:44 UTC
by Rob VandenBrink (Version: 1)

Over the last few weeks (after the SolarWinds Orion news) there's been a lot of discussion on how to detect if a server-based application is compromised.  The discussions have ranged from buying sophisticated new tools to auditing the development pipeline to diffing patches.  But really, for me it's as simple as asking "should my application server really be able to connect to any internet host on any protocol?"  Let's take it one step further and ask "should my application server really be able to connect to arbitrary hosts on tcp/443 or udp/53 (or any other protocol)?"  When you phrase it that way, the answer really should be a simple "no".

For me, fixing this should be a simple thing.  Let's phrase it in the context of the CIS Critical Controls (https://www.cisecurity.org/controls/):
CC1: server and workstation inventory
CC2: software inventory 
(we'll add more later)

I know these first two sound simple - but in your organization, do you have a list of the software running on each of your servers?  With the inbound listening ports?  How about the outbound ports that connect to known internet hosts?
This list should be fairly simple to create - figure a few minutes to an hour or so per application to phrase it all in terms that you can build firewall rules from.
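
If you need a starting point, here's a minimal sketch for a Linux server using the standard iproute2 "ss" tool (adapt for your platform - on Windows, "netstat -ano" or the Get-NetTCPConnection PowerShell cmdlet give you similar views):

# what is this server listening on (tcp and udp, with the owning process)?
sudo ss -tulpn
# who is it talking to right now?
sudo ss -tnp state established

Run these a few times over a normal business cycle - a single snapshot will miss scheduled jobs and update checks.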

CC12:
Now, for each server make an egress filter "paragraph" for your internet-facing firewalls.  Give it permission to reach out to its list of known hosts and protocols.  It's rare that you will have hosts that need to reach out to the entire internet - email servers on the SMTP ports are the only ones that immediately come to mind, and we're seeing fewer and fewer of those on premises these days.
Also CC12:
So now you have the list of what's allowed for that server.  Add the line "permit <servername> any ip log" - in other words, permit everything else, but log it to syslog.  Monitor that server's triggered logs for a defined period of time (a day or so is usually plenty).  Be sure to trigger any "update from the vendor" events that might be part of any installed products during that window, so they show up in the logs.  After that period of time, change that line to "deny <servername> any ip log" - now we're denying outbound packets from that server, but still logging them.
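
As a concrete (and hypothetical) example, here's that stanza in Cisco-flavoured ACL syntax, with documentation addresses standing in for real hosts - translate this to whatever your firewall speaks:

ip access-list extended SERVER-EGRESS
  remark app server 192.0.2.10 - known destinations only
  permit tcp host 192.0.2.10 host 198.51.100.25 eq 443 log
  permit udp host 192.0.2.10 host 198.51.100.53 eq 53 log
  remark monitoring phase - permit and log everything else
  permit ip host 192.0.2.10 any log

After the monitoring window, that last "permit ip" line becomes "deny ip host 192.0.2.10 any log".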

What about my Linux servers, you ask?  Don't they need all of github and everything else in order to update?  No, no they do not.  To get the list of repos that your server reaches out to for updates:

robv@ubuntu:~$ sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Done

robv@ubuntu:~$ cat /etc/apt/sources.list | grep -v "#" | grep deb
deb http://us.archive.ubuntu.com/ubuntu/ focal main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ focal multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu focal-security main restricted
deb http://security.ubuntu.com/ubuntu focal-security universe
deb http://security.ubuntu.com/ubuntu focal-security multiverse

(this lists all sources, filters out the comment lines, and grepping for "deb" nicely filters out the blank lines)

Refine this list further to just get the unique destinations:

robv@ubuntu:~$ cat /etc/apt/sources.list | grep -v "#" | grep deb | cut -d " " -f 2 | sort | uniq
http://security.ubuntu.com/ubuntu
http://us.archive.ubuntu.com/ubuntu/

So for a stock Ubuntu server, the answer is two - you need access to just two hosts to do a "direct from the internet" update. Your mileage may vary depending on your configuration though.
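
One caveat - many distros also split sources out into /etc/apt/sources.list.d/, so a more complete check (assuming the traditional one-line .list format) might look like:

robv@ubuntu:~$ grep -hv "^#" /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null | grep deb | cut -d " " -f 2 | sort -u

(grep -h suppresses the filename prefixes so the cut field still lines up, and sort -u collapses the duplicates in one step)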

How about Windows?  For a standard application server, the answer usually is NONE.  You likely have an internal WSUS, SCCM or SCOM server, right?  That takes care of updates.  Unless you are sending mail with that server (which can be limited to just tcp/25, and most firewalls will restrict that to valid SMTP), your server is likely providing a service, not reaching out to anything.  Even if the server does reach out to arbitrary hosts, you can likely restrict it to specific destination hosts, subnets, protocols or countries.

With a quick inventory done, creating a "stanza" for each server's outbound permissions goes pretty quickly.  For each line, you'll be able to choose a logging action of "log", "alert" or "don't log".  Think about these choices carefully, and select the "don't log" option at your peril.  The last line of each server's outbound stanza should almost without fail be your firewall's equivalent of "deny ip <servername> any log".
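
Once those deny lines start logging hits, a quick summary helps with triage.  A hypothetical sketch, assuming your firewall syslogs to /var/log/firewall.log, tags these rules "SERVER-EGRESS", and puts the destination in the last field (your log format will differ, so adjust the field you pull out):

robv@ubuntu:~$ grep SERVER-EGRESS /var/log/firewall.log | grep -i deny | awk '{print $NF}' | sort | uniq -c | sort -rn

(this counts the denied destinations, most-hit first - each one should end up as either a justified "allow" rule or a security incident, never a reflexive "allow")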

Be sure that your server change control procedure includes the question "after this change, does the application or server need any additional (or fewer) internet accesses?"

The fallout from this?  Surprisingly little.  

  • If you have administrators who RDP to servers, then use the browser on that server for support purposes, this will no longer work for them.  THIS IS A GOOD THING.  Browse to potentially untrusted sites from your workstation, not the servers in the server VLAN!
  • As you add or remove software, there's some firewall rule maintenance involved.  If you skip that step, then things will break when you roll those changes out to the servers.  This "tie the firewall to the server functions" step is something we all should have been doing all along.
  • But I have servers in the cloud, you say?  It's even easier to control outbound access in any of the major clouds, either with native tools (see the sketch after this list) or by implementing your <insert vendor here> cloud-based or virtual firewall.  If you haven't been focused on firewall functions for your cloud instances, you should drop your existing projects and focus on that for a week or so (seriously, not joking).
  • On the plus side, you'll have started down the path of implementing the Critical Controls.  Take a closer look at them if you haven't already - there are only good things to find there :-)
  • Also on the plus side, you'll know which IPs, subnets and domains your purchased applications reach out to.
  • Just as important, or even more so - you'll have that same information for your in-house applications.
  • Lastly, if any of your hosts or applications reaches out to a new IP, it's going to be blocked and will raise an alert.  If it ends up being reverse-shell or C&C traffic, you can definitively say that you blocked that traffic.  (score!)
  • Lastly-lastly - treat denied server packets as security incidents.  Make 100% sure that denying a packet breaks something before allowing it.  If you just add an "allow" rule for every denied packet, then at some point you'll just be enabling malware to do its best.
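
For the cloud case above, a minimal sketch using the AWS CLI - the security group ID and addresses here are placeholders, and each cloud has its own equivalent:

# remove the default allow-everything egress rule from the server's security group
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
# then allow only what that server actually needs - here, https to one known vendor host
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol":"tcp","FromPort":443,"ToPort":443,"IpRanges":[{"CidrIp":"198.51.100.25/32"}]}]'

(security groups are default-deny once that allow-all rule is gone; pair this with VPC Flow Logs if you want the "deny but log" behaviour from the firewall approach above)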

For most organizations with fewer than a hundred server VMs, you can turn this into an "hour or two per day" project and get it done in a month or so.

Will this catch everything?  No - you still need to address workstation egress, but that's a doable thing too (https://isc.sans.edu/forums/diary/Egress+Filtering+What+do+we+have+a+bird+problem/18379/).  Would this have caught the SolarWinds Orion code in your environment?  Yes, parts of it - in most shops the Orion server does not need internet access at all (if you don't depend on the application's auto-update process), and even then it's a short "allow" list.  And if the reaction is to treat denied packets seriously, you'd have caught it well before it hit the news (this was a **lengthy** incident).

The fact that nobody caught it in all that time really means that we're still treating outbound traffic with some dangerous mindsets: "we trust our users" (to not make mistakes), "we trust our applications" (to not have malware) and "we trust our server admins" (to not do dumb stuff like browse from a server, or check their email while on a server).  If you read these with the text in the brackets, I hope you'll agree that these are mindsets we should set aside - maybe we should have done that back in the early 2000s!  This may seem like an over-simplification, but really it's not - this approach really does work.

If you've caught anything good with a basic egress filter, please share using our comment form (NDA permitting of course).

Referenced Critical Controls:

CC1: Inventory and Control of Hardware Assets (all of it, if you haven't done this start with your server VLAN)
CC2: Inventory and Control of Software Assets (again, all of it, and again, start with your server VLAN for this)
CC7.6 Log all URL requests from each of the organization's systems, whether on-site or a mobile device, in order to identify potentially malicious activity and assist incident handlers with identifying potentially compromised systems.
CC9.1 Associate active ports, services, and protocols to the hardware assets in the asset inventory.
CC9.4 Apply host-based firewalls or port-filtering tools on end systems, with a default-deny rule that drops all traffic except those services and ports that are explicitly allowed.
CC12.4 Deny communication over unauthorized TCP or UDP ports or application traffic to ensure that only authorized protocols are allowed to cross the network boundary in or out of the network at each of the organization's network boundaries.
CC12.5 Configure monitoring systems to record network packets passing through the boundary at each of the organization's network boundaries.

 

===============
Rob VandenBrink
rob@coherentsecurity.com
