Tip of the Day: Surviving the monthly patch cycle
Yesterday we announced we would write about tips on how people patch their systems after a Black Tuesday. Since Mike is apparently suffering from withdrawal symptoms after Defcon, his fellow handlers will do the honors.
There are basically a few tactics in use. What strikes me in the responses we got: most of those writing in value not breaking applications significantly more than patching before they get hit with an exploit. Perhaps a lot of work is still needed to convince (upper) management of the risks of patching late: patching even an hour after a worm or a targeted exploit hits you might cost the company significantly more than losing a few hours left and right over a not-so-critical system not being 100% healthy with a new patch.
Just patch
The folks doing this take the risk and let the patches roll out across their organization. They expect a few systems to fail more or less at random and will deal with them as they go. Should one of the patches prove incompatible with one of their critical applications, they will deal with it at that point.

This group by necessity also includes most home users, as they lack other means to patch. At best they can wait until others have hit problems, but if everybody does that, it won't work and only leaves you exposed for longer.
Test on limited scale, roll out carefully
You can use Microsoft tools like WSUS and delay the rollout a few hours to make sure a few test systems survive and can still run the critical applications. Smaller organizations typically use this with great success for the masses of general-purpose client machines.

Reader Ken wrote: "As we all know, patching any kind of operating system or application is fraught with dangers. In my environment, I don't have the luxury of the full test environment that I would love to have in order to test each patch against all the applications and services in use. That is just not possible with a limited budget.
In order to minimize the risk of a patch causing harm, I apply patches first to a set of known systems. The first system is my own workstation. I'd rather have it crash there than one of my coworkers' systems. After a day or so, the patches are then deployed to a subset of the systems (about 10) in the office. Finally, if there are still no issues found and no problems have been reported on sites such as the Internet Storm Center or on any of the security lists, the patches are distributed to all systems.
I actually use two tools for patch management. The first is Microsoft's WSUS service. I have all systems pointing there to get their updates. There are several advantages. The first is Internet bandwidth usage: the patches are only downloaded once for all the systems, which can be a major savings in terms of time and bandwidth. Second, I can specify how and when the patches are applied via a GPO. Third, I control which patches are installed; if there is a known problem with a specific patch, I can simply not release that one patch to the users. Finally, I can get a status on which patches are applied and which systems have had problems installing them. The other tool is Shavlik's NetChk. It allows me to deploy a number of non-Microsoft Windows application patches and also to verify that patches are indeed being installed.
I use a similar process when it comes to applying patches to UNIX systems: first my own system, then a subset of systems, and finally all the systems.
So far, I have not had a major problem applying patches. In almost every case where there was an issue, it surfaced within the smaller group of systems and the disruption was minimized.
Of course patching is not the only line of defense. I also have NIDS, firewalls, web proxy servers, virus scanning and log monitoring in place to try to reduce the risk to the office. Recently, user information security awareness sessions have also been started within the organization. This helps bring the users into the equation of defending the company against malicious software and web sites."
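To make Ken's ring approach a bit more concrete, here is a minimal Python sketch of the idea. The ring names, machine lists and soak times are invented for illustration; nothing here talks to WSUS or NetChk, and it is not his actual setup.

```python
# A minimal sketch of a staged rollout: own workstation first, then a small
# pilot group, then everything else, with a soak period before each widening.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ring:
    name: str
    machines: list
    soak: timedelta  # how long the patch must have been out before this ring gets it

ROLLOUT = [
    Ring("admin-workstation", ["ws-admin"], timedelta(0)),
    Ring("pilot-group", [f"ws-{i:02d}" for i in range(10)], timedelta(days=1)),
    Ring("all-systems", ["(everything else)"], timedelta(days=3)),
]

def eligible_rings(released_at: datetime, now: datetime, problems_reported: bool):
    """Return the rings a patch may currently be deployed to."""
    if problems_reported:
        return []                      # freeze the rollout if ISC or the lists report trouble
    elapsed = now - released_at
    return [ring for ring in ROLLOUT if elapsed >= ring.soak]

if __name__ == "__main__":
    released = datetime(2006, 8, 8, 19, 0)   # hypothetical Black Tuesday release time
    for ring in eligible_rings(released, released + timedelta(days=2), False):
        print(f"deploy to {ring.name}: {len(ring.machines)} machine(s)")
```

The point of the sketch is simply that widening the rollout is conditional on both elapsed soak time and the absence of reported problems.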
Test applications thoroughly
Testing applications exhaustively is next to impossible; at the very best you can test a few critical operations in your application, and at some point you will have to trust it. This approach is most often used on critical servers. The big drawback is that it takes time and resources to get it done. But weigh that against the alternative: ending up with an incompatible patch deployed means the pain of rolling it back out and having to repair the potential damage.

Mike wrote in on their strategy: "Simple strategy really:
- obtain patches, vet requirements and deploy to a QA environment containing like-for-like hosts: Exchange, SQL, IIS, workstation builds, etc.
- test, monitor, test, monitor...
- deploy to a pre-production group
- monitor, monitor, monitor
- deploy to primary production group
- monitor
- push out to remaining hosts/workstations."
Personally I like the last line of his comment, as it shows they are trying to balance the heavy testing scheme with a fast track for getting those "PATCH NOW" patches out.
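For illustration only, here is a rough Python sketch of such a gated pipeline with an assumed shortened path for "PATCH NOW" cases. The stage names and the monitor() stub are hypothetical stand-ins; in reality each gate is people watching the QA and pre-production hosts, not an automated check.

```python
# Gated rollout: each stage must look healthy before the next one is touched.
STAGES = ["qa", "pre-production", "primary-production", "remaining-hosts"]
FAST_TRACK = ["qa", "remaining-hosts"]     # assumed emergency path, not Mike's words

def monitor(stage: str) -> bool:
    """Stand-in for the 'test, monitor, test, monitor' step."""
    print(f"  monitoring {stage} ...")
    return True                            # pretend everything stayed healthy

def deploy(patch: str, emergency: bool = False) -> None:
    plan = FAST_TRACK if emergency else STAGES
    for stage in plan:
        print(f"deploying {patch} to {stage}")
        if not monitor(stage):
            print(f"problems in {stage}; halting rollout of {patch}")
            return
    print(f"{patch} fully deployed")

if __name__ == "__main__":
    deploy("example-bulletin")                   # normal monthly cycle
    deploy("example-bulletin", emergency=True)   # hypothetical PATCH NOW fast track
```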
Fully featured planned rollouts
Some organizations might (need to) plan all their patching ahead of time and draw up a roll-out plan that covers a long period before they come full circle and start over.

We had one such anonymous submission: "On the day after Black Tuesday, a task force meets to discuss the recently released patches. There is a set of ~100 users who represent all applications used. They get the patches via MS SMS to test. Once they verify their apps still function as expected, the patches are sent out via SMS each week to four predefined patch groups. This process lasts a month. Lather. Rinse. Repeat."
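As a toy illustration of that cadence, the following Python sketch lays out the dates, assuming the task force signs off the day after Black Tuesday and using made-up group names; the real deployment of course ran over MS SMS.

```python
# Month-long cadence: test users sign off, then four predefined patch groups
# receive the patches one week apart.
from datetime import date, timedelta

def patch_schedule(black_tuesday: date,
                   groups=("patch-group-1", "patch-group-2", "patch-group-3", "patch-group-4")):
    """Map each predefined patch group to its planned deployment date."""
    signoff = black_tuesday + timedelta(days=1)   # task force meets the day after
    return {name: signoff + timedelta(weeks=i + 1) for i, name in enumerate(groups)}

if __name__ == "__main__":
    for group, day in patch_schedule(date(2006, 8, 8)).items():
        print(f"{group}: deploy on {day}")
```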
Divide and conquer
A well-known strategy from the real world can be used here: divide the machines to be patched into different categories and tackle each differently (a short sketch after this list makes the categories concrete). E.g.:
- The general clients, not mission critical, could be patched as soon as the patches are available from Microsoft. This will yield some fallout left and right, but just be ready to pick those systems up; they would get in trouble anyway. Why take the risk here? Well, those systems might be the laptops that leave the next day on a three-week trip and are used in the meantime in hotels, airports and other (potentially hostile) networks without a decent chance to get patched. Or they could be the laptop that heads off to a coffee shop with a hotspot and works there for a few hours, exposed to every other visitor there. It gets worse if they pick up something evil and bring it home to a network of unpatched systems ...
- For servers that are not mission critical, you could wait out the not-so-"PATCH NOW" patches and roll them out once you see no problems reported, or you could just roll them out and be ready to roll back if you see problems. After all, they are not mission critical.
- Mission-critical servers should have many layers protecting them from evil, even from internal users. They should also not be exposed to most of the internal machines, and they could remain unpatched or even isolated for quite a while, until you get the chance to run the mission-critical tests in a QA lab and roll out the patches knowing they won't break anything.
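Here is the promised sketch, in Python, with an invented inventory and policy wording; it only shows the idea of classifying each machine once and attaching a different patching policy to each class.

```python
# Divide and conquer: one policy per machine category.
POLICIES = {
    "general-client":      "patch as soon as Microsoft releases; accept some fallout",
    "non-critical-server": "wait for problem reports (or patch and be ready to roll back)",
    "mission-critical":    "shield and isolate; patch only after QA-lab testing",
}

INVENTORY = {
    "laptop-sales-17": "general-client",
    "intranet-web-01": "non-critical-server",
    "erp-db-01":       "mission-critical",
}

def patch_plan(inventory):
    """Print the applicable policy for every machine in the inventory."""
    for host, category in sorted(inventory.items()):
        print(f"{host:16s} {category:20s} -> {POLICIES[category]}")

if __name__ == "__main__":
    patch_plan(INVENTORY)
```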
Let's not forget that one of the reasons Microsoft is slow to release patches (aside from the obvious marketing impact) is that they test those patches. So you only get tested patches to start with ...
Thanks to all those writing in!
--
Swa Frantzen -- Section 66
Keywords: ToD