OS X VPN vuln; XSS from unexpected places; some ramblings on storage
Mac OS X vpnd vulnerability disclosed
A local buffer overflow vulnerability in Apple Mac OS X was disclosed on the VulnWatch mailing list today. The VPN daemon (vpnd), which is suid by default, can reportedly be exploited by a local non-privileged user to execute arbitrary code.
CVE ID CAN-2005-1343 has been assigned to this. Apple Security Update 2005-005 fixes this and other issues. http://docs.info.apple.com/article.html?artnum=301528
XSS from unexpected places
A friend of the ISC passed along an email discussing code insertion showing up in whois records. Whois servers maintain contact information for registered domains and IP address allocations. Through the use of '<script>' tags, code embedded in those records can be executed by a browser when the records are viewed, for example through a web-based whois lookup.
Similar unexpected sources of code may include network traces, system logs, and anything else that can contain text from potentially untrusted origins. If you're using Mozilla Firefox as a browser, the "prefbar" extension lets you quickly turn Java/JavaScript/Flash on and off without navigating menus, which makes applying POLP (the Principle of Least Privilege) a lot more convenient.
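If you're the one publishing such data on a web page, the fix belongs on the display side. Here is a minimal Python sketch (the function and sample record are mine, purely illustrative) showing untrusted whois text being HTML-escaped so embedded script renders as inert text:

# Hypothetical sketch: HTML-escape untrusted whois output before it is
# rendered, so an embedded '<script>' tag displays as text instead of
# executing in the viewer's browser.
import html

def render_whois_record(raw_record: str) -> str:
    """Return an HTML fragment that shows the record as inert text."""
    # html.escape() turns <, >, & and quotes into entities,
    # e.g. "<script>" becomes "&lt;script&gt;".
    return "<pre>" + html.escape(raw_record) + "</pre>"

if __name__ == "__main__":
    hostile = "Registrant: Evil Corp <script>alert('xss')</script>"
    print(render_whois_record(hostile))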
Some ramblings on storage
Someone emailed asking for recommendations on storing security log data, and I thought my reply might be useful for others facing similar planning challenges:
We keep data around for three reasons: Someone important says we need
to, we are going to use the data later, or we just like having it or
are too lazy to clean it out. A major goal in any storage project
should be to get that down to the first two. Although I haven't
gathered empirical evidence to support this, I'll venture that a
large percentage of our storage challenges are the result of number
three. That can most readily be dealt with through self-discipline
and/or draconian quotas. Let's examine the first two a little more.
First, we need to contend with regulatory obligations. In the case of
SOX or HIPAA, there are plenty of folks making money by helping
companies out with compliance, since there are fairly healthy data
retention and security requirements with both. For my research, I am
required by the funders to keep all raw data on hand for at least
three years - we are building yet another data library at this time.
You may have other requirements - speak with your compliance officer
so you can be sure to place the blam^H^H^H^H^H^H^H^H^H^H^H^H^Hcover
all the bases.
The secondary concern is usefulness. The usefulness of a record
depreciates with time, as its contextual relevance decays. In other
words, as a piece of data becomes less likely to be correlated with
new events, its value (compliance aside) shrinks. If you are
doing no analysis at all, then the usefulness of any logs is nil.
If you see something in your firewall logs and wish to correlate with
the past month's worth of logs, they had better be available. Without
anyone doing such analysis, the month of logs really isn't all that
useful. A grocery store owner probably doesn't need to keep those
firewall logs very long; the same goes for her video surveillance tapes
(she'll know pretty quickly whether or not those need reviewing!). Her
stock ledgers and employee records, on the other hand, are likely to be
held onto for years, either because a government body says so or
because she recognizes the efficiency gains she can reap by correlating
data and trending "shrinkage".
Aggregate analysis of data (bandwidth percentiles, IDS alert class
frequency, storage utilization, etc.) requires raw metrics be retained
until that aggregation takes place. After that, the raw data can be
discarded if you will not be doing anything further with it. Any
meta-analysis using the monthly aggregates will necessitate storage &
retrieval of those aggregates, until *they* are no longer needed. And
so on, and so on. Essentially, it is a distilling process, where you
boil off what isn't needed any longer, while the essence of it remains
available for consumption or later cooking.
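As a rough illustration of that distillation, here is a Python sketch
that boils raw per-interval bandwidth samples down to a monthly
aggregate; the sample data, the five-minute interval, and the
95th-percentile choice are all assumptions of mine, not a recommendation:

# Illustrative sketch of the "distilling" idea: reduce raw per-interval
# bandwidth samples to a monthly aggregate, then keep only the aggregate.
def percentile(samples, pct):
    """Simple nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# Raw metrics: one bandwidth reading (Mbit/s) per five-minute interval.
raw_samples = [12.0, 35.5, 80.2, 14.1, 9.7, 55.0, 61.3, 22.8]

monthly_aggregate = {
    "p95_mbps": percentile(raw_samples, 95),
    "peak_mbps": max(raw_samples),
    "sample_count": len(raw_samples),
}
print(monthly_aggregate)

# Once the report is cut, the raw samples can be discarded; only the
# aggregate needs to stay around for later meta-analysis.
del raw_samples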
With network traffic, this distillation process often starts at the
router, returning netflows rather than raw packet data. In the case of
IDS, raw packet captures are often fed through a rule-based engine
whose findings are reported, sometimes including the specific
libpcap records that are related to an alert event. Once you have this
data you then draw pretty graphs, give your monthly briefing, and earn
a nice bonus. Then you can discard the raw data, keeping the graphs
for the quarterly summary. However, if Kirby from accounting is
colorblind and you are likely to be asked to recreate them as bar
graphs instead of pie charts, you may want to hang onto the raw data a
little longer, at least until after the quarterly.
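To make those alert-related captures concrete, here is a hypothetical
sketch using the scapy library (the alerted address and file names are
made up) that keeps only the packets involving an alerted host, so the
full raw capture can be aged out:

# Hypothetical sketch: extract only the libpcap records related to an
# alert - here, any packet to or from the alerted host.
from scapy.all import IP, rdpcap, wrpcap

ALERTED_HOST = "192.0.2.99"

packets = rdpcap("full-capture.pcap")
related = [p for p in packets
           if p.haslayer(IP) and ALERTED_HOST in (p[IP].src, p[IP].dst)]

wrpcap("alert-192.0.2.99.pcap", related)
print("kept %d of %d packets" % (len(related), len(packets)))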
I personally am a packet geek. I love to roll around in them and get
dirty as much as possible, but I recognize their value diminishes
pretty rapidly, so I generally go with full packets at first, then
reduce them with editcap to 128 bytes after 48 hours, then to 48 bytes
after a month. The statistics pulled from them I keep around for a
year, then they are overwritten.
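If you want to automate a schedule like that, something along the lines
of this sketch would do it; the spool directory and age thresholds are
assumptions, and editcap's -s option (set the snapshot length) does the
actual truncation:

# Rough sketch of the reduction schedule above: truncate captures to a
# 128-byte snaplen after 48 hours and to 48 bytes after a month.
import os
import subprocess
import time

CAPTURE_DIR = "/var/captures"      # hypothetical capture spool
TWO_DAYS = 48 * 3600
ONE_MONTH = 30 * 24 * 3600

def truncate(pcap_path, snaplen):
    """Rewrite a capture in place with packets cut to `snaplen` bytes."""
    tmp = pcap_path + ".tmp"
    subprocess.run(["editcap", "-s", str(snaplen), pcap_path, tmp],
                   check=True)
    os.replace(tmp, pcap_path)

now = time.time()
for name in os.listdir(CAPTURE_DIR):
    path = os.path.join(CAPTURE_DIR, name)
    age = now - os.path.getmtime(path)
    if age > ONE_MONTH:
        truncate(path, 48)
    elif age > TWO_DAYS:
        truncate(path, 128)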
Back to your question, "any pointers?". I advise a strategy along
these lines:
1. Identify any regulatory requirements
2. Examine current and anticipated analysis methods
3. Identify data required to support #2 and its retrieval frequency
4. Codify what you find in #2 into policy so folks (and you) don't go
hog wild
5. Factor in organizational growth, including system deployment,
bandwidth, etc.
6. Factor in privacy and security needs - crypto is good, but adds
bits
7. Given the above, calculate online, nearline & offline storage
needs (a back-of-envelope sketch follows this list)
8. Double #7
9. Talk to competing storage vendors about meeting #7 AND demand
performance demos. They have been helping others meet sophisticated
needs for some time and can probably help sanity-check your proposed
strategy (maybe without paying them for the consulting!). Get
those demos, onsite if possible. Don't settle for white papers,
someone else's reviews, etc. If they want your $ they'll work for
it.
10. Once the system(s) is in place, schedule regular emergency
procedure drills. Expect there to be fires, power failures, broken
water pipes, physical and online intrusions, etc. You may want to
consider this during the planning phases, as well.
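To put rough numbers on steps 7 and 8, a back-of-envelope calculation
like the following can help; every figure in it is a made-up
assumption, so substitute your own measurements:

# Back-of-envelope sizing for steps 7 and 8 above. All inputs are
# hypothetical placeholders.
daily_raw_gib        = 20      # raw logs + captures generated per day
online_window_days   = 30      # kept on fast disk for routine analysis
nearline_window_days = 180     # kept on slower disk for occasional pulls
offline_years        = 3       # regulatory retention period
annual_growth        = 0.30    # assumed growth per year (step 5)
crypto_overhead      = 1.05    # crypto is good, but adds bits (step 6)

online_gib   = daily_raw_gib * online_window_days * crypto_overhead
nearline_gib = daily_raw_gib * nearline_window_days * crypto_overhead
offline_gib  = daily_raw_gib * 365 * offline_years * crypto_overhead

growth = (1 + annual_growth) ** offline_years
for tier, gib in [("online", online_gib), ("nearline", nearline_gib),
                  ("offline", offline_gib)]:
    planned = gib * growth * 2          # step 8: double it
    print("%8s: %.0f GiB planned" % (tier, planned))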
A few general notes:
- Software compression may seem like a good idea, but in production
more aggressive algorithms kill your performance. Go for hardware
compression on your LTO drives for offline stuff.
- For large volumes of integrity-critical data, take a look at some of
the new content-addressed storage (CAS) and hard-disk WORM technologies
(a generic sketch of the CAS idea follows these notes). Expensive, but
worth it. Oh, yeah - don't get too wrapped up in the MD5 vs. SHA1
arguments. Some CAS vendors will claim to be "more secure" since they
use SHA1. Yes, MD5 collisions have been demonstrated academically, but
in reality your data will be long since worthless by the time someone
can exploit that against a real storage system.
- End-to-end security is essential. Don't forget about offline and
offsite storage, including the physical site and transportation. Don't
let a disgruntled minimum-wage courier be the weakest link in your
organization's data confidentiality. I've seen it happen.
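For the curious, the CAS hashing debate above boils down to something
like this generic sketch (not any particular vendor's implementation):
each object is stored under a digest of its contents, so any later
tampering changes the address and is detectable on read.

# Generic content-addressed storage illustration, with an in-memory
# dict standing in for the storage back end.
import hashlib

store = {}

def cas_put(data):
    address = hashlib.sha1(data).hexdigest()   # or hashlib.md5(data)
    store[address] = data
    return address

def cas_get(address):
    data = store[address]
    if hashlib.sha1(data).hexdigest() != address:
        raise ValueError("stored object no longer matches its address")
    return data

addr = cas_put(b"January 2005 firewall log archive")
assert cas_get(addr) == b"January 2005 firewall log archive"
print(addr)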
Cheers!
g