We are still in the initial stages of
this, and even that has something to teach many people; namely, how
difficult it is to do incident response. Security workers are
scrambling, but they should at least know something about it.
However, that portion of the general public who are at least somewhat
aware that the Internet is a dangerous place are now doing incident
response. They just don't know or use the term.
Amongst that group, there has been much
anguish about public site-checking tools being overloaded, the
public-facing bits of Google being reported as vulnerable (no, just a
bit of code they ship, and that's been patched), etc. That is
compounded by those who know enough to check certificate details in
their browser, find a certificate that predates Heartbleed, but don't
know that a certificate can be re-keyed without changing the dates.
Which is only to be expected. The security community should be happy
that there are so many users out there who will at least look at
certificates. That is in some respects a breath of fresh air, given
how little success we have had with user education; witness the
popularity of 'password123', revealed anew with every mass breach
of a popular site.
I am more concerned that the
professionals may not be getting incident response right, on at least
two fronts.
Some major sites (and not only those
which face the general public) do not seem to be willing to tell
their users that there is a problem, and that a password change will
be in order as soon as patching is complete. Given that the rate of
password reuse, across sites of disparate sensitivity, has always
been horribly high, I regard this as an ethics issue. This would be
an ideal time to communicate both rapidly and effectively.
Secondly, there is the matter of
embedded systems, and/or appliances. These tend to be the bits that
are the last to be patched, if they are ever patched at all; in some cases
they seem invisible to their owners. If you operate something like a
VPN-enabled SAN, I would expect a timely fix from the vendor. At
which time you may be only beginning the real IT work, of course.
However, surprisingly many breaches
occur over a connection that an enterprise did not know it even had.
This has been true since a T1 was a cutting-edge connection, and is
even more true today, as connections have grown much cheaper. Even
known connections may have their own problems, which may have been a
factor in the 2013 Target breach.
But, what about those less-expensive
connections, which may feed through a quick-and-dirty bit of
hardware? Until proven otherwise, you should assume that the security
of these devices is miserable. I have private keys for what seem to
be 6565 hardware/firmware combinations, in which SSL
or SSH keys were extracted from the firmware. In that data, 'VPN'
appears 534 times. [1] There is also an ongoing series of revelations
appears 534 times. [1] There is also an ongoing series of revelations
of hardwired admin account/password combinations. While much of this
gear is consumer-grade (these people just cannot catch a break),
scale of this problem is large. Vendors include:
- Cisco
- DD-WRT
- D-Link
- Linksys
- Netgear
- OpenRG
- OpenWRT
- Polycom
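Burned-in keys of the sort in that firmware data are not hard to find. As a rough sketch (not any vendor's tooling, and the PEM markers shown are just the common ones), scanning a firmware image for PEM headers is often enough to locate embedded key material:

```python
# Sketch: locate PEM-encoded key material embedded in a firmware blob.
PEM_MARKERS = (
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN DSA PRIVATE KEY-----",
    b"-----BEGIN PRIVATE KEY-----",
    b"-----BEGIN CERTIFICATE-----",
)

def find_pem_blocks(blob: bytes):
    """Return sorted (offset, marker) pairs for every PEM header found."""
    hits = []
    for marker in PEM_MARKERS:
        start = 0
        while (idx := blob.find(marker, start)) != -1:
            hits.append((idx, marker.decode()))
            start = idx + 1
    return sorted(hits)
```

Feed it a raw dump, e.g. `find_pem_blocks(open("firmware.bin", "rb").read())` (hypothetical path), and anything it reports is worth a closer look.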
Now take even a minor step up the cost
ladder. Assume (and this is demonstrably not justified) that neither
keying material nor passwords are burned in. We know that Heartbleed
allows access to random 64K blocks of memory. We know that private
keys can be exposed, but it does not seem to be as widely known
(though it should be obvious) that this is more likely soon after a
reboot, while the key material has just been loaded into a freshly
initialized heap, near the buffers a heartbeat request can read.
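For context on that 64K figure: per RFC 6520, a heartbeat message carries its own 16-bit payload-length field, and a vulnerable OpenSSL echoed back however many bytes that field claimed. A minimal sketch of the message body (ignoring the enclosing TLS record header):

```python
import struct

# RFC 6520 heartbeat message body: type (1 byte), payload_length (2 bytes),
# then payload and padding. Heartbleed: vulnerable OpenSSL trusted the
# claimed payload_length and echoed that many bytes from process memory,
# even when the actual payload was shorter. The field is 16 bits wide,
# hence the ~64K ceiling per request.
HEARTBEAT_REQUEST = 1

def heartbeat_body(claimed_length: int, payload: bytes = b"") -> bytes:
    return struct.pack(">BH", HEARTBEAT_REQUEST, claimed_length) + payload

# A malicious request: claims 65535 payload bytes while carrying none.
bleed = heartbeat_body(0xFFFF)
```

Nothing stops an attacker from issuing that request over and over, sampling a different 64K window of memory each time.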
You might want to look at your traffic
flows again, and look for unexplained crashes or reboots of odd edge devices.
There are probably many vulnerabilities out there that can generate a
crash but were not, or could not be, fully weaponized before
Heartbleed. This is a window of opportunity for the Bad Guys to
capitalize on a serious information leak from devices that may have
fallen through the cracks of your monitoring system.
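One low-effort way to start that hunt, assuming you already centralize syslog from your edge gear (the message patterns below are illustrative assumptions, not a standard; adjust them to whatever your devices actually emit):

```python
import re
from collections import Counter

# Sketch: count reboot/crash indicators per host in consolidated syslog.
INDICATORS = re.compile(r"(reboot|restart|watchdog|kernel panic|cold start)",
                        re.IGNORECASE)
# Assumes classic syslog lines, e.g.:
#   "Apr 12 03:17:02 vpn-gw1 kernel: watchdog reset"
HOST = re.compile(r"^\S+\s+\d+\s+\S+\s+(\S+)\s")

def suspicious_hosts(lines):
    """Return a Counter of hostname -> number of crash/reboot indicators."""
    counts = Counter()
    for line in lines:
        if INDICATORS.search(line):
            m = HOST.match(line)
            if m:
                counts[m.group(1)] += 1
    return counts
```

A device that reboots nightly while nobody is pushing firmware to it is exactly the kind of oddity worth chasing down.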
[1] The firmware data source
This information has been publicly available
since at least 2011, at https://code.google.com/p/littleblackbox/. You should be prepared to do a static
build from source code, and explore an sqlite3 database. Props to /dev/ttys0 for
revealing exactly how terrible this situation is.
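If you fetch the littleblackbox data, exploring it really is a matter of a few queries. I am not quoting the exact schema here, so treat the table and column names below as assumptions to verify with `.schema` in the sqlite3 shell first; the query shape is what matters:

```python
import sqlite3

# Sketch: count firmware entries whose description mentions a keyword.
# The table/column names ("firmware", "description") are assumptions;
# run ".schema" against the real database and adjust before relying on this.
def count_matches(db_path: str, needle: str) -> int:
    con = sqlite3.connect(db_path)
    try:
        (n,) = con.execute(
            "SELECT COUNT(*) FROM firmware WHERE description LIKE ?",
            (f"%{needle}%",),
        ).fetchone()
        return n
    finally:
        con.close()
```

Something along the lines of `count_matches("lbb.db", "VPN")` is how you would reproduce a figure like the 534 'VPN' hits mentioned above.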
Update 4/17/14
http://fubarnorthwest.blogspot.com/2014/04/heartbleed-openssl-libraries-are-what.html
Update 5/13/15
Google Code is shutting down. littleblackbox has migrated to GitHub. There is a bit of additional commentary.