Friday, April 18, 2014

Well, I Thought I Had a Spare Drive

But it turns out that my spares collection is FUBAR. Here's the thing: I was planning on rebuilding a lab system this weekend. I have always enjoyed messing with hardware of one sort or another, I've built many systems over the years, and I have a bit of an ingrained work flow optimized for minimal SOHO office disruption: move that pile of books out of the corner and re-shelve, fill in the space with the parts collection, and so on.

The parts collection was where I ran into a bit of a snag.

I get a bit uncomfortable when I am down to a single spare general-purpose SATA drive. For RAID, things are a bit more complex, as I like matching firmware versions, and I also like to keep a pristine portable drive around. The new wrinkle is that I can't use a basic 1 TB Caviar Black that has been sitting for several years, still in its original packaging. The only thing I've ever done with this drive was to print the output of uuidgen(1) along with the purchase details, and stick the sheet into the bag. Due to the nature of my work, I need to unambiguously track which hard drives are used by which systems. That is a topic for another post.
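The label itself is nothing fancy. A minimal sketch of what I mean follows; the file name and the print command are arbitrary choices of mine, not part of any tracking system:

uuidgen > drive-label.txt
# Add the purchase details (vendor, date, invoice number) to the file,
# then print it and seal the sheet in the bag with the drive.
lp drive-label.txt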

Now, this drive has to be sequestered, because I seem to have built up a collection of performance notes which involved I/O to an identical drive. I may want to revisit those numbers, well after the normal life of a drive, and it is worth it, to me, to remove a variable from any comparison I might care to make. An opportunity to reduce the complexity of a future data analysis, for the cost of a consumer hard drive, is a no-brainer.

So I'm down to one consumer hard drive. If I use it, I will have no spare until a replacement arrives. That isn't something I would do, except in an emergency, which this isn't. I could just drive into town, buy one, and get on with the job. But it is probably wiser to buy three online; the odds of getting identical firmware are good, so I get more down-the-road flexibility. I could make a reasonable case for buying six, and may do that.

That's a bit of a bummer, as I did want to get that hardware upgrade done this weekend. If, for whatever reason, it can't happen next weekend, then it is no longer a casual, do-it-ahead-of-the-need thing. It rises to the level of Important. Murphy is alive and well, and all it will take is a call from a client.

This sort of thing is best avoided, and that it happened in this case is my own FUBAR. I failed to allow for all the factors that might influence the effective size of the spares pool. That informal work flow bit me, and better systems administrators than I can now commence laughing. Lesson learned, and I hope that others do not make the same mistake.











Thursday, April 17, 2014

Heartbleed: The OpenSSL Libraries are What Matter

Much of what has been published in the security trade press assumes that recommending public vulnerability-checking tools, intended to be pointed at Web servers, is the way to go. It seems to me that that take involves unwarranted assumptions. First off, many of the tools are buggy, or they are limited to publicly-accessible servers.

I get that last bit: people want to know what their exposure is, and that represents progress.

However, OpenSSL is baked into many things. It is not all about Web sites, public-facing or otherwise. I don't like to put any sort of code on this blog, but in this case I am not going to make my point if I don't do a bit of command-line work. Let us take Fedora 19 as an example. It's a reasonable proxy for the forthcoming RHEL-7, and is of course RPM-based.

Let's do a test erase on openssl:
rpm -e --test openssl 
error: Failed dependencies:
        /usr/bin/openssl is needed by (installed) authconfig-6.2.6-3.fc19.1.x86_64

We need to get rid of that error banner if we are to feed this into 'wc -l' (word count, line mode), so we merge stderr into stdout and use 'grep -v' to drop that line:
rpm -e --test openssl 2>&1 | grep -v 'error' | wc -l
1

Cool; only that one line pertaining to authconfig was counted. 

And here is the problem


Some people are replacing OpenSSL without also replacing the library package. Hopefully, you have been able to follow along at a command prompt of your own. Before I get to the bad news, I need to tell you that this was performed on a Fedora 19 workstation. A server, built intelligently, will have far fewer dependencies (and I am a long-term advocate of minimal systems), but this will still illustrate my point: the OpenSSL libraries are the priority item.

And now the bad news


Substituting only 'openssl-libs' for 'openssl', and repeating our line count, we find the badness.
rpm -e --test openssl-libs 2>&1 | grep -v 'error' | wc -l
256

I could yak about how to categorize the problems. That's an important topic, but it is beside my immediate point, which is that openssl-libs is the more important of the two packages, and has received almost no mention in the security trade press.
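If you want to see which packages those 256 lines actually point at, a small extension of the same pipeline gives a de-duplicated list. This assumes the dependency lines keep the "is needed by (installed) <package>" form shown above, so the package name is the last field:

rpm -e --test openssl-libs 2>&1 | grep -v 'error' | awk '{print $NF}' | sort -u

From there, sorting the results into restart, rebuild, and ignore buckets is the categorization exercise I am skipping here.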

The next question is


How do I tell if I have upgraded everything?

There is no simple answer to that. Back-porting is an issue. The author of the controversial Hut3 Cardiac Arrest tool states, at http://cryptogasm.com/2014/04/hut3-cardiac-arrest-disclaimer/, that
"As always, the correct way to test for the vulnerability is to check the version of OpenSSL installed on the server in question. OpenSSL 1.0.1 through 1.0.1f are vulnerable."

That is not quite correct, in package-manager terms. Let's return to Fedora 19, and look through the package update logs, for our most important package:

grep openssl-libs /var/log/yum.log
Jan 11 08:41:00 Updated: 1:openssl-libs-1.0.1e-37.fc19.x86_64
Apr 09 13:33:53 Updated: 1:openssl-libs-1.0.1e-37.fc19.1.x86_64

Notice that the April 9 fix is still reported as version e. If we query the software directly, we see the following.

openssl version
OpenSSL 1.0.1e-fips 11 Feb 2013

Still reporting version e, though the fix is in place. RHEL-6 will behave in a similar fashion. Analysis: a version-string check like the one quoted above will produce yet another bogus report from some random security provider. That's nothing new.
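On an RPM-based distribution, the package changelog is a far better indicator than the version string. Assuming the packager referenced the CVE (Fedora and Red Hat normally do), something along these lines will tell you whether the back-ported fix is present:

rpm -q --changelog openssl-libs | grep -i 'CVE-2014-0160'

A match means the patched build is installed, whatever the version banner says.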

There are means of inspecting each system behind a corporate firewall. Yes, backports can be handled, and the reason for doing them in the first place makes a certain amount of sense (all engineering decisions are trade-offs). It's just a bit much to go into in a single post. Those with short attention spans would complain that they didn't get a tl;dr version, expressed in 140 characters on Twitter.

But I'll leave you with a final question or two


Did you restart whatever running services use openssl-libs? Has everything been re-linked?
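One rough way to answer that on a Linux system is to look for processes that still have a deleted copy of the OpenSSL libraries mapped; old mappings linger until the process restarts. This is only a sketch, done from memory, and library names and paths vary by distribution:

# List PIDs (and process names) still mapping deleted copies of libssl/libcrypto
for map in /proc/[0-9]*/maps; do
    if grep -qE 'lib(ssl|crypto).*\(deleted\)' "$map" 2>/dev/null; then
        pid=${map#/proc/}; pid=${pid%/maps}
        echo "$pid $(cat /proc/$pid/comm 2>/dev/null)"
    fi
done

An empty result is what you want to see.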


Sunday, April 13, 2014

Tweet? No.

I have very little to say that can be said in 140 characters.

I'm not saying Twitter is useless. If I were trying to meet some arbitrary group of people at a security convention, trade show, or whatever, it would be useful to send one message and reach everyone.

But mobile (and Twitter) has given rise to things such as URL-shortening services, of which there are now many; enough that the tools some of them provide for resolving the final destination go unused. There are simply too many 'services'.

Think about this. Security practitioners have preached, for time out of mind, that you should not follow an unknown link. That has never really worked; people still fall victim to social-engineering attacks on a regular basis. This has worked about as well as strong-password advocacy. Which is to say, not at all.

Now we have people operating in the security profession who are using URL shorteners and obfuscating link destinations. The arrogance is astonishing: the assumption is that you can hand people any weird link and they will click it. The very thing we told users 'Do Not Do', we are now using as a marketing tool.

That is such a very, very broken idea. We are promoting the behaviors that history has proven will end in tears. Does any practitioner think that is an ethical thing to do?










Friday, April 11, 2014

Heartbleed Will Be With Us For a Long Time

We are still in the initial stages of this, and even that has something to teach many people; namely, how difficult it is to do incident response. Security workers are scrambling, but they should at least know something about it. However, that portion of the general public who are at least somewhat aware that the Internet is a dangerous place are now doing incident response. They just don't know or use the term.

Amongst that group, there has been much anguish about public site-checking tools being overloaded, the public-facing bits of Google being reported as vulnerable (no, just a bit of code they ship, and that's been patched), etc. That is compounded by those who know enough to check certificate details in their browser, find a certificate that predates Heartbleed, but don't know that a certificate can be re-keyed without changing the dates. Which is only to be expected. The security community should be happy that there are so many users out there who will at least look at certificates. That is in some respects a breath of fresh air, given how little success we have had with user education; the popularity of 'password123' is revealed with every mass breach of a popular site.

I am more concerned that the professionals may not be getting incident response right, on at least two fronts.

Some major sites (and not only those which face the general public) do not seem to be willing to tell their users that there is a problem, and that a password change will be in order as soon as patching is complete. Given that the rate of password reuse, across sites of disparate sensitivity, has always been horribly high, I regard this as an ethics issue. This would be an ideal time to communicate both rapidly and effectively.

Secondly, there is the matter of embedded systems, and/or appliances. These tend to be the bits that are the last to be patched. If they are ever patched; in some cases they seem invisible to their owners. If you operate something like a VPN-enabled SAN, I would expect a timely fix from the vendor. At which time you may be only beginning the real IT work, of course.

However, surprisingly many breaches occur over a connection that an enterprise did not even know it had. This has been true since a T1 was a cutting-edge connection, and is even more true today, as connections have grown much cheaper. Even known connections may have their own problems, which may have been a factor in the 2013 Target breach.

But, what about those less-expensive connections, which may feed through a quick-and-dirty bit of hardware? Until proven otherwise, you should assume that the security of these devices is miserable. I have private keys for what appear to be 6,565 hardware/firmware combinations in which SSL or SSH keys were extracted from the firmware. In that data, 'VPN' appears 534 times. [1] There is also an ongoing series of revelations of hardwired admin account/password combinations. While much of this gear is consumer-grade (these people just cannot catch a break), the scale of this problem is large. Vendors include:

  • Cisco
  • DD-WRT
  • D-Link
  • Linksys
  • Netgear
  • OpenRG
  • OpenWRT
  • Polycom
Now take even a minor step up the cost ladder. Assume (and this is demonstrably not justified) that neither keying material nor passwords are burned in. We know that Heartbleed allows access to random 64K blocks of memory. We know that private keys can be exposed, but it does not seem to be as widely known (though it should be obvious) that this is more likely soon after reboot.

You might want to look at your traffic flows again, and look for unexplained crashes or reboots of odd edge devices. There are probably many vulnerabilities out there that can generate a crash, but that were not, or could not be, fully weaponized before Heartbleed. This is a window of opportunity for the Bad Guys to capitalize on a serious information leak from devices that may have fallen through the cracks of your monitoring system.

[1] The firmware data source


This information has been publicly available since at least 2011, at  https://code.google.com/p/littleblackbox/. You should be prepared to do a static build from source code, and explore an sqlite3 database. Props to /dev/ttys0 for revealing exactly how terrible this situation is. 
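The exploration itself needs nothing more sophisticated than the sqlite3 shell. A rough count along the lines of the 'VPN' figure I quoted above can be had with something like the following; the database file name is what I recall from my own copy, so check your checkout if it differs, and note that grep -c counts matching lines rather than individual occurrences:

sqlite3 lbb.db .tables
sqlite3 lbb.db .dump | grep -ci 'vpn'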

Friday, April 4, 2014

People Still On WinXP Enter the World of Pain

As the final Patch Tuesday nears, there is a critical vulnerability in Word 2003 SP1, which is currently being exploited. Look at the wrong RTF file, and you are pwned. This applies even if you are just using Word as a viewer in Outlook. It was important enough that Microsoft went outside their normal patch cycle. They don't like doing that, and although they couched it in terms of Microsoft Word 2010, stating that "... we are aware of limited, targeted attacks directed at Microsoft Word 2010," I rather suspect the problem is either more widespread than that, or that sensitive targets have been exploited.

Another critical issue is a flaw in IE6. I have no information on this one, and I am too busy to dig around. Possibly it is not yet being exploited in the wild, though it would be madness to count on that.

My, My, How the World Has Not Changed. Plenty of industry stories point out the vast numbers of systems still running Windows XP. From enterprise code running behind corporate firewalls, to small businesses that simply cannot afford to upgrade, to home users who are not even aware that there might be a problem running a 13-year-old OS, there is a lot of WinXP out there. We still do not do security updates particularly well.

Thirteen years ago I was at a Fortune 500 company, writing hardening scripts for HP-UX 11i (and probably preparing for an audit by HP Professional Services), and more scripts for Symark PowerBroker, doing quite a bit of Linux, and advocating that the new Intranet (remember that term?) should not be rolled out as an IE6-only service. In today's world, HP-UX is still somehow hanging on (albeit by a thread), Linux has advanced to the point that even Microsoft has to accept it, and organizations that deployed those old Internet Explorer 6 apps are now facing the downside of that decision.

In the final analysis, there is almost no metric that shows any overall improvement in the security landscape. Quite the reverse, actually. That doesn't mean improvement is impossible. It does mean that some triage is necessary, and how you approach the problem matters as much as ever. Enterprises with great security needs might invest in mechanisms supporting better decision-making around security trade-offs, but they will also be subject to a broad spectrum of employees, including those hapless home users still on WinXP and unaware that there is a problem. Some small business owners may simply re-partition a small network, install a firewall and/or proxy server, and quite successfully get on with things.

So, no. The world has not changed. While there will be an uptick in the threat level, we simply need to make more thoughtful decisions, and do some of the things that we already know how to do, but haven't. To the extent that we get better at doing that, any uptick in the threat level brought about by the WinXP EOL might be considered a Good Thing. It was known well in advance, and could be planned for; this was no Black Swan.

I Am Not Claiming This Will Have No Impact


It will. On a personal note, I was planning to work this weekend (when it will be rainy) with the hope of taking Monday and Tuesday off, when we might get our first 70-degree days of spring. This is Oregon; cold rainy springs are common. I have a certain amount of flexibility, and would be a fool not to attempt to make that trade. I would be a greater fool to expect success. So, I won't be going far, and will have my phone in my pocket.

Update 4/8/2014 

It turns out that there were problems today. Not what I expected, of course. It never is; that is the nature of the job. This had nothing to do with the WinXP EOL, but with Automated Data Analysis Gone Horribly Wrong. ADA is a very tricky thing to get right. Setting limits blindly (which is what you are doing if you get the data analysis bit wrong, and even then you may be solving the wrong problem) can end in tears; your own systems can be your worst nightmare.

In this case it ended well. It wasn't very difficult to prove a false positive, and there was a systems admin who knew pretty much everything about how a complicated system was put together.

You know that old saying, "There's one in every crowd?" It's always a negative thing, but it shouldn't be. Sometimes that one person is worth their weight in gold, particularly when things go all pear-shaped. If you are not making a solid effort to identify and retain that person, You Are Doing It Wrong.