Thursday, July 31, 2014

A Note About Policy

This is one of those posts that I have to throw out there in order to link back to it later, so people can track how it evolves, and point out mistakes. I am fine with that, BTW. That's how we learn, and we certainly need to do a lot of that. This is version 1.0. It will evolve. I should probably put it on github or something.

What I had been meaning to write about was linking policy. This is a branch point--what is really annoying me most recently is disclosure. More on that shortly.


Some sites just don't get a link, period, for reasons that seem good to me. Some (not all) of the features of those sites include, in no particular order:

  • Excessive politics, propaganda, or marketing. This includes propagation of information that is widely known to be disingenuous, that is composed of marketing-speak, or similar bullshit. I don't have time for that, and I am going to go out on a limb here, and assume that you don't either.
  • Rapid URL rot. Sites that can't create stable links usually have other problems as well.
  • Sites that seem to promote intentionally adversarial discussions. Because there is enough heat and noise. If advertising has to be the chosen business model of the Internet, there really should be a better mechanism for selecting allowed ads. Notice how many sites trash some vendor's product while the page is splattered with ads from that same vendor.


Some vendors have a long history of security fubars. Many vendors (even vendors that are all the rage, these days) talk about Responsible Disclosure. I have problems with that.
  • Who is a vendor? For-profit, non-profit, the admin of some random listserv?
  • Vendors, by whatever definition, tend to take the path of least effort. It's human nature, but it does not serve the end user particularly well.
  • 'Responsible disclosure', as terminology, skews the discussion in the vendor's favor. It gives them an opportunity, which they have historically taken advantage of, to stifle publication of problems. The argument is that publication would put users at risk. The obverse is that users are at risk anyway--they just don't know it.
  • Vendors have long delivered software (and firmware) which does not pass the most rudimentary sanity check vis-a-vis security. Those who report problems, and are still sometimes attacked for it, are justified in questioning how patently irresponsible vendors can claim a lack of responsibility on the part of those who form what is essentially a distributed QA system--QA which the vendor should have done.
  • The costs of that distributed QA system fall on users and researchers, not on the vendor.
  • Vendor is probably a bad term--what about providers of free (as in beer) software or services?

Friday, July 25, 2014

You Can Pre-order Krebs's Spam Nation Now

It won't be out until November. For me, that is just in time for what is probably a good read during the start of one of Oregon's famously dreary winters. The krebsonsecurity blog post has more information, and the Amazon pre-order page has an editorial review with an excerpt from Chapter 1. Back in February, Bruce Schneier wrote a post about Krebs, with a link to a NY Times profile. So it isn't just me.

Brian Krebs is perhaps uniquely qualified to write such a book, as he has long been intimately involved with the field, has a history of painstaking research, and has undoubted talents as a writer.

I also have an immediate selfish interest in hoping that this book is as good (and successful) as I hope it to be. Several of his posts are go-to answers to important questions. The Scrap Value of a Hacked PC, Revisited is a good example. I don't have to write a careful response to questions of this nature, because Krebs has been there and done that--far better than I could. I can just send a link.

Here's the thing. Pre-orders are important in establishing the size of a print run. Writing any sort of book is an immense amount of work, which can be essentially wasted in terms of ROI if the books aren't available. Ebooks may mitigate this to some extent, but from what I've read, sales of physical books still matter a lot.

So, ROI. If his track record is any guide, the Investment has been very large. I'd like his Return to be large as well. You can think of this as a non-immediate selfish goal if you like. Brian Krebs is a very effective Good Guy, and keeping him in the fight is a useful thing to do.

Wednesday, July 9, 2014

Drama Indicates FAIL

This is a post that I am just tossing out there, so that I can refer to it later.

A couple of my posts have led to contacts via other channels which were all indignant, shouty, or otherwise dramatic. I mostly regard these as a source of entertainment; somewhat like most people probably regard a film based on a comic book.

This wouldn't be true in all cases. If I knew you, or you represented a current or previous client or employer, I would give the matter careful consideration. But random Internet drama, particularly from someone who is not willing to so much as respond on the post that set them off? Please.

What this means is that I am too easy to contact. My only possible benefit is a moment of amusement, which may or may not be adequate compensation for that moment of my time.

My favorite continues to be “My {Head|Brain} Literally Exploded!”

Really? Your {head|brain} literally, not figuratively, exploded? Exactly how, then, were you able to communicate that to me? The implications of communications post-{head|brain}-explosion are large.

The (possibly) up side
  • it would constitute proof of life after death
  • it would explain a lot about politics

The down side
  • there did not seem to be a difference in your communication (or cognitive) abilities pre- or post-{head|brain}-explosion

I can't help but think that that last bit is unlikely to support whatever argument you were mounting.

My point is that I regard drama as symptomatic of FAIL. Even when it doesn't involve some random out-of-band ping, the reasons I see it seem to be sortable into:

  1. silly Internet Drama from people who really should know better
  2. exuberant marketing
  3. something really has gone very wrong

1 and 2 are often interchangeable, and only 3 is important.

I am of course prepared to change my position, given credible evidence that I am wrong. Much drama, associated with everything being demonstrably All Better Now, would do nicely.

Security FUBARs with Long Histories

Back in the hazy mists of time, “Smart Documents” were all the rage in the IT trade press. You could embed a button (or whatever) in a document, and It Would Do Stuff. At that point, data became mingled with executable code on a widespread basis, and the world changed. The consequences included everything from exploits against Adobe products (including an attack based on a capability to interact with wire-frame graphics, which I understand was quite popular with architects) to the huge flight of Microsoft Visual Basic for Applications (VBA) vulnerabilities.

Neither of these has ever really gone away. Vulnerabilities in Adobe products continue to be amongst the most widely exploited in the history of computing. Shipping Shockwave players with very old, but still unpatched, vulnerabilities (May, 2014), a data breach affecting 38 million users (October, 2013), etc., supply all the evidence needed to support either of two conclusions: that they simply do not care, or that they are wonderfully incompetent.

The security train-wreck that Adobe has always been is truly fascinating, and I can't help but think that there is a Master's thesis or two in there.

Microsoft, on the other hand, has cleaned up their act enormously. They more or less had to; it was once highly unlikely that you could deploy a fully patched Microsoft server before it was compromised. This was one cause of the rise of Linux, and made everyone from the IT department, to C-suite executives, to shareholders greatly unhappy; their largest customers were howling. There is probably a thesis or two in there as well, and I suspect that any such thesis would point up the historic importance of howling.

But leaving the proprietary, closed-source world, it is important to recognize another widespread problem: open source (or other Unix-y OS) fanbois-isms. Inherently secure, yada yada yada whatever. We have all heard it. For the record, I do consider Linux (and the BSD Unix variants) to be stronger operating systems, though for reasons that would probably surprise many people that don't know me rather well.

CVE-2014-0247 is something that some might consider unimportant, compared to, say, the train-wreck that is OpenSSL. It is not.

Vulnerability Summary for CVE-2014-0247
Original release date: 07/03/2014
Last revised: 07/07/2014

LibreOffice 4.2.4 executes unspecified VBA macros automatically, which has unspecified impact and attack vectors, possibly related to doc/docmacromode.cxx.
CVSS Severity (version 2.0):
CVSS v2 Base Score: 10.0 (HIGH) (AV:N/AC:L/Au:N/C:C/I:C/A:C)
Impact Subscore: 10.0
Exploitability Subscore: 10.0
CVSS Version 2 Metrics:
Access Vector: Network exploitable
Access Complexity: Low
Authentication: Not required to exploit
Impact Type: Allows unauthorized disclosure of information; Allows unauthorized modification; Allows disruption of service

So, it's quite serious, in and of itself.

CVE-2014-0247 exists because the issues with VBA macros have been directly transferred into an Open Source environment, including the old security flaw of automatically executing VBA macros. It happened because that age-old “Smart Documents” trade press coverage was not hype. It represented a competitive advantage for adopters over non-adopters, and business advantage wins. In a business context, I am not aware of any evidence of a business advantage ever being refused due to security concerns. One could certainly argue that this is simply because had the advantage been refused, the product or technique would never have become widespread. However, my personal experience, and that of others that I trust, while admittedly anecdotal, is that such refusals happen rarely, at best.
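As an illustration of how visible embedded macro code actually is, here is a minimal sketch of my own (not part of any LibreOffice or Microsoft toolchain) that checks whether a modern OOXML document carries a VBA project. OOXML files are zip containers, and a VBA project is stored as a vbaProject.bin stream inside that container; legacy binary .doc files are OLE compound files and would need different handling.

```python
import zipfile


def has_vba_macros(path: str) -> bool:
    """Return True if an OOXML document (.docx, .docm, .xlsm, .odt saved
    with macros, etc.) contains an embedded VBA project stream.

    Legacy binary formats (.doc, .xls) are OLE compound files, not zip
    containers, and are outside the scope of this sketch.
    """
    try:
        with zipfile.ZipFile(path) as container:
            # A VBA project is stored as a vbaProject.bin member,
            # e.g. word/vbaProject.bin in a .docm file.
            return any(name.endswith("vbaProject.bin")
                       for name in container.namelist())
    except zipfile.BadZipFile:
        raise ValueError(f"{path} is not an OOXML (zip) container")
```

The point is not that detection is hard--it plainly is not--but that automatic execution of whatever is found in that stream was carried forward as a feature.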

Furthermore, weighting return on investment above all else (and basing that calculation on flawed data) seems to apply to even the vaunted NSA, despite their largely hidden, but undoubtedly enormous, budget. The Snowden leaks would not have been possible if NSA had deployed technologies that they directly funded. Consider the case of root powers being controllable by SELinux, whereas much of the leakage was seemingly made possible by attacks against flawed Microsoft SharePoint configurations.

It seems clear to me that the majority of people working in anything that touches security would be well advised to be thinking about metrics, and assigning better numbers during risk analysis than is commonly done today. Assuming risk analysis is done at all; it is clearly not a sufficiently widespread practice. Without that, ROI calculations will remain hopelessly in error.

Furthermore, it seems likely that the best prospects for improvement in the current abysmal state of affairs would occur if this were done by the consumers of IT. Only then can sufficient howling at the suppliers (whether open- or closed-source), backed by real data, occur.

Wednesday, July 2, 2014

Better Security Data From a Lower Noise Floor

Back in the hazy mists of time, when dinosaurs ruled the earth, I used to warn clients that major holidays, news events, etc., would increase attacks. It was actually in my calendar to do it at the start of the winter holiday season. That is no longer necessary; organizations possessed of competent staff expect this.

I rather expect those same organizations have evolved their strategies well beyond what I advised in those ancient days. But let us dip into history for a bit. Once upon a time, the American 4th of July holiday was a predictable yearly low point in Web traffic. I do not know if this is still the case, because I no longer track these data; only the least sophisticated organizations do not have these data already in hand.

Does that mean that I have changed my opinion from almost a year ago, when I wrote that We still fail at log analysis?

Not really, at least in a security context. My experience continues to be that data is far more likely to be retained and (far more importantly) analyzed on short time scales, if it is related to sales, the efficiency of marketing campaigns, correlations to external events that may indicate sentiment shifts, and related matters.

It continues to be all about budgets and perceptions, and the need to mount a business case in support of arguments for security expenditures. This is in no way surprising.

A Bit of Speculation

Assume, for the moment, the following points
- A low-traffic period may be about to occur for many popular Web sites
- Not all adversaries are sophisticated enough to proportionally scale back efforts related to network characterization, and related techniques, even if target (i.e., your) traffic data are available to them.

It follows that hostile acts would then provide a clearer signal against the noise floor of legitimate traffic. The irony bit is set, in this case: your adversary is not a Magical Being (Black Swans, APT, and other security hype aside), and is roughly as likely to fail at log analysis as you are. I would speculate that the more adversarial sophistication grows, and the more adversaries resemble a traditional IT-like organization, the more likely they are to themselves fail at log analysis.
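A toy illustration of that speculation, with invented traffic numbers: the same fixed-size burst of hostile requests stands out far more sharply against a holiday lull than against normal traffic, measured as standard deviations above the historical baseline.

```python
import statistics


def zscore(observed: float, baseline: list[float]) -> float:
    """How many standard deviations `observed` sits above the
    historical baseline of request counts."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return (observed - mu) / sigma


# Invented hourly request counts, for illustration only.
busy_week = [9800, 10100, 9900, 10250, 9950]   # normal traffic
holiday   = [1900, 2100, 2000, 2050, 1950]     # holiday lull
attack    = 500                                # extra hostile requests

# The same 500-request burst, measured against each baseline:
z_busy    = zscore(busy_week[-1] + attack, busy_week)
z_holiday = zscore(holiday[-1] + attack, holiday)
# z_holiday comes out far larger than z_busy: the burst is a much
# stronger signal when the legitimate noise floor is low.
```

The numbers are made up, but the relationship is not: a burst that barely clears the alerting threshold during a busy week is unmistakable during a lull, which is exactly when enhanced logging is also cheapest.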

Potential Exploits

'Exploit' has some obvious negative connotations amongst members of the security community, mainly regarding 0-day vulnerabilities, buffer overflows, etc. Sometimes it seems to me that this term is forbidden (much like the term 'hacker') amongst security workers.

Personally, I like it. It implies an aggressive, forward-leaning security posture. I do not favor passive defense, because the record of that approach seems both clear and unfavorable. That, however, is a matter for another post.

1. You are most likely to have extensive data on traffic highs and lows.
2. All else being equal, favor short-term enhanced logging when traffic is low. It's likely to yield valuable information related to common threats, at minimal infrastructure load.
3. Never assume that the data gathered are the entire story. Always allow for the possibility that your adversary is more clever than you, or that you have otherwise underestimated your adversary. E.g., your adversary may have scaled back network traffic to match your expectations.
4. Characterizing the effort required at the 'most-easy' point is a valuable data point when building business cases.
5. When everything goes pear-shaped, be doubly sure that you characterize the response effort. This is a tremendously hard problem, but the value is proportional to the effort. If you get it right, any future risk analyst (who might happen to be sane) will thank $DEITY for your efforts.
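The advice above about short-term enhanced logging during lulls can be as simple as a rate-based switch. A sketch, with invented threshold values; the function name and the 50%-of-baseline cutoff are mine, not from any particular logging framework:

```python
def logging_verbosity(current_rate: float, baseline_rate: float,
                      lull_fraction: float = 0.5) -> str:
    """Pick a logging verbosity from current vs. baseline request rates.

    When traffic drops well below baseline, extra logging detail is
    cheap (low infrastructure load) and most valuable (best
    signal-to-noise against the quiet background).
    """
    if current_rate < lull_fraction * baseline_rate:
        return "verbose"
    return "normal"
```

In practice you would want hysteresis around the threshold so the level does not flap, but the core decision really is this small.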