Wednesday, July 9, 2014

Drama Indicates FAIL

This is a post that I'm just tossing out there, so that I can refer to it later.

A couple of my posts have led to contacts via other channels which were all indignant, shouty, or otherwise dramatic. I mostly regard these as a source of entertainment; somewhat like most people probably regard a film based on a comic book.

This wouldn't be true in all cases. If I knew you, or you represented a current or previous client or employer, I would give the matter careful consideration. But random Internet drama, particularly from someone who is not willing to so much as respond on the very post that set them off? Please.

What this means is that I am too easy to contact. The only possible benefit to me is a moment of amusement, which may or may not be adequate compensation for that moment of my time.

My favorite continues to be “My {Head|Brain} Literally Exploded!”

Really? Your {head|brain} literally, not figuratively, exploded? Exactly how, then, were you able to communicate that to me? The implications of communications post-{head|brain}-explosion are large.

The (possible) upside
  • it would constitute proof of life after death
  • it would explain a lot about politics

The downside
  • there did not seem to be a difference in your communication (or cognitive) abilities pre- or post-{head|brain}-explosion

I can't help but think that that last bit is unlikely to support whatever argument you were mounting.

My point is that I regard drama as symptomatic of FAIL. Even when it doesn't involve some random out-of-band ping, the instances I see seem to be sortable into:

  1. silly Internet Drama from people who really should know better
  2. exuberant marketing
  3. something really has gone very wrong

1 and 2 are often interchangeable, and only 3 is important.

I am of course prepared to change my position, given credible evidence that I am wrong. Much drama, associated with everything being demonstrably All Better Now, would do nicely.







Security FUBARs with Long Histories

Back in the hazy mists of time, “Smart Documents” were all the rage in the IT trade press. You could embed a button (or whatever) in a document, and It Would Do Stuff. At that point, data became mingled with executable code on a widespread basis, and the world changed. The consequences included everything from exploits against Adobe products (including an attack based on a wire-frame graphics capability which, I understand, was quite popular with architects) to the long parade of Microsoft Visual Basic for Applications (VBA) vulnerabilities.

Neither of these has ever really gone away. Vulnerabilities in Adobe products continue to be amongst the most widely exploited in the history of computing. Shipping Shockwave players with very old but still-unpatched vulnerabilities (May 2014), suffering a data breach affecting 38 million users (October 2013), etc., supply all the evidence needed to support either of two conclusions: that they simply do not care, or that they are wonderfully incompetent.

The security train-wreck that Adobe has always been is truly fascinating, and I can't help but think that there is a Master's thesis or two in there.

Microsoft, on the other hand, has cleaned up their act enormously. They more or less had to; it was once highly unlikely that you could fully patch a freshly deployed Microsoft server before it was compromised. This was one cause of the rise of Linux, and it made everyone from the IT department, to C-suite executives, to shareholders greatly unhappy; their largest customers were howling. There is probably a thesis or two in there as well, and I suspect that any such thesis would point up the historic importance of howling.

But leaving the proprietary, closed-source world, it is important to recognize another widespread problem: open-source (or other Unix-y OS) fanboi-isms. Inherently secure, yada yada yada, whatever. We have all heard it. For the record, I do consider Linux (and the BSD Unix variants) to be stronger operating systems, though for reasons that would probably surprise many people who don't know me rather well.

CVE-2014-0247 is something that some might consider unimportant compared to, say, the train-wreck that is OpenSSL. It is not.


Vulnerability Summary for CVE-2014-0247
Original release date: 07/03/2014
Last revised: 07/07/2014
Source: US-CERT/NIST
Overview

LibreOffice 4.2.4 executes unspecified VBA macros automatically, which has unspecified impact and attack vectors, possibly related to doc/docmacromode.cxx.
Impact
CVSS Severity (version 2.0):
CVSS v2 Base Score: 10.0 (HIGH) (AV:N/AC:L/Au:N/C:C/I:C/A:C)
Impact Subscore: 10.0
Exploitability Subscore: 10.0
CVSS Version 2 Metrics:
Access Vector: Network exploitable
Access Complexity: Low
Authentication: Not required to exploit
Impact Type: Allows unauthorized disclosure of information; Allows unauthorized modification; Allows disruption of service

So, it's quite serious, in and of itself.

CVE-2014-0247 exists because the issues with VBA macros have been transferred directly into an open-source environment, including the old security flaw of automatically executing VBA macros. It happened because that age-old “Smart Documents” press was not hype. The capability represented a competitive advantage for adopters over non-adopters, and business advantage wins. In a business context, I am not aware of any evidence of a business advantage ever being refused due to security concerns. One could certainly argue that this is simply because, had the advantage been refused, the product or technique would never have become widespread. However, my personal experience, and that of others I trust, while admittedly anecdotal, is that it happens rarely, at best.
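In the meantime, the practical mitigation is to keep macro security dialed all the way up. Here is a minimal sketch of checking a user's setting, assuming a stock Linux install of LibreOffice 4.x; the profile path and property name are from my own machine, so verify them against your version:

# Check the per-user macro security level (0=Low .. 3=Very High).
# The path and property name below are assumptions from my own install.
XCU="$HOME/.config/libreoffice/4/user/registrymodifications.xcu"
grep -o 'MacroSecurityLevel[^<]*<value>[0-3]' "$XCU" 2>/dev/null \
  || echo "MacroSecurityLevel not set explicitly; the shipped default applies"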

Furthermore, weighting return on investment above all else (and basing that calculation on flawed data) seems to apply even to the vaunted NSA, despite their largely hidden, but undoubtedly enormous, budget. The Snowden leaks would not have been possible if the NSA had deployed technologies that they directly funded. Consider the case of root powers being controllable by SELinux (which began as an NSA project), whereas much of the leakage was seemingly made possible by attacks against flawed Microsoft SharePoint configurations.
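Checking whether root is actually confined on a given box takes about a minute. A minimal sketch, assuming a Linux host with the standard SELinux userland tools installed:

# On an SELinux host, the security context (not the UID) bounds what root can do.
getenforce   # Enforcing, Permissive, or Disabled
id -Z        # e.g. staff_u:staff_r:staff_t:s0 -- the domain limits even UID 0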

It seems clear to me that the majority of people working in anything that touches security would be well advised to be thinking about metrics, and to be assigning better numbers during risk analysis than is commonly done today. That is assuming risk analysis is done at all; it is clearly not a sufficiently widespread practice. Without better numbers, ROI calculations will remain hopelessly in error.
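To make "hopelessly in error" concrete: even the textbook annualized loss expectancy model (ALE = SLE x ARO) passes any error in its inputs straight through to the ROI figure. A toy sketch, with numbers that are purely illustrative rather than drawn from any real analysis:

#!/bin/sh
# ALE = SLE * ARO; a 5x error in the occurrence rate becomes a 5x error
# in every ROI calculation built on top of it. All numbers are made up.
SLE=50000          # single loss expectancy, dollars per incident
ARO_GUESSED=0.1    # "feels about right" annual rate of occurrence
ARO_MEASURED=0.5   # rate actually supported by incident records
echo "ALE, guessed ARO:  $(echo "$SLE * $ARO_GUESSED" | bc)"
echo "ALE, measured ARO: $(echo "$SLE * $ARO_MEASURED" | bc)"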

Furthermore, it seems likely that the best prospects for improvement in the current abysmal state of affairs would occur if this were done by the consumers of IT. Only then can sufficient howling at the suppliers (whether open- or closed-source), backed by real data, occur.


Wednesday, July 2, 2014

Better Security Data From a Lower Noise Floor

Back in the hazy mists of time, when dinosaurs ruled the earth, I used to warn clients that major holidays, news events, etc., would increase attacks. It was actually in my calendar to do it at the start of the winter holiday season. That is no longer necessary; organizations possessed of competent staff expect this.

I rather expect those same organizations have evolved their strategies well beyond what I advised in those ancient days. But let us dip into history for a bit. Once upon a time, the American 4th of July holiday was a predictable yearly low point in Web traffic. I do not know if this is still the case, because I no longer track these data; only the least sophisticated organizations do not already have them in hand.

Does that mean that I have changed my opinion from almost a year ago, when I wrote that “We still fail at log analysis”?

Not really, at least in a security context. My experience continues to be that data are far more likely to be retained and (far more importantly) analyzed on short time scales if they are related to sales, the efficiency of marketing campaigns, correlations to external events that may indicate sentiment shifts, and related matters.

It continues to be all about budgets and perceptions, and the need to mount a business case in support of arguments for security expenditures. This is in no way surprising.

A Bit of Speculation


Assume, for the moment, the following points:
- A low-traffic period may be about to occur for many popular Web sites
- Not all adversaries are sophisticated enough to proportionally scale back efforts related to network characterization and related techniques, even if target (that is, your) traffic data are available to them

It follows that hostile acts would then provide a clearer signal against the noise floor of legitimate traffic. The irony bit is set in this case: your adversary is not a Magical Being (Black Swans, APT, and other security hype aside), and is roughly as likely to fail at log analysis as you are. I would speculate that the more adversaries grow in sophistication and come to resemble a traditional IT-like organization, the more likely they are to fail at log analysis themselves.
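If you want to establish your own noise floor first, a crude sketch follows; the log path and combined-format layout are assumptions, so adjust both for your own servers:

# Requests per hour from a combined-format access log; the low buckets
# are your quiet windows, where hostile traffic stands out best.
awk -F'[:[]' '{print $2 ":" $3}' /var/log/apache2/access.log \
  | sort | uniq -c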

Potential Exploits


'Exploit' has some obvious negative connotations amongst members of the security community, mainly regarding 0-day vulnerabilities, buffer overflows, etc. Sometimes it seems to me that this term is all but forbidden (much like the term 'hacker') amongst security workers.

Personally, I like it. It implies an aggressive, forward-leaning security posture. I do not favor passive defense, because the record of that approach seems both clear and unfavorable. That, however, is a matter for another post.

1. You are most likely to have extensive data on traffic highs and lows.
2. All else being equal, favor short-term enhanced logging when traffic is low. It is likely to yield valuable information related to common threats, at minimal infrastructure load (see the sketch after this list).
3. Never assume that the data gathered are the entire story. Always allow for the possibility that your adversary is more clever than you, or that you have otherwise underestimated your adversary. E.g., your adversary may have scaled back network traffic to match your expectations.
4. Characterizing the effort required at the 'most-easy' point is a valuable data point when building business cases.
5. When everything goes pear-shaped, be doubly sure that you characterize the response effort. This is a tremendously hard problem, but the value is proportional to the effort. If you get it right, any future risk analyst (who might happen to be sane) will thank $DEITY for your efforts.
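The sketch promised in item 2, assuming a Linux box with iptables and root; the match, log prefix, and four-hour window are all placeholders to tune:

# A time-boxed burst of verbose connection logging during a quiet window.
iptables -I INPUT -p tcp --syn -j LOG --log-prefix "quiet-window: "
sleep 14400    # four hours of enhanced logging
iptables -D INPUT -p tcp --syn -j LOG --log-prefix "quiet-window: "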









Monday, June 23, 2014

Mapping Worker Experiences to Security Training and Policy

Earlier today, I visited my local semi-rural convenience store. There were three Sheriff's vehicles in the parking lot, which isn't particularly unusual. There's a shooting range nearby, and I usually just assume that they are using it to stay current in firearms training. Pistols are difficult to shoot well without a frequency of practice that is far beyond any level of effort I could sustain.

Walking to the door, I heard a “f*****g cops” remark. Perhaps that was meant as an insult—I probably looked like law enforcement, what with the way I was dressed, having really short hair, etc. Whatever. But it was an unusual thing to hear out here. I glanced at the person who made the comment, formed my own conclusions, and went inside.

I noted a few guys in there wearing various mismatched bits of paramilitary gear (camouflage, black “Sheriff” t-shirts, various boot styles, a boony hat, firearms in evidence, etc.). I didn't find any of that disturbing, because I expected it.

I did my business, and left. I noted on exit that the person who made the comment (and the vehicle) was gone, and that I had not seen that person inside the store. Off to the house, and nothing weird to report.

Interlude


Every statement above is completely factual, and forensically useful. Not least because it provides a time-series of events, which is always important. Your mileage will vary with the effectiveness with which you teach the value of accurate reporting.

Future Posts


I regard, in general terms, local and regional government as
  1. occasionally annoying (why is doing this useful thing so difficult?)
  2. occasionally useful (providing some service I use occasionally, periodically, or seasonally)
  3. enabling (maintaining existing infrastructure, and building new infrastructure)
  4. emergency (first-responder or disaster-response services)
Given the circumstances, a first-responder post pretty much wrote itself. But that maps to incident-response, and avoiding that in the first place is far more important. Also more elegant, more user-friendly, and far more powerful.





Wednesday, June 18, 2014

A Fractional Day in the Life


This morning (0630 or so), I was out letting the hose run onto the base of a birch tree that has an enormous climbing rose clambering through it. I'm on a well, so water is essentially free, and I often do things like this while I'm thinking about the problem du jour.

This was not obvious to the neighbor who walked by, and politely asked what I was up to. A hose doesn't make much noise, after all. From that neighbor's perspective, I was just leaning against the side of Old Scabrous, my 1990 Jeep. The conversation ran like this.

"Whatcha doing?"

"Working."

"Propping up Old Scabrous?"

"Thinking about entropy."

"WTF is that?"

<brief semi-coherent explanation, while trying to save state, and not lose the better part of an hour of work>

As a reward, I got that exasperated "I live next to a twisted alien mutant from the Forbidden Zone" look. It's all cool; my neighbor knows I am harmless, even if apparently somewhat deranged, and is a very nice person.

So, there you go. A Day in the Life. Well, a snapshot of 6:30 AM life, anyway. And yes, I had been thinking about this since roughly 5:30. I keep weird hours.

Monday, June 16, 2014

Why HR Cannot Hire Good Security People

There are a lot of posts that are more or less begging to be written; much of the behind-the-scenes work, such as consolidated notes, bookmarks, text fragments, etc., is done, and the posts would be at least somewhat timely. In some cases, that might involve a flight of posts. Languages worth learning, from a security-worker perspective, comes to mind. Embedded Lua interpreters, for instance, turn up frequently. Another Java post is definitely needed.

The list of potential language posts goes on for quite a bit, especially when you consider how broad a term 'security worker' is. It is entirely possible to devote an entire career to statistics, yet fall within plausible definitions of a 'security worker': consider risk analysis, breaking data anonymization, etc. R, various Python-based tools, and the like (all related to technical computing) would then become quite important.

I have attempted to get a grip on what a 'security worker' might be, and hence what the qualifications might be, for several years. On at least one occasion, it was in response to an HR request for specific instructions regarding hiring a counterpart in a foreign country. This is a hard problem; to take a random example, the Law of Large Numbers is important in surprisingly many security fields, but it is obviously nowhere near being a useful universal selector.
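As a throwaway illustration of why the Law of Large Numbers keeps turning up (my own toy example, not a hiring exercise): the mean of bytes read from /dev/urandom only settles toward 127.5 as the sample grows, which is precisely what makes small security data sets so treacherous.

# Mean of n random bytes; watch it converge toward ~127.5 as n grows.
for n in 10 100 1000 10000; do
  mean=$(head -c "$n" /dev/urandom | od -An -tu1 | tr -s ' ' '\n' \
         | awk 'NF {s+=$1; c++} END {printf "%.2f", s/c}')
  echo "n=$n mean=$mean"
done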

What else goes wrong in HR, from an applicant's perspective? My top three contenders, on an ongoing basis, are

  • Requiring five years experience with something that has only existed for two years
  • Requiring experience with something which is completely irrelevant
  • Being driven by marketing fashion, not fundamentals

HR doesn't operate in a vacuum. Someone (likely an over-worked developer, sysadmin, or entry-level supervisor of either) provided those bogus requirements. The knock-on effects are that

  • The best candidates will likely never make it to an interview
  • If the person who defined bogus requirements is part of the interview team, defensiveness is likely to fail the best remaining candidates

The best candidates have now been weeded out. HR often takes the heat, through no fault of their own, while much Internet drama is conducted in the various technical cognoscenti fora. The evil HR director Catbert, made famous in the Dilbert comics, exists. I have run into a few, over the years. However, Catbert is the exception, not the rule.

That seemingly throw-away point above? “Being driven by marketing fashion, not fundamentals”? That is a whole topic in itself. It may be the greatest challenge facing the security industry today.










Thursday, June 12, 2014

Linux Shells Just Keep Bugging Me

A couple of months ago, Dan Farmer (who is definitely someone that technically-oriented security people should pay attention to) put up a one-liner to generate 128 bits of pseudo-randomness, via a shell.

Unfortunately, single quotes are showing up as back-ticks in that post, so cut-and-paste 'programmers' had a bit of debugging to do. Hopefully Blogger will do a bit better (update: yes, it does):

dd if=/dev/urandom bs=16 count=1 2>/dev/null | hexdump | awk '{$1=""; printf("%s", $0)}' | sed 's/ //g'

My first reaction was horror; sed and awk are both Turing-complete programming languages, and a bit on the large side. However, large systems written in shell will likely use both, in which case they will likely already be resident in memory. Unless your system is swapping madly, in which case you have other problems. So even running that code in a tight loop may not have as much effect as one might think.

There are things we might do to swap in lighter-weight utilities, and get more readable code as well:

dd if=/dev/urandom bs=16 count=1 2>/dev/null | hexdump | head -n 1 | cut -d ' ' -f 2- | tr -d ' '

In shells, each element of a pipeline executes in its own process, and that is an expense. My version costs us two more processes. So, is it a win? I have no clue, and getting a clue is not an easy thing to do in shells. Variants of this code fail to give valid results using either the bash builtin 'time' or /usr/bin/time. Putting it into a script, and timing that, does not really help.

It is complicated. Have a look at https://stackoverflow.com/questions/5014823/how-to-profile-a-bash-shell-script. Note that there is no mention of taking an average of many tests, which is fundamental.
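For what it is worth, here is the sort of crude harness I would start from: averaging wall-clock time over many runs. It assumes bash and a GNU date that supports nanoseconds (%N), and it times the pipeline as a whole rather than the individual stages:

# Average-of-many-runs timing for the pipeline above.
runs=1000
start=$(date +%s%N)
for i in $(seq "$runs"); do
  dd if=/dev/urandom bs=16 count=1 2>/dev/null | hexdump | head -n 1 \
    | cut -d ' ' -f 2- | tr -d ' ' >/dev/null
done
end=$(date +%s%N)
echo "mean: $(( (end - start) / runs / 1000 )) microseconds per run"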

Shell code is pretty much the last thing you should use to write anything, if you have security in mind. I've had to do it because managers demanded it, probably because they read an article about POSIX from ten or fifteen years ago. But the security argument has seldom worked in the past, so let's mount that profiling argument. It is entirely valid.

Shells are tremendously useful for things like the initial exploration of log files, and for setting up execution environments*. Which means that knowing the native shell is important. And there are things I still need to look at; I've just installed the dash shell. But so far, my take is that if you are using a famously slow and error-prone tool to write production (vice exploratory) code, and have no good means of profiling that code, You Are Doing It Wrong.

* Firefox, for example, is launched via a shell wrapper that sets up its execution environment:

file $(which firefox)
/usr/bin/firefox: Bourne-Again shell script, ASCII text executable