Tuesday, October 11, 2016

Resource Depletion Attacks: Commonly Performed by Security Industry

I make heavy use of the Linux 'at' and 'batch' facilities, which provide simple but very effective methods of increasing productivity via automation. Essentially, I want machines working to their thermal, IO, etc., limits as much as possible, so that I don't have to work to my thermal, IO, etc., limits. Naturally, I regard unused cores or threads, etc., as Bad Things.

At lunch today, there were three or four jobs in the queue, which is fairly typical. But none finished when expected, which is a bit atypical. The problem turned out to be, as it often is, a random security news service running core-sucking Javascript in an effort to get me to subscribe to their email service.

My bad, in some respects. I know better than to leave browser tabs open, leaving the marketroids an opportunity, but sometimes it just isn't possible to sort through a long list of tabs when you are trying to go to lunch. Sometimes you get burned by running batch jobs on an active workstation instead of shuffling them off to another machine.

On the other hand, having to even think about this is an indictment of the security industry. Of which I am a part. And that is FUBAR.

The definition of a resource depletion attack, according to Mitre's Common Attack Pattern Enumeration and Classification (an information-sharing resource worth using), is as follows.

Attack patterns within this category focus on the depletion of a resource to the point that the target's functionality is affected. Virtually any resource necessary for the target's operation can be targeted in this attack. The result of a successful resource depletion attack is usually the degrading or denial of one or more services offered by the target. Resources required will depend on the nature of the resource being depleted, the amount of resources the target has access to, and other mitigating circumstances such as the target's ability to shift load or acquire additional resources to deal with the depletion. The more protected the resource and the greater the quantity of it that must be consumed, the more skill the adversary will need to successfully execute attacks in this category.
Note that I had an opportunity to shift the load. But I am using 'at' and 'batch' casually, from the command line. A better system would also be a much more formal system. I hadn't planned to review and retest real scheduling systems until toward the end of the year, and then only if I had time, and only because I personally regard them as important and interesting. Perhaps I should bring that forward a bit, but it's a balance of annoyance versus available time: I have no current or on-the-horizon gig where robust scheduling systems play a central role, so there is no immediate business driver. I'd like to flesh that topic out, but it would require another post to cover even the bare essentials.

Please excuse the digression; time to return to the topic of the post. In the limit, almost no marketing in the security industry can be regarded as ethical, given that

  1. Your brain is a resource, and by far the most vital one
  2. Security news sources are commonly a FUD tsunami
  3. 2 is an attack on 1
  4. Current PCI-DSS requirements (5.1.2, 6.1) require news monitoring
It's often helpful to look at the worst case. PCI-DSS originated as an effort to head off governmental regulation through self-regulation. In its earliest iteration, it was as basic as requiring a firewall -- some financial organizations did not have even that, at a time when firewalls alone were a highly effective defense. It evolved into something as relatively sophisticated as saying, "Read the damned news, people." Because they routinely didn't. At all.

As usual, security people are trapped by the lowest common denominator. Here's how that might map to an all too common day-in-the-life of someone on a PCI team.
  1. You must read the news
  2. Security news sources are commonly a FUD tsunami
  3. FUD tsunamis are a DoS attack on your brain
  4. Your brain is your most vital resource
  5. FAIL
The largest offenders are the generic security news sources, and I very specifically include security product vendors. These are by far the most likely to burn your physical and mental cores with a FUD tsunami. Vendors of the software you are actually running? Sometimes they offer nothing at all (as in the early days of nginx, when it was basically just some random Russian supplier of an httpd, yet still enthusiastically embraced for its efficiency), and not nearly enough offer anything that can be consumed by software. So we have to plod through this stuff manually.

Treasure the suppliers of the software you actually run who also provide up to date vulnerability data, particularly if it can be consumed by software. They free your brain, and that's important.
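For illustration, here's a minimal sketch of what "consumable by software" buys you: cross-reference an advisory feed against your inventory, and your brain only sees what matters. The feed format, field names, and package names below are invented for the example; any real vendor feed will differ.

```python
import json

# Hypothetical advisory feed. Real vendor feeds vary wildly in shape;
# these fields are assumptions for the sake of the sketch.
FEED = """
[
  {"id": "VENDOR-2016-001", "package": "examplehttpd", "fixed_in": "1.9.4", "severity": "high"},
  {"id": "VENDOR-2016-002", "package": "otherlib", "fixed_in": "2.0.1", "severity": "low"}
]
"""

# package -> installed version, taken from your systems inventory
INSTALLED = {"examplehttpd": "1.9.2"}

def relevant_advisories(feed_json, installed):
    """Return only the advisories for software we actually run."""
    return [adv for adv in json.loads(feed_json) if adv["package"] in installed]

for adv in relevant_advisories(FEED, INSTALLED):
    print(adv["id"], adv["severity"])  # VENDOR-2016-001 high
```

Twenty lines, and the FUD never reaches a human. That is the difference a machine-readable feed makes.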

I assign news sources to tiers, according to a mechanism that works for me. Some random source that announces hacks of a PHP app run by 3 people? Not going to make my list, but it might make yours, if you have been trying to get management to get rid of that circa-1999 POS. Yes, that actually happens.

Develop something at least semi-formal

Know what you are running. A systems and software inventory can be surprisingly difficult to do. In larger enterprises, expensive high-bandwidth Internet connections can fall through the cracks. As can 7-figure database systems.

Know where your most important data live. That can be amazingly hard to do: there's always that one key worker. The one in that cube waaaay over there. With a spreadsheet that turns out to be vital.

There's an ancient dictum that suggests that you must know what you are protecting, and from whom. In many cases, forget it. It's no longer possible to predict who your adversary is. Whether your data are financial, or the secrets of military hardware, or 'just' passwords (so, really, all financial), the evolution of adversarial scanning technologies, exploit kits, etc., enables sophisticated yet entirely opportunistic attacks.

So, read the news. But have some sort of evaluation criteria, and put them under version control, because threats evolve. They have to be customized to your environment. Tier 1 will involve vendor alerts, because those involve patch planning, and (always short) testing windows. You might want a sub-schedule to break it into work shifts. Not all software comes from the same time zone, and a reaction time of less than 24 hours has repeatedly made all the difference.

Assigning to Tiers 2 and 3 might involve how you evaluate sources as to reliability, frequency, frequency in combination with consequence, etc. Get used to thinking of it as a matrix, because it very much is. I have additional considerations, which include stability of source URLs, because I track reliability and frequency over time. You may or may not need to do that -- my requirements are peculiar.
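To make the matrix idea concrete, here is a toy sketch. The fields, weights, and thresholds are illustrative assumptions, not a recommendation -- the point is only that tier assignment can be an explicit, versionable function rather than a gut feeling.

```python
# Toy scoring matrix: each source gets 1-5 for reliability, posting
# frequency, and typical consequence of what it reports. The scoring
# function and cutoffs are invented for illustration.
SOURCES = [
    {"name": "vendor-alerts", "reliability": 5, "frequency": 2, "consequence": 5},
    {"name": "generic-fud",   "reliability": 2, "frequency": 5, "consequence": 2},
    {"name": "research-blog", "reliability": 4, "frequency": 1, "consequence": 4},
]

def tier(source):
    """Map a source's scores to a tier; 1 is must-read, 3 is skimmable."""
    score = source["reliability"] * source["consequence"] - source["frequency"]
    if score >= 15:
        return 1
    if score >= 8:
        return 2
    return 3

for s in sorted(SOURCES, key=tier):
    print(tier(s), s["name"])
```

Note how the high-frequency, low-reliability FUD source sinks to the bottom tier on its own: exactly the triage you want happening before anything reaches your brain.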

Common Denominator: Vendor Bogosity is a DoS Attack

Security is hard. There is never enough time, there are never enough resources. 

A principal confounding factor is that much of what passes for news (and research, which is perhaps a greater problem) is delivered via what meets an essential definition of an attack. Background resentment of this state of affairs has long been pervasive within the security community; I'm certainly not the first to carp about it.

What is needed is not carping, but howls of resentment. It took that level of blow-back to get Microsoft to begin to change their ways, back in the day. When it became impossible to deploy a system before it was compromised, and CEOs complained, Microsoft found it impossible to ignore the noise. Much the same thing happened with vendors of proprietary Unix variants, though it wasn't as blatant. That completely changed the vulnerability disclosure game, though the vendors howled in protest. Or rather, first began to howl -- they still do it.

It is only when we begin to call another generation of vendors out that another vast waste of scarce resources will end. This lossage is more difficult to quantify, so it's more difficult to raise a collective voice. Perhaps a start might be made by recognizing that vendor bogosity is a DoS attack, and telling them so. In, let's say, extremely vigorous terms. Because history shows that nothing like a subtle approach is going to have an effect.

Sometimes the best tool for the job really is a hammer.

Wednesday, July 13, 2016

It's So Easy to Be Taken In

In other exciting news, Social Engineering attacks still work. Duh. But here's an illustrative example of it being done completely innocently. This is from another security worker-bee who was all on about why mobile and Bring Your Own Device (BYOD) was such a corporate threat.

Bogus Vatican Image
That's a complicated topic, as evaluating risk always is, and is wide of the point that I want to make: the most effective possible social engineering attack comes from the innocent and mistakenly trusted. A very human failing, greatly magnified by transitive trust (friend-of-a-friend) issues. Which, make no mistake about it, we are all prone to. I might be particularly susceptible, because I am such an open, trusting sort of person.
Boris Karloff, The Mummy, 1932

The thing about that Saint Peter's Square image is that it was already in my database as bogus. Unlike the above Karloff image, which I only include because it was a cool old movie. Frivolity, thy name is Greg.

So how did I spot this unwitting social engineering attack? Chance. Striking images stick in the mind, and I happened to remember a source that really was in my DB: a Washington Post piece titled About those 2005 and 2013 photos of the crowds in St. Peter’s Square. There is no effective defense against social engineering attacks against a broad workforce, most of whom are just trying to live their lives.
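For what it's worth, a database of known-bogus images can be as simple as a set of content hashes. Here's a minimal sketch; note that exact hashing only catches byte-identical copies, and a recompressed or cropped variant would need perceptual hashing, which is beyond this post. The placeholder bytes stand in for real image data.

```python
import hashlib

# Known-bogus store keyed by SHA-256 of the raw image bytes.
# A real deployment would persist this (sqlite, say) with source notes.
KNOWN_BOGUS = set()

def mark_bogus(image_bytes):
    """Record an image as debunked."""
    KNOWN_BOGUS.add(hashlib.sha256(image_bytes).hexdigest())

def is_known_bogus(image_bytes):
    """Check a candidate image against the debunked set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BOGUS

fake = b"placeholder bytes for the St. Peter's Square composite"
mark_bogus(fake)
print(is_known_bogus(fake))              # True
print(is_known_bogus(b"some other image"))  # False
```

This is no defense against novel material, of course -- it only stops the same debunked image from fooling you twice. But that is precisely what chance did for me here, minus the luck.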

If you do not assume that you will be hacked, you are Doing It Wrong. Worse, you are making that mistake in the face of a vast body of contrary evidence, and "Your security is important to us" PR is becoming widely ridiculed by both the security community and, more importantly, the public. Who are growing rather tired of the charade.

There are obvious things that can be done in beginning to address the problem. Most of them involve policy and standards, and the mechanisms for creating and enforcing them, or even (very doubtfully) convincing the workforce that their perfect performance is necessary. But these are, in the main, only available to larger organizations, where they work no better than they do at smaller scales.

As long as this sorry state of affairs persists, the security industry will continue to fail, in an increasingly obvious manner.

Tuesday, April 19, 2016

Blackhole Crimeware Creator Gets 7 Years

That's a nice law enforcement win. 'Blackhole' is variously known as an exploit-kit or -pack or just straight-up crimeware, as it often came with regular updates, or even support contracts. I have enough Blackhole references, dating back to 2012, in my database that it became boring to add them.

Brian Krebs reported this on 2016-04-14, at http://krebsonsecurity.com/2016/04/blackhole-exploit-kit-author-gets-8-years/. Note that there is a one-year discrepancy between the URL and the stated sentence.

I've already heard rumbles (possibly from other security worker-bees who hated plugging 'Blackhole' into a database for the nth time) that the sentence wasn't long enough. The line of thought was about scale: that Dmitry “Paunch” Fedotov, whom Krebs reports as having more than 1,000 customers, was earning $50,000 per month, and likely contributed to tens of millions of dollars stolen from small to mid-sized businesses over several years.

I can see the temptation there. Particularly the bit about 'tens of millions', and particularly the 'small to mid-sized businesses'. Organizations that fit that size description have been some of my favorite clients, are often most in need of the help, and I just generally feel better about having helped out an organization of that size, rather than some Fortune 500 behemoth. I would be amazed if I were to discover that that viewpoint is unusual, if I could somehow survey the people down in the security trenches.

But was the penalty really light, at seven or eight years? Possibly not. First off, this was a Russian law enforcement win, and the sentence will be served in a penal colony. I don't know about you, but the idea of spending 7-8 years in a Russian penal colony does not take me to my Happy Place. I'm not going to address that further.

Suppose this was a United States thing? A US citizen, in US courts, with a potential for serving a sentence in a US prison?

Krebs refers to the likelihood of 'tens of millions of dollars stolen'. I completely agree. But let's compare this to the physical world. That necessarily involves bank heists, armored car robberies, etc., where people are likely to be injured or killed. Much drama, making it a natural for movies, such as Ocean's n, or those based on the Lufthansa heist, etc. Wikipedia has a list of large-value US robberies, several of which are in that tens-of-millions category. The most recent $10+ million robberies date to 1997. The largest was the Dunbar Armored robbery, involving $27.9 million in 2016 dollars. The sentence? 24 years for mastermind Allen Pace, an insider. Under parole guidelines, he will have to serve 18 years, and five others will have to serve 8-17 years.

Bear in mind that this was a record robbery: it seems likely that it was politicized to at least some degree. The Loomis Fargo robbery ($25.5 million today) occurred the same year, and yielded sentences ranging from probation to 11 years. I haven't researched possible parole dates.

Differences in criminal justice systems make it difficult to judge whether Fedotov drew a sentence that was appropriate. But it seems to me to be broadly comparable, at minimum. That is a win for law enforcement. Penalties used to be no more than a slap on the wrist, as long as the crime was committed over the network. The extent of the damages didn't seem to matter.

There will be no immediate effect, no matter how much we might wish otherwise.

Sending signals has been less than effective in even the geopolitical realm, where huge numbers of government bureaucrats (State Department, etc.) are employed to keep it all sorted out, and react in something like real-time. Criminals will entirely miss this one, even if it should prove to be the start of a trend toward commensurate sentencing. It seems likely to be a generational thing.

I'm fine with that.

A couple of years ago I posted Law Always Lags, As It Should: "The universal claim seems to be that the law is behind the times. My take is that it is better to have law that lags than law that leads. While lagging legal thought will certainly lead to injustice, it is less likely to lead to wholesale injustice. It is the lesser of two evils in an imperfect world."

Sunday, April 10, 2016

DitL: writing about files, of all things

Have a Day in the Life post, written on a Sunday night, after a lovely Spring afternoon spent with a text editor. Gack. That is just wrong.

Writing. 121 lines, 965 words, 5836 bytes, and all about writing files of all things. It really did take all afternoon, for not very much usable output. Some days just go like that. I mostly discovered what I should have been writing, which is a piece in three (four?) parts.

  1. How badly file creation is currently being done
  2. That interstitial bit between writing and reading, which leads to exploitable race conditions
  3. Reading is not so much a problem as parsing, which has been a gold mine of exploits over the years
  4. Possibly a lead-in bit, which I am attempting to dodge by posting this
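As a taste of item 2, here's a minimal sketch of the standard fix for the classic check-then-create race. Between an os.path.exists() check and a subsequent open(), an attacker can drop a symlink or file at the path; O_CREAT|O_EXCL makes creation atomic, failing cleanly instead of following whatever appeared in the window.

```python
import os
import tempfile

def create_exclusive(path, data):
    """Create and write a file atomically; fail if the path already exists.

    O_EXCL with O_CREAT guarantees the kernel either creates a brand-new
    file or raises FileExistsError -- there is no exploitable gap between
    'check' and 'create'.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "out.txt")
create_exclusive(path, b"payload")
try:
    create_exclusive(path, b"again")  # second attempt must fail
except FileExistsError:
    print("refused to clobber")
```

The 0o600 mode at creation time matters too: set restrictive permissions atomically, rather than chmod-ing after the fact.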
An additional problem is how to present the material, as an introduction to the subject, without it being an off-putting wall of text. For instance, introducing hexdump to beginners, as well as a few programs in coreutils, all in text, turns out to be non-trivial. This stuff is a lot easier when you can just get in front of a whiteboard in scribble-yack-enjoy mode.
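Speaking of hexdump: a bare-bones stand-in is only a few lines of Python, which may be an easier on-ramp for beginners than hexdump(1) itself. The formatting is a simplified sketch, not a faithful clone of any hexdump output mode.

```python
def hexdump(data, width=16):
    """Render bytes as offset, hex pairs, and printable-ASCII gloss."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join("%02x" % b for b in chunk)
        # Non-printable bytes become '.', just like hexdump -C
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append("%08x  %-*s  %s" % (off, width * 3 - 1, hexpart, text))
    return "\n".join(lines)

print(hexdump(b"hello, file\x00\x01"))
```

Seeing the NUL and control bytes rendered as dots, next to their hex values, is usually the moment the "text is just bytes" idea clicks for newcomers.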

Friday, February 19, 2016

NIST Defines Microservices, Application Containers and System Virtual Machines

Commentary::Architecture
Audience::All

I'm back on about microservices again, as I was in my last post, Microservices and Linters. Because yesterday NIST released a draft of Definition of Microservices, Application Containers and System Virtual Machines, which you can see at http://csrc.nist.gov/publications/PubsDrafts.html#800-180.

I have problems with it. The public comments period runs through 2016-03-18, and a comment template is available at the link above. If you don't read many NIST drafts, note that line numbers are part of the document. While that is useful if you catch a typo, or take exception to one statement, it is less useful if you take exception to the entire thing, to the extent of labeling it FUBAR.

That is, of course, just one opinion. I could make a claim about not wanting to bias any opinions, but then I've just done that, to at least some extent. This is mostly about not even knowing where to start, regarding this specific document.

I still regard microservices as something that is not nearly as widely deployed as one might think, given the trade press hype. But it might be nice if, for once, security considerations might be taken into account early in the game. Yes, that is a forlorn hope, but even long odds will come home, given enough time.

But, srsly. "Microservices are built around capabilities as opposed to services..."? Wut? I am not at all convinced that we have a robust system, Linux or otherwise, based on the capabilities security model. Because

  • the capabilities security model is still a current research topic
  • implementations will contain flaws for a long time to come
  • all microservices architectures I have seen have been composed around the services (function) model

In this case, NIST seems to muddy the water through poor definitions. Particularly as it seems unlikely that wide awareness of capabilities-based security models, even as a research topic, exists at all within the wider software developer community. I've certainly seen little evidence for it, for whatever that might be worth.

Perhaps their usage of 'capabilities' was all about a possibly rapid evolution of APIs in containerized software, and not the capabilities security model. But rapid replacement of containerized software is about far more than API changes. Rather importantly, it is also about bug fixes. A percentage of those have always been exploitable, and as these services are inherently remotely accessible, they would tend to have severity numbers reflecting that broad attack surface.

A lack of clarity over what, exactly, 'capabilities' might mean, in a NIST publication with 'Definition' in the title is, in and of itself, a problem.

Of course this is just anecdotal, from some random security worker. Not even remotely real evidence. So please ask around in your own organization, form your own conclusions, and comment on 800-180, if at all, as you see fit.

UUID: 92ce460e-58a6-4fd1-b3ed-d44f2d9c0183

Thursday, January 28, 2016

Microservices and Linters

Commentary::Coding
Audience::Entry

Microservices are all the rage at the moment, for good reason. I am of course interested in the security aspects, and I am also on record as loving me some Python. Why, in detail, is probably something I should write up in a future post. For now, I'm just going to mention an intersection between the two.

In Chapter 9 (Security) of Building Microservices (Sam Newman, O'Reilly, 2015) we have exactly one mention of static analysis, under Baking Security In. Please don't misunderstand me – I found Building Microservices a worthwhile read. It is, however, a rather broad overview, and does have an unfortunate tendency to follow fashion. That last is probably not avoidable: the title has to sell, after all. And there is no possible way that all languages, strategies, etc., might be mentioned. So, no points off. Overall, author Sam Newman has done a nice job with this title.

That said, I still have a problem with static testing getting only one mention. Much has been written about developers needing to raise their game. It really is ridiculous that most injection attacks (SQL, LDAP, etc.), for example, can still exist in 2016. A lot of this is justified, but there are also some bits about QA, where that luxury still exists, that do not seem to get a lot of mention. Test-driven development can only take you so far, and an external QA group is a hugely useful defense against groupthink, deadline pressure, and other sources of problems in the delivery of reliable code.

Real developers could probably provide me with a lengthy list, and are invited to do so. You could further educate me with an ordered list – one can never have too much data.

Circling Back to Python

Back in June Andrew Collette (creator of much Python HDF5 code) wrote an excellent piece:
My Experience Using Static Analysis With Python. He was on Travis CI, and recommended both pyflakes (at minimum) and PyLint.

As it happens, we ended up using PyLint, which found about 100 legitimate issues with the code base, ranging from missing docstrings to calls to functions with the wrong number of arguments.
The takeaway is to use at least some sort of linter. Fine. That's doable, in either a full-on Continuous Integration environment, or just using git hooks in a personal repo. Low marginal cost, better code. What's not to like?
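To show the shape of the idea, here is a toy single-check "linter" that flags unused imports -- one of the very classes of issue PyLint found in Collette's code base. Real tools do vastly more (and handle from-imports, star-imports, and much else that this deliberately ignores); this is just an illustration of the parse-walk-report pattern.

```python
import ast

def unused_imports(source):
    """Return names imported via plain 'import' statements but never used."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # 'import os.path' binds 'os'; 'import numpy as np' binds 'np'
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)

print(unused_imports("import os\nimport sys\nprint(sys.argv)"))  # ['os']
```

Twenty lines of ast-walking is obviously no substitute for pyflakes, but it demystifies what linters do: static inspection of the tree, no execution required, so it's safe to bolt onto any commit hook.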

Distributed Computing Is Inherently Complicated

Modern computing environments may consist of multiple threads on multiple cores in a single machine, many nodes in a cluster, or massively parallel GPU computing. In some cases (RDMA comes to mind), basic security mechanisms provided by Unix-like kernels are already being bypassed.

The need for reliable user-land code is never going to decrease. If even a minor improvement can be had by using something as widely known as a linter, yet that is not a universally accepted practice, then we are collectively Doing It Wrong.

UUID: d4b72b13-5dd1-46d1-913a-9dc470e0b6d7