Tuesday, October 11, 2016

Resource Depletion Attacks: Commonly Performed by the Security Industry

I make heavy use of the Linux 'at' and 'batch' facilities, which provide simple but very effective methods of increasing productivity via automation. Essentially, I want machines working to their thermal, IO, etc., limits as much as possible, so that I don't have to work to my thermal, IO, etc., limits. Naturally, I regard unused cores or threads, etc., as Bad Things.
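
For anyone who hasn't used them, the pattern is roughly this. The project path, hostnames, and times below are placeholders; substitute your own:

    # 'batch' runs a job whenever the load average drops below atd's threshold;
    # 'at' runs it at a fixed time. Both read the command from stdin.
    echo 'make -C ~/src/someproject -j"$(nproc)" all' | batch

    # Push an I/O-heavy job to a quiet hour instead:
    echo 'rsync -a /data/ backuphost:/data/' | at 02:00

    # See what is still waiting in the queue:
    atq

The load threshold that 'batch' honors is a property of atd and varies by distribution, but the principle holds: the machine decides when it has cycles to spare, so I don't have to.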

At lunch today, there were three or four jobs in the queue, which is fairly typical. But none finished when expected, which is a bit atypical. The problem turned out to be, as it often is, a random security news service running core-sucking Javascript in an effort to get me to subscribe to their email service.

My bad, in some respects. I know better than to leave browser tabs open, giving the marketroids an opportunity, but sometimes it just isn't possible to sort through a long list of tabs when you are trying to go to lunch. Sometimes you get burned by running batch jobs on an active workstation instead of shuffling them off to another machine.

On the other hand, having to even think about this is an indictment of the security industry. Of which I am a part, and that is FUBAR.

The definition of a resource depletion attack, according to Mitre's Common Attack Pattern Enumeration and Classification (CAPEC, an information-sharing resource worth using), is as follows.

Attack patterns within this category focus on the depletion of a resource to the point that the target's functionality is affected. Virtually any resource necessary for the target's operation can be targeted in this attack. The result of a successful resource depletion attack is usually the degrading or denial of one or more services offered by the target. Resources required will depend on the nature of the resource being depleted, the amount of resources the target has access to, and other mitigating circumstances such as the target's ability to shift load or acquire additional resources to deal with the depletion. The more protected the resource and the greater the quantity of it that must be consumed, the more skill the adversary will need to successfully execute attacks in this category.

Note that I had an opportunity to shift the load, but I use 'at' and 'batch' casually, from the command line. A better system would also be a much more formal one. I hadn't planned to review and retest real scheduling systems until toward the end of the year, and then only if I had time, because I personally regard them as important and interesting. Perhaps I should bring that forward a bit, but it's a balance of annoyance against available time: I have no current or upcoming gig where robust scheduling systems play a central role, so there's no immediate business driver. I'd like to flesh that topic out, but it would take another post to cover even the bare essentials.

Please excuse the digression; it's time to return to the topic of the post. In the limit, almost no marketing in the security industry can be regarded as ethical, given that

  1. Your brain is a resource, and by far the most vital one
  2. Security news sources are commonly a FUD tsunami
  3. 2 is an attack on 1
  4. Current PCI-DSS requirements (sections 5.1.2, 6.1) require news monitoring

It's often helpful to look at the worst case. PCI-DSS originated as an effort to avoid governmental regulation through self-regulation. In the earliest iteration, it was as basic as requiring a firewall -- some financial organizations did not have one, even when firewalls alone were a highly effective defense. It evolved into something as relatively sophisticated as saying, "Read the damned news, people." Because they routinely didn't. At all.

As usual, security people are trapped by the lowest common denominator. Here's how that might map to an all-too-common day in the life of someone on a PCI team.
  1. You must read the news
  2. Security news sources are commonly a FUD tsunami
  3. FUD tsunamis are a DoS attack on your brain
  4. Your brain is your most vital resource
  5. FAIL

The largest offenders are the generic security news sources, and I very specifically include security product vendors. These are by far the most likely to burn your physical and mental cores with a FUD tsunami. Vendors of the software you are actually running? Sometimes they offer nothing at all (as in the early days of nginx, when it was basically just some random Russian supplier of an httpd, but still enthusiastically embraced due to efficiency), and not nearly enough of them offer anything that can be consumed by software. So we have to plod through this stuff manually.

Treasure the suppliers of the software you actually run who also provide up-to-date vulnerability data, particularly if it can be consumed by software. They free your brain, and that's important.
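
To make the "consumed by software" part concrete, here's the sort of thing I mean, reduced to a sketch. The feed URL and the JSON field name are invented for illustration; substitute whatever your supplier actually publishes:

    # Pull the supplier's advisory feed and reduce it to a list of package names.
    # (URL and '"package"' field are hypothetical -- adjust to the real feed.)
    curl -s https://vendor.example/security/advisories.json \
        | grep -o '"package": *"[^"]*"' \
        | cut -d'"' -f4 \
        | sort -u > advisory-packages.txt

    # List what is actually installed (dpkg shown; rpm -qa on that side of the fence).
    dpkg-query -W -f='${Package}\n' | sort -u > installed-packages.txt

    # Intersection: packages that are both advised against and installed here.
    comm -12 advisory-packages.txt installed-packages.txt

The details will differ per supplier; the point is that no human had to read a newsletter to produce that list.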

I assign news sources to tiers, according to a mechanism that works for me. Some random source that announces hacks of a PHP app run by 3 people? Not going to make my list, but it might make yours, if you have been trying to get management to get rid of that circa-1999 POS. Yes, that actually happens.

Develop something at least semi-formal

Know what you are running. A systems and software inventory can be surprisingly difficult to do. In larger enterprises, expensive high-bandwidth Internet connections can fall through the cracks. As can 7-figure database systems.
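
Even a crude sweep beats nothing. Something like the following, with hypothetical host names and assuming ssh access to dpkg-based systems, is a starting point, not an inventory system:

    # Grab a per-host snapshot: hostname, kernel, installed packages.
    # Swap in 'rpm -qa' for RPM-based machines.
    for host in web01 db01 build02; do
        ssh "$host" 'hostname; uname -r; dpkg -l' > "inventory-$host.txt"
    done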

Know where your most important data live. That can be amazingly hard to do: there's always that one key worker. The one in that cube waaaay over there. With a spreadsheet that turns out to be vital.
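
A blunt instrument helps here, too. Sweeping shared storage for spreadsheets won't tell you which one is vital, but it tells you where to start asking. The path below is a placeholder, and the -printf formatting assumes GNU find:

    # Largest spreadsheets on the share, with owner, biggest first.
    find /srv/shares -type f \
        \( -name '*.xls' -o -name '*.xlsx' -o -name '*.ods' \) \
        -printf '%s\t%u\t%p\n' | sort -rn | head -20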

There's an ancient dictum that suggests that you must know what you are protecting, and from whom. In many cases, forget it. It's no longer possible to predict who your adversary is. Whether your data is financial, or the secrets of military hardware, or 'just' passwords (so, really, all financial), the evolution of adversarial scanning technologies, exploit kits, etc., enables sophisticated yet entirely opportunistic attacks.

So, read the news. But have some sort of evaluation criteria, put them under version control, because threats evolve, and customize them to your environment. Tier 1 will involve vendor alerts, because those drive patch planning and (always short) testing windows. You might want a sub-schedule to break it into work shifts: not all software comes from the same time zone, and a reaction time of less than 24 hours has repeatedly made all the difference.
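
What "under version control" means in practice can be as modest as a text file and a git history. The layout, directory, and source names below are just one possibility:

    # Start a small repository for the source list, so the criteria have a history.
    mkdir -p ~/news-triage && cd ~/news-triage && git init

    # One line per source: tier, name, notes (the names here are made up).
    printf '%s\n' \
        '1  vendor-security-announce    patch planning, sub-24-hour reaction' \
        '2  distro-security-tracker     daily sweep' \
        '3  generic-industry-news       weekly skim, FUD filter engaged' \
        > sources.txt

    git add sources.txt
    git commit -m 'Initial tiering of news sources'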

Assigning sources to Tiers 2 and 3 might involve how you rate them for reliability, frequency, frequency in combination with consequence, and so on. Get used to thinking of it as a matrix, because it very much is one. I have additional considerations, including the stability of source URLs, because I track reliability and frequency over time. You may or may not need to do that -- my requirements are peculiar.
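
One way to keep yourself honest about the matrix is to write the axes down and score them, however crudely. The axes, scores, and source names below are purely illustrative:

    # scores.txt: source, reliability, signal-to-FUD, consequence (1-5 each).
    printf '%s\n' \
        'vendor-security-announce 5 5 5' \
        'distro-security-tracker  4 4 4' \
        'generic-industry-news    2 1 3' > scores.txt

    # Rank by the product of the three scores; adjust the arithmetic to taste.
    awk '{ printf "%-28s %d\n", $1, $2 * $3 * $4 }' scores.txt | sort -k2 -rn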

Common Denominator: Vendor Bogosity is a DoS Attack

Security is hard. There is never enough time, there are never enough resources. 

A principal confounding factor is that much of what passes for news (and research, which is perhaps a greater problem) is delivered in a form that meets an essential definition of an attack. Background resentment of this state of affairs has long been pervasive within the security community; I'm certainly not the first to carp about it.

What is needed is not carping, but howls of resentment. It took that level of blow-back to get Microsoft to begin to change their ways, back in the day. When it became impossible to deploy a system before it was compromised, and CEOs complained, Microsoft found it impossible to ignore the noise. Much the same thing happened with vendors of proprietary Unix variants, though it wasn't as blatant. That completely changed the vulnerability disclosure game, though the vendors howled in protest. Or first began to howl -- they still do it.

It is only when we begin to call another generation of vendors out that another vast waste of scarce resources will end. This lossage is more difficult to quantify, so it's more difficult to raise a collective voice about. Perhaps a start might be made by recognizing that vendor bogosity is a DoS attack, and telling the vendors so. In, let's say, extremely vigorous terms. Because history shows that nothing like a subtle approach is going to have an effect.

Sometimes the best tool for the job really is a hammer.