Tuesday, December 16, 2014

Why Do Security Sites Penalize Tor Users?

If you are a regular user of Tor, you are already on an NSA watch list. That came out back in July. OTOH, being on an NSA watch list is not a very exclusive club: all you have to do to qualify is read Linux Journal. That came out in July as well.

Tor development, IIRC, was originally funded by the US Navy, and received additional funding from the State Department. It was useful for dissidents living under repressive governments. You can fact-check me on the About Tor page, with no additional penalty, because the NSA is likely already targeting the most likely readers of this blog. Systems or network admin? Check. Encrypted mail user? Check. Ad nauseam.

Is startpage.com on the side of light?

Startpage.com bills themselves as "the world's most private search engine", and is the default search engine of the Tor Browser. But if you use Tor, you will periodically be presented with a CAPTCHA. On the page, you will see the following text.
As part of StartPage's ongoing mission to provide the best experience for our users, we occasionally need to confirm that you are a legitimate user. Completing the CAPTCHA below helps us reduce abuse and improve the quality of our services.
Thank you,
The StartPage Team
But I have never seen this using Firefox, upon which the Tor Browser is based.

What about that symbol of rebellion and hackerdom, BlackHat?

I am not a fan, for reasons that seem good to me. But no security worker can ignore the storied history of this conference. For those with short memories, BlackHat 2009 was when Moxie Marlinspike, Dan Kaminsky, and Mike Zusman, in separate presentations, managed to collectively beat SSL/TLS to death.

Yet Tor users will see something that is probably a bit familiar.
One more step
Please complete the security check to access www.blackhat.com
aaaaand... Another CAPTCHA.

The Worst Thing is Teh Stoopid.

CAPTCHA is far past any sort of relevance. Mechanical Turk CAPTCHA-solving was available years ago. Neither faster timeouts nor more obfuscated puzzles have fixed the problem. At this point, I can only characterize them as both increasingly annoying, and increasingly useless.

Google, whatever you may think of them from a privacy standpoint, recognizes this, and has introduced reCAPTCHA. Though this entire approach is fundamentally flawed, it is at least a temporary and partial fix. Now, if only sites that choose to market themselves as secure Internet tools, or as security-focused, would suck a bit less, I am sure we would all appreciate it very much.

Monday, December 15, 2014

Today Was an Infrastructure Day

Sometimes we are all under the hammer of time, and things have to happen Right Now. But now and then most of us get lucky, and have some slack time. I treasure those days, because I can drag out that mental, digital, or physical TODO list of things that need to be done for the future. And I pound on it, because that is some of the most interesting work that I do, and there is always a reward of some sort.

Most times, it involves working on infrastructure; the scaffolding that we have to have in order to keep doing what we do, only better. So slack time doesn't mean go sit on a beach. Which is just as well, in my case. This is December, and Oregon almost perfectly fails to resemble Hawaii. I touched on this back in August, in Optimize for the Exploration of Ideas.

So, What Did I Do?

I organized some stuff. I have a directory named 'REDACTED' (naming specific directories and what they contain is a huge information leak, if you care about security) that accumulates ideas, design notes, TODOs, etc., on where various internal projects need to go. It can accumulate a lot of cruft, and lose value as a planning tool. It needs all the care I can give it, and I measure success in this at least partially in how much useless crap I deleted.

I rebuilt some stuff. Because sometimes it only takes a few changes in a tool (either physical or software) to radically improve your capabilities. The bad guys are innovating at a tremendous pace, with the value and speed benefits that innovation always provides. If we cannot more than match that, we are hosed. Economics does not much care whether your hat is black or white.

I wrote some stuff. I never intended to become a writer of any description, but I have always admired talented writers. It was writers that turned me into a technologist at an early age, and the power to steer a life is something to be respected. I follow a couple of writer blogs, and they tend to advocate things like writing a minimum number of words per day, which my life completely fails to allow. I wrote a lot of fragmentary but hopefully clear notes to myself. I wrote this post, and hopefully moved a few others closer to publication. I also wrote final versions of template files that will change my software documentation workflow completely. Perhaps that should have fallen under 'rebuilt'.

Thursday, December 11, 2014

Good News: Power Failure

Rather expected it, actually. There are high winds blowing in from the Oregon coast, the weather news is full of it, etc.

So there I was, minding my own business, coincidentally thinking about data QA and reaction times for an entirely unrelated reason. Which makes for a very sweet coincidence, as now I've pulled data from a couple of scripts I wrote to check the APC UPS. There is frequently a PostgreSQL db server running on this machine, and the combination of databases and unreliable power always ends badly.
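Those scripts are nothing exotic. apcupsd ships an apcaccess utility whose status output is simple 'KEY : value' lines, so the core is just parsing and recording. A minimal sketch (the field names vary by UPS model, so treat these as examples):

```python
import subprocess

def read_ups_status(cmd=("apcaccess", "status")):
    """Run apcaccess and return its key/value output as a dict."""
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return parse_apcaccess(out)

def parse_apcaccess(text):
    """Parse apcaccess-style 'KEY : value' lines into a dict."""
    status = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines with no colon at all
            status[key.strip()] = value.strip()
    return status
```

Sample fields of interest during a drain/recovery event are STATUS, BCHARGE, and TIMELEFT; log them with your own timestamps, since (as I found today) you cannot always trust the daemon's.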

That usually happens sooner rather than later, but like most people I tend to put off characterizing what things actually look like as systems fall down. I advocate doing this all the time, to the extent of periodically killing test systems at the power distribution panel. "Really. Now and then, just replicate into a test environment, and flip the breakers."

That can be a huge pain in the ass, but there is really no other way to be absolutely certain. Cloud is not the answer to this issue. Or, at best, it can only be part of the solution. There are many examples of cloud failure.

Today, I got some great data on UPS drain and recovery, and found a problem with time-stamping of notifications. Discovering that bug in my code is a win. As is jogging me to post on the topic of a bug in the Linux APC UPS monitor daemon. Which I (obviously) have no control over, and which serves as an example of why greater care than usual should be taken before turning on SELinux enforcement.

Things Are Going to Go Wrong

As long as Murphy is alive and well, and Murphy seems to be immortal, things will continue to go all pear-shaped, at the worst possible times. I almost wrote 'periodically pear-shaped', but we don't always have the benefit of periodicity. Aside from the Big Three of periodic FUBAR announcements (Microsoft, Adobe, and Oracle), anyway. I might justifiably add OpenSSL and other Open Source projects, but the data to back that up is a whole new post. That is not going to happen today. Which is just as well, because the ongoing incompetence of Sony beggars the imagination. I don't even want to think about it, beyond being very happy that I am not on their security team.

Today, As An Example 

From a security perspective, we are most concerned with the CIA triad of Confidentiality, Integrity, and Availability of data. Power problems on database systems will cause issues with integrity and availability, as mentioned above. Confidentiality only becomes a factor if disparate systems with responsibility for authentication/authorization fail open if a remote system is not available. That is rare, these days. Possibly because it is an easy test. So run it, just to be certain. Really. It's just a temporary firewall rule. And, as always, make it a test, so that pass/fail is always recorded.

But, we can never miss an opportunity to get better. Particularly under circumstances so benign as a power outage. Which, for people focused on security rather than pure availability, really is benign.

So, We Are Back to Logs

I first mentioned logs in We Still Fail at Log Analysis back in July of 2013. Nothing much has happened since then to change my opinion. 18 months, and little or no progress on an operational problem that has been with us for time out of mind. That is a bit discouraging, so I feel the need to visit this issue again, and probably not for the last time.

Please look at log policy again. Logging takes many forms, of course. System and application state and performance data are both vital. Were those recorded? Was it possible for an adversary, possibly internal, to avoid detection by shutting down a remote log host, or a network path to that host? In a virtualized environment, do you have records of what machines were spun up or migrated, and the security posture of those systems? If so, are those records amenable to analysis, or are they just data for the sake of data?

That last question is not meant to imply that you may be doing anything obviously wrong, BTW. Effective means, which will stand the test of time, have yet to evolve. I regard this as an open research question. Which is a bit sad, considering how bad the failures have been in legacy environments.
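At least one of the questions above, whether the silencing of a remote log host would even be noticed, is easy to turn into a mechanical check. A minimal sketch, assuming you can pull per-host event timestamps out of whatever log store you use:

```python
from datetime import datetime, timedelta

def logging_gaps(timestamps, max_silence=timedelta(minutes=5)):
    """Return (start, end) pairs where a log source went quiet too long.

    timestamps: datetimes of received events from one host, in any order.
    A host that can go silent without tripping this check is a host an
    adversary, possibly internal, can hide behind.
    """
    ts = sorted(timestamps)
    gaps = []
    for earlier, later in zip(ts, ts[1:]):
        if later - earlier > max_silence:
            gaps.append((earlier, later))
    return gaps
```

The threshold obviously depends on how chatty the source normally is; a heartbeat message makes the choice much easier.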

Possibly, for some environments, the future lies solely in data exfiltration detection.

Wednesday, December 3, 2014

Yet More Trouble in Toyland

The spasm of Point-of-Sale exploits this year and last (Target, Home Depot, Subway, Dairy Queen, Jimmy John's, and recently even car parking and washing facilities, etc.) has been enough to do some damage to consumer confidence.

Though these were Point-of-Sale issues, they were network attacks. So if any consumer was frustrated enough to decide that it was probably just as safe doing their holiday shopping online... Oops. And now we have more evidence, if any were needed, that those security seals commonly seen on eCommerce web sites offer less surety than a shopper might be led to expect. In some cases, they can even assist an attacker.

The paper is Clubbing Seals: Exploring the Ecosystem of Third-party Security Seals
Tom Van Goethem, Frank Piessens, Wouter Joosen, Nick Nikiforakis
in Proceedings of the 21st ACM Conference on Computer and Communications Security (CCS 2014). 

It is available to the public at https://securitee.org/files/seals_ccs2014.pdf. It's only eight pages, a nice piece of work, and one example (see page 6) is jaw-droppingly funny. Which is good, because the news is pretty grim, and you will need your sense of humor.

Give it a read. If you are a consumer, quit trusting security seals on Web sites, to whatever extent that you ever did. If you are a site operator, be advised that you may not be getting what you thought you were paying for, if these scans were intended as a component of continuous audit.
Here's the abstract.
In the current web of distrust, malware, and server compromises, convincing an online consumer that a website is secure, can make the difference between a visitor and a buyer. Third-party security seals position themselves as a solution to this problem, where a trusted external company vouches for the security of a website, and communicates it to visitors through a security seal which the certified website can embed in its pages.
In this paper, we explore the ecosystem of third-party security seals focusing on their security claims, in an attempt to quantify the difference between the advertised guarantees of security seals, and reality. Through a series of automated and manual experiments, we discover a real lack of thoroughness from the side of the seal providers, which results in obviously insecure websites being certified as secure. Next to the incomplete protection, we demonstrate how malware can trivially evade detection by seal providers and detail a series of attacks that are actually facilitated by seal providers. Among other things, we show how seals can give more credence to phishing attacks, and how the current architecture of third-party security seals can be used as a completely passive vulnerability oracle, allowing attackers to focus their energy on websites with known vulnerabilities.
The paper also notes that it would be trivial for a shady shopping site operator to dodge the scans these vendors perform, either to outright save themselves mitigation expense, or to give themselves a longer grace period, while still presenting the seal to the public.

Tuesday, December 2, 2014

Law Always Lags, As It Should

The rule of law, instead of the rule of individual persons, is of critical importance. I'm not going to throw in qualifiers, such as 'to Western Civilization', or otherwise defend that viewpoint here; if you don't buy into the concept, you are so very much on the wrong blog.

Now, in an apparent contradiction, let us talk about Western Civilization law, if for no other reason than to leave China and APT threat hype out of the picture. We do many of the same things, after all. State Department warnings about carrying devices into China? We are equally guilty of the same privacy violations. It just doesn't get as much press.

The current state of our legal framework lags quite a bit behind the times. Much of this is about politicians, who must be seen to be Doing Something about whatever threat is most in the daily news. Threats, of course, take many forms. Too Big To Fail gets a lot of play, for instance. What we should be concerned about is criticality, not size, and these are not necessarily the same thing.

But, I digress.

I Have Hacker Tools, and Know How to Use Them

I am also confident that Oregon law enforcement does not care. Because 'Western Civilization' is not this vast uniform thing. A few years ago, Germany made this illegal, and much Internet drama ensued in the security trade press.

I suspect it has been selectively enforced, if it has been enforced at all. Oregon could pass a similar law tomorrow, and it would pose no threat to me. We have these people known as District Attorneys. They decide who to prosecute, which costs money, they do not have unlimited budgets, and they are not stupid.

I can prove that I'm on the side of the good guys, and have been for years. I seriously doubt that I would need to prove that 'hacker tools' are dual-purpose; I am confident that they get that perfectly well without my having to explain it to them. They are going to be far more interested in going after real bad guys, and will protect the budget that they need to do that.

Fine. I will likely help, pro bono (for the public good). Because living in a state with very low corruption (and I have lived in states, such as Louisiana, where corruption was just assumed) is great, and I do security at least partially because bad actors, arriving over the wire, have caused quite enough human suffering. Frankly, it just pisses me off.

I expect that the very same situation exists amongst pragmatic Germans.

That said, I am concerned that laws passed in the heat of the moment, and selectively enforced, are not compatible with the rule of law. Sadly, this has been seen, even here in Oregon.

Let's Talk WMDs 

Weapons of Mass Destruction. This language evolved from the military NBC (nuclear, biological, chemical) acronym. Simplistic, more understandable to the public, hence conducive to larger budgets, etc. But there has been an Oregon prosecution of a random bomb-throwing idiot, under WMD language.

I am not defending the guy; this was one of the more egregious displays of human butt-headedness in recent local history. But he wasn't exactly the sharpest knife in the drawer; I doubt he had the faintest idea of the horror of true NBC weapons. More importantly, I doubt most people who bought into WMD language do either. A random street bomb-thrower in Portland, Oregon is in no way equivalent to the Enola Gay, and the delivery of the atomic bomb that fell on Hiroshima, whether the first use of nuclear weapons was justified, or not.

Circling Back to My Point

There is a long history of laws being passed because politicians must be seen to be Doing Something. Given the immense (and increasing) amount of lobbying dollars available, and the desperation of candidates to somehow break into the modern news cycle, this seems likely to get worse before it gets better.

People complain that a large segment of law is out of touch with the times. Often it is about their pet peeve, whether that is issues connected with the Internet, such as copyright or net neutrality, or more general issues.

The universal claim seems to be that the law is behind the times. My take is that is better to have law that lags than law that leads. While lagging legal thought will certainly lead to injustice, it is less likely to lead to wholesale injustice. It is the lesser of two evils in an imperfect world.

Monday, December 1, 2014

BTW, Cyber Monday is Bogus

Unless it is a marketing (the art of manipulating people for your own purposes) success. When the Cyber Monday hype started, it was exactly that: hype. No basis in fact. Created in the early days of eCommerce, by marketing droids, as a means of extending (and cashing in on) Black Friday.

Total lie, at the start. So, to whatever extent it has become a Real Thing is a measure of the extent to which people have been manipulated.

Now it gets worse, as the US has managed to export Black Friday as well. What was once the day many retailers went into black ink (profitable), on the busiest shopping day of the year (the day after Thanksgiving, for non-US readers), has now been exported to other countries, which do not share that holiday. Canada. The UK, where there were problems with displays being ripped up in frenzied shopping.

This is the triumph of the marketing droids, and yet another thing that I dearly wish that the US had failed to export. It's right up there with Walmart and fast food. I would include universal surveillance, but the UK has arguably been in the lead on that since the founding of the royal mail, and they still loves them some surveillance cameras.

I mention this because I posted Just Buy Spam Nation earlier today, after first mentioning it back in July, and I do not routinely recommend things. No, this is not some sort of Cyber Monday marketing campaign. These days, cynics are possibly more justified than at any time since Ambrose Bierce penned The Devil's Dictionary in 1906. In this case -- no. But his definition is still worth reading.

A blackguard whose faulty vision sees things as they are, not as they ought to be. Hence the custom among the Scythians of plucking out a cynic's eyes to improve his vision. 

Just Buy Spam Nation

I am still getting traffic to http://fubarnorthwest.blogspot.com/2014/07/you-can-order-pre-order-krebs-spam.html. I'm not sure why that is. The book is out, to good reviews. For those that prefer audio/video discussion, see http://krebsonsecurity.com/2014/11/spam-nation-book-tour-highlights/ where there are numerous links to media that does not specialize in security matters.

Or just generally follow his blog, dammit. He's already back on ATM skimmers, which can be considered as a separate consumer safety area where he has carved out yet another niche as the go-to information source.

Here at casa de FUBAR, things are a bit busy at the moment, with things that will completely fail to interest most of the public, who just want to know how their confidential information became a $1 item in a foreign black-hat market, and what they can do to fix it.

A couple of those issues I will actually get to write about. That is not a common thing, so I am happy when it does manage to happen. But I have to repeat that this is stuff that, unless you operate in the security field, is technical, of little use to you, and will bore you to tears. In short, a waste of your time.

I'll be writing up my opinion of Spam Nation in the near future, but it will have my own twisted twist, in that it will not be a generic consumer review. Those are everywhere, so that shouldn't matter to you, if you are a consumer trying to understand this FUBAR new world. The book completely wins on that score. Really. Just buy it.

What I want to write is a post that discusses why security professionals should regard Spam Nation as important. The book succeeds on both consumer and professional levels. That is more difficult, and as I mentioned, things are a bit busy right now.

Friday, November 21, 2014

Running a Linux Desktop Does Not Equal Security Part 3

Part 1 and Part 2

I have never believed any of the periodic nonsense about "This is the year of the Linux desktop." There are significant deployments, but even 10% market share is probably several years away. Meaning it may never happen. While I like KDE, I don't regard this as necessarily bad. As I mentioned previously, obscurity continues to provide at least some measure of protection, from some adversaries.

Of course, the KDE project and probably the majority of KDE users will have a very different idea about the desirability of widespread deployment. So, here is something that might help that along, and I wish some other projects would do it as well.

Present a Clean, Reliable Security Advisory Notification Mechanism

Outside the personal/home arena, and particularly in environments with relatively strict security policies, deployment teams, and any security team tasked with advising them, commonly need all the notice that they can get. This includes notice which might arrive before any update notifications from the final supplier of the software.
This is even being included in some compliance regimes. Here is a quote from the Payment Card Industry Data Security Standard (PCI-DSS), Version 2, Requirement 6.2. This is the October 2010 version, not the latest. I am using the version in which the language first appeared, in order to demonstrate that this is hardly a recent thing.
While it is important to monitor vendor announcements for news of vulnerabilities and patches related to their products, it is equally important to monitor common industry vulnerability news groups and mailing lists for vulnerabilities and potential workarounds that may not yet be known or resolved by the vendor.

Patching deadlines, by severity, are also quite common. The very first version of PCI-DSS established a one-month deadline for critical patches.

According to KDE Security Policy, BugTraq and kde-announce@kde.org are the announcement venues. I was unable to find the notification by browsing the BugTraq archives, though I looked well back into October. I found a reference to Konversation, but that was posted by the Debian Project, not KDE. It looks as if the published Security Policy is not being followed in this respect.

While the appropriate announcement was present in the kde-announce archives, searching those archives was problematic: a subject search for 'security' turned up only the latest result.

Also according to KDE Security Policy, "All security alerts are published on http://www.kde.org/info/security/." That page currently contains about 80 alerts, so it seems a more reliable data source. Which means security teams should probably write a scraper/parser/notifier for it, and compare its output with final-supplier notification channels over time. Trust is built slowly, especially when there are existing problems.
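As a starting point, such a scraper need not be elaborate. A minimal sketch follows; the 'advisory' href pattern is an assumption on my part, so check it against the live page, which can change layout without notice:

```python
from html.parser import HTMLParser

class AdvisoryLinks(HTMLParser):
    """Collect <a href> values that look like security advisories."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if "advisory" in href:  # assumed pattern; verify against the page
                self.links.append(href)

def new_advisories(page_html, already_seen):
    """Return advisory links not yet seen, i.e. candidates for notification."""
    parser = AdvisoryLinks()
    parser.feed(page_html)
    return [link for link in parser.links if link not in already_seen]
```

Persist the seen set between runs, fetch the page on a cron schedule, and mail yourself anything new. The comparison against vendor channels then becomes a simple diff over time.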

Given that a lack of timely patching continues to cause an enormous amount of trouble for all concerned (it has been responsible for a large number of breaches), it would have been nice if I could have delivered a glowing report regarding KDE, or at least their notification system. Unfortunately, I can't.

Thursday, November 20, 2014

Running a Linux Desktop Does Not Equal Security Part 2

November 21, 2014 update:
Part 3 is available.

Yesterday I posted Running a Linux Desktop Does Not Equal Security, ending under a heading of Even a Less-Popular Desktop, on a Less-Popular OS, Does Not Equal Security. I claimed that the sweet spot of combining a technologically advanced Linux desktop with the very real security advantage of obscurity is now gone.

This has much to do with your expectations of who your attacker might be. In other words, your security model. I hope that your model is evolving as rapidly as the threat, but there is much evidence (the failure of various compliance regimes, such as PCI, comes to mind) that, in general terms, security models suffer from significant lag.

For instance, if you are reading this post, you are likely being targeted by the NSA. I almost certainly am. While I find this beyond annoying, that doesn't matter in this context. What matters more is that the NSA are past masters at traffic analysis, and are known to target systems administrators, anyone connected with crypto, and even readers of Linux Journal. That probably covers the vast majority of my audience, if "vast" is an appropriate term for a subset of a very small set. A sliver of a sliver, if you will.

The thing is, NSA has the capability, and are known to exercise it. So, I have to classify an agency of my own government as an adversary. Insert Grrrr, howls of FUBAR, etc., but that is not the point here. At least these Bad Guys are Our Bad Guys. The United States is not the only nation-state in the world, and the resources that nearly any nation-state can bring to bear are huge. Even tiny hermit-kingdoms such as North Korea can field significant resources, and may have significant reasons, such as a lack of foreign currency, to do so.

It gets worse when you consider that organized crime is now building secure commercial exchanges, offering support for crimeware such as banking trojans, etc.

The resources available to adversaries have now become large enough that state-, quasi-state, and non-state threat actors can target even tiny slivers if they judge, for reasons that seem good to them, that there is a benefit. A successful attack, even if launched through a flawed rationale, is still a successful attack. Obscurity has lost some value. How much depends, again, on your security model.

Back to KDE on Linux

That is quite enough (probably too much) background. No commentary on overall Linux security, though this could certainly serve as an example of more widespread problems. That would be a whole other flight of posts. Let's get specific to KDE on Linux.

A few days ago, a problem with KDE Workspace surfaced. The relevant bits of the advisory are (typos and all):

An application can gain root priveledges from an admin user with either misleading information or no interaction.

On some systems the user will be shown a prompt to change the time. However, if the system has policykit-desktop-privileges installed, the datetime helper will be invoked by an admin user without any prompts.

This is a code-injection attack, leading to privilege escalation, and KDE rated the risk as medium (whatever that means). I have several problems with this.

Some might think that this attack seems local, as opposed to something that can happen over the wire, hence not important on a laptop or workstation. The problem with that line of thought is that modern attacks are chained. The compromise of an unprivileged user account can be devastating, given how much sensitive information might be immediately compromised, and the historic difficulty of knowing the amount and sensitivity level of compromised data.

The trend is overwhelmingly compromise of user accounts via the Web browser, commonly through the atrocity that is nearly any Adobe product that will run on Linux, Java plugins, etc. All the usual suspects, one of which has given me, the attacker, a foothold, albeit 'only' as your unprivileged user account. Because you weren't browsing the Web as root, because that would be supremely stupid.

Look at the above quote again, and consider the subtleties of prompting. As an attacker who now has your privileges, I can write into your KDE configuration files. I can change the settings of your desktop clock to display time in something other than your local time zone. In short order, you will likely notice and be annoyed by this, and reset the time. Your system is now pwned, and I did not need to raise any sort of prompting dialog box, which might arouse suspicion. I only had to wait for you to prompt yourself, and contrary to the KDE alert, policykit-desktop-privileges was not required.

As a malicious root user, I will now be much harder to even detect. I'm in your kernel, or writing to flashable memory on your network card, or whatever. Eradicating me from your network might involve feeding the system into a shredder. And yes, you can shred entire systems these days. It's an expensive but available service. Desperate times, and all that.

Sadly, I Am Not Done Yet

Once again, this post has gone on for too long, other things need doing, and I haven't even touched on what might be done about the problem. I am going to have to do a Part 3, and yes, I suck as a blogger. Sorry about that. I am trying to suck less, if only because now I will have to edit the entire flight of posts on this topic to include links to updates and/or previous posts. Lack of expertise always carries a penalty.

November 21, 2014 update:
Part 3 is available.

Wednesday, November 19, 2014

Running a Linux Desktop Does Not Equal Security

Update Thursday, November 20, 2014
Part 2 is available.
November 21, 2014 update:
Part 3 is available.

For years, there have been a lot of silly things written by 'fans'. The Microsoft people versus the Mac people, etc. It took years to get the Mac people to shut up, and it may take more years to get the Linux crowd to similarly STFU.

Much of this is about Security by Obscurity. When Macs were rare, they were seldom attacked. Arguments were mounted about the inherent superiority of the BSD-derived OS, etc. These days, both operating systems are being hacked left and right, so the point is moot.

Still, the Linux fans persist, in some circles, with that same inherent superiority argument. There are valid reasons to favor Linux, but this is not one of them. Linux is being being hacked left and right as well, and 'fan' behavior is just random Internet Drama. Otherwise known as noise.

Security by Obscurity is often derided by the clueless as something to be avoided at all costs. Let's put that to rest straight off. It's an entirely valid defense, as evidenced, for example, by the well-documented reduction in attacks that results from running SSH servers on ports other than 22.

Security by Obscurity becomes a problem when it constitutes the majority of a defense strategy. If it is your sole defense, I am very glad that I am not you.

So What is Obscure?

The Linux desktop is, of course, fragmented. For years, Gnome ruled. Some of this was due to freakish historic accident: Linux taking market share from commercial UNIX, etc. At one point HP announced that HP-UX would feature Gnome as their replacement for CDE. Later this was quietly withdrawn.

The KDE desktop has a long history as well, with a surge in popularity around 2000, with KDE 2. However, Red Hat chose Gnome. They had to settle on something, as they were all about support, and Gnome had more mind-share.

KDE is not obscure, but to this day it does not have the mind-share that Gnome does. Which is a shame, in some respects, but not in others, precisely because it was less popular.

Even a Less-Popular Desktop, on a Less-Popular OS, Does Not Equal Security

KDE used to be a great workstation trade-off. Technologically advanced, easy to work with, yet almost never exploited. So, similar to the way Mac fans thought of their OS, but with a slightly better grounding in fact.

That sweet spot, such as it was, is now gone. This post is already dry and boring. I'll post a Part 2 tomorrow.

Update Thursday, November 20, 2014
Part 2 is available.
November 21, 2014 update:
Part 3 is available.

Tuesday, November 18, 2014

Expect An Increase in Russian Attacks

Some are probably seeing it already. Others will shortly. This is an easy prediction to make, as attacks arriving over the wire are more deniable, avoid much of the potential for the ugliness of arrests, expulsions of diplomats, and the general mayhem of old-school espionage. They also seem likely to be more generally useful, in terms of cost-effectiveness.

Yet old-school Russian espionage is on the rise, seemingly triggered by geopolitics, particularly the Ukraine debacle in this case, as has happened for centuries. Consider these recent news reports.

Poland expels diplomats for “activities incompatible with their status”, and Russia follows suit. All very old-school. There is also some information from the Czech counter-intelligence agency here.

Another goes into some depth about trade relations, Germany abandoning its former stance of economic pragmatism, the extent of Russia's isolation, and whether that will actually matter.

Still another reports that German Chancellor Angela Merkel has developed a fundamental distrust of Putin, and is concerned about the Balkans. So Serbia and Bosnia-Herzegovina. Even Bulgaria, at the eastern edge of the Balkans, is a matter of concern.

Russia Has Significant Capabilities

This dates back for many years. I don't want to go all cyber-war here. That concept was at least partially hype from the start. Even the issues of Russian involvement in Estonia (widely held up as the first example of cyber-war) and Georgia are not nearly so clear-cut as they are made out to be by some parties.

That said, a criminal organization known as the Russian Business Network operated, from 2006-2007 or so, with a degree of impunity that would have been impossible without some sort of governmental relationship. The RBN was highly effective, offering services and software, and spawned many offshoots. It figured prominently in various security vendor reports for a number of years dating from that time. It is probably safe to say that the effectiveness of over-the-wire techniques has been very well understood within the Russian government from at least that long ago.

Who Should be Concerned?

Pretty much anyone. Given the value of economic espionage, potential victims are not limited to, say, defense contractors. Even agricultural forecasts have figured prominently in relations between the US and Russia in the past. Obviously there are good reasons to believe that EU members should be even more concerned, particularly governmental organizations that touch on foreign policy, and business enterprises in the energy sector, or that do a significant export business (whether that currently involves Russia, or not).

The RBN had significant capabilities in the fundamental building blocks that are used in modern attacks, such as malware obfuscation and phishing. These techniques are important because they are proven.
Individuals should be concerned about the broad spectrum of social engineering attacks such as phishing emails, or simple requests for access from someone purporting to be working from home.

Security groups within organizations can mostly only do what they have always done, though hopefully with a bit more effectiveness, given the likelihood of trouble. Monitor for data exfiltration, audit systems and networks for compliance with the security posture you think you have. Patch.

One thing that seems to be little stressed is to speak with the people on the business side of things. You may discover that there is something about current negotiations, competitors, or simply the essential function of your organization (advocacy, etc.) which now makes you a more valuable target.

Raising awareness has historically failed, but it must still be attempted. Which is how this post came to be.

I Wish Spam Nation Had Been Published a Few Months Ago

My pre-ordered copy doesn't arrive until 11/24. It's on sale now, though, and you can get it next-day if you want. Amazon currently shows it to be in first place amongst network security books.

Truthfully, I don't expect it to contain any revelation which might have made all the difference had the book arrived today instead of on the 24th. I am already well enough acquainted with the situation that it probably won't affect me from an operations perspective. Still, you can never have too much knowledge, and the day will come when what Krebs knows will help me build a business case for better tooling, an idea which should appear in a better training program, or who knows what.

I'm looking forward to it.

Thursday, November 13, 2014

NOAA Can't Predict Weather, Can't Secure Their Systems

NOAA is the National Oceanic and Atmospheric Administration. It's part of the Department of Commerce, and contains the following "Line Offices":

  • National Environmental Satellite, Data, and Information Service
  • National Marine Fisheries Service
  • National Ocean Service
  • National Weather Service
  • Office of Oceanic and Atmospheric Research
  • Office of Program Planning and Integration

Of course, there is a lot more stuff in there. Here are a couple of examples. The National Environmental Satellite, Data, and Information Service (NESDIS) provides feeds to the Navy and Air Force weather prediction systems. The National Weather Service (NWS) operates the Space Weather Prediction Center, which is of interest to operators of communications systems and/or satellites, and produces the forecasts that your favorite broadcast news outlet likely reads and embellishes. Plus things like The Storm Prediction Center (useful to those of you in tornado country), and The National Hurricane Center (ditto for anyone on or near the Atlantic or Gulf coasts).

It's important.

I have a bit of a problem with NOAA, or at least the NWS piece of it, because they can't seem to predict the weather. I don't expect miracles; accurate prediction for much more than a week in the future is impossible due to the nature of complex dynamical systems with a sensitive dependence on initial conditions; read any good reference on Chaos Theory. Personally, I enjoyed Chaos: Making a New Science by James Gleick. Chaos Theory was pioneered by Edward Lorenz, a mathematician and meteorologist who was trying to model simple weather on an early computer at MIT.

So, this stuff was invented here in the US. Though lately we have fallen behind, as we have in so many other areas.

I can offer an example of that with the NWS having completely blown my local forecast (not an unusual thing) for the past couple of days, with temperature misses on both the high (at least one) and the low (two) sides. When they forecast 22°F, and it doesn't even freeze, that is a blown forecast.

It's even important enough that we should keep those systems secure, and indeed the federal government is required to secure their systems by the Federal Information Security Management Act of 2002. Here is U.S. Code, Title 44, Chapter 35,  Subchapter III, § 3541

The purposes of this subchapter are to—
(1) provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support Federal operations and assets;
(2) recognize the highly networked nature of the current Federal computing environment and provide effective governmentwide management and oversight of the related information security risks, including coordination of information security efforts throughout the civilian, national security, and law enforcement communities;
(3) provide for development and maintenance of minimum controls required to protect Federal information and information systems;
(4) provide a mechanism for improved oversight of Federal agency information security programs;
(5) acknowledge that commercially developed information security products offer advanced, dynamic, robust, and effective information security solutions, reflecting market solutions for the protection of critical information infrastructures important to the national defense and economic security of the nation that are designed, built, and operated by the private sector; and

(6) recognize that the selection of specific technical hardware and software information security solutions should be left to individual agencies from among commercially developed products.

For a few years I tracked the report cards, which were created in addition to the Office of Management and Budget annual report to Congress. Here are the results for the Department of Commerce, which 'owns' NOAA.

Year    2003  2004  2005  2006  2007
Grade    C-    F    D+     F    D+

Shortly after that the metrics were changed. In fairness, they needed to be; threats and defenses were both evolving rapidly. And now most departments are doing at least fairly well. On paper, at least. I have my doubts, given the history of breaches, that they are doing sufficiently well.

According to the most recent OMB report to Congress (FY 2013), 2,328 security incidents were reported to US-CERT by the Department of Commerce between 10/1/2012 and 9/30/13.

Is that datum worth anything? Are things reliably reported? According to Chinese hack U.S. weather systems, satellite network (Washington Post, November 12, 2014), NOAA managers are quite capable of covering up a breach which occurred in September, announcing only "unscheduled maintenance" in October, and failing to follow Department of Commerce policy of notifying the Commerce Department Inspector General within two days of any security incident, and of notifying law enforcement.

The WaPo piece also mentions that NOAA declined to discuss any of this, or whether or not classified information was compromised. NOAA cited an ongoing investigation for not discussing it, and I am fine with that. But there was likely another reason as well: they are in such horrible shape that they did not, indeed could not, know fundamental things.

Two months before the breach, on July 15, 2014, The Office of Audit and Evaluation, part of the Department of Commerce Office of Inspector General, released
FINAL REPORT NO. OIG-14-025-A Significant Security Deficiencies in NOAA’s Information Systems Create Risks in Its National Critical Mission. This report is a bit of a horror story. For instance, 47% of their security control assessments were deficient, "... and may not have provided the AO with an accurate implementation status of the system’s security controls." Note that AO is jargon for 'authorizing official'; the person who signs the Authorization to Operate a government system.

So for some time, NOAA (specifically including the AO) did not accurately know their security posture, and did not know that they did not know. Which made the Authorization to Operate less useful than a random piece of scrap paper, which is at least not actively damaging.

Coming on the heels of Monday's United States Postal Service breach, this does not inspire trust in government systems.

Thursday, November 6, 2014

It Has Been a Long Couple of Weeks

Aaaand... I can't talk about the vast majority of it. Bummer, but also par for the course. So here are a couple of remarks about network time. Which seems to be something people take a bit too much for granted.
  • I can mount a coherent argument that verifiable, accurate timekeeping is the most valuable service that modern networked systems provide, save authentication and authorization services. 
  • It is likely that many (possibly a majority) of the designers and administrators of the systems that provide accurate network time do not understand common problems with ntpd WRT what they are actually trying to achieve, or have not evaluated the implications of moving to, say, chrony.
Just saying that for something as fundamental as I regard network timekeeping to be, sane behavior does not involve blind trust. It involves homework. The effort you can expend is of course directly proportional to how you weigh the criticality of network time. This obviously goes to the roots of estimating risk.
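
A minimal start on that homework, assuming the standard client tools are installed (availability varies by distribution):

```shell
# Classic ntpd: list peers, with offset, jitter, and reachability
ntpq -p

# chrony: current offset, frequency error, and stratum
chronyc tracking
chronyc sources -v   # per-source detail, with a column legend

# systemd-based systems: which sync service is actually active
timedatectl status
```

None of this replaces understanding what your systems actually need; it just tells you what they are currently doing.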

If you are confident that you have a good grip on this, and have considered it WRT, say, forensics and recoverability, fine. What I am worried about is that security professionals so often get trapped in a security trade-press news cycle ("are we safe from <recent news>?" questions). The news, for at least a year now, has been worse than usual. That does not mean we can slack off on the fundamentals.

Wednesday, October 22, 2014

Bash: Useful, But Do Not Do Silly Things

One of the first priorities in responding to Shellshock, by vendors, security staff, and admins was to sort out the problems with Web servers. I don't think that was handled particularly well by vendors, but that would be a whole different post; this is about software development.

Some developers routinely do some insanely dangerous things. Here's a Start Page search on web server written in bash. Your results will vary with the time you hit that link, but I find some horrible text.

  • "Bash-httpd is a web server written in bash, the GNU bourne shell replacement. Why did you write it? Because I could. :)"
  •  "A web server written in bash. Contribute to bashttpd development by creating an account on GitHub."

No, I do not want to contribute to your project, nor do I want to create some random server, in an entirely inappropriate language, just because I could. Because that would be FUBAR; this isn't a matter of discovering that your Web server calls bash behind the scenes, and scrambling to recover from the problem. This is writing the whole damn thing in bash. To be fair, some of those sites may be plastered with text that says, roughly, "Don't deploy this, because that would be FUBAR." Because, security considerations aside, a network listener written in bash is going to be horribly inefficient.

Sadly, I have encountered at least one in the wild. It's a Real Thing, and I have to wonder how many of them also proudly proclaim it in HTTP response headers. Because, you know, sometimes it isn't enough to be horribly vulnerable. You have to be easily discoverable as well.
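
Checking what a server advertises about itself takes one line. The response below is fabricated for illustration (no real bash httpd necessarily sends this exact header); in the wild you would point curl at the host instead:

```shell
#!/bin/sh
# A made-up response, standing in for: curl -sI http://host/
cat > /tmp/headers.example <<'EOF'
HTTP/1.1 200 OK
Server: bash-httpd/0.1
Content-Type: text/html
EOF

# Pull out the self-identification line
grep -i '^server:' /tmp/headers.example
```

If your discovery scans start matching on lines like that, you have found something worth escalating.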

That may be insufficiently disturbing, so have a nice 4:22 minutes on YouTube.

As Usual, I Can't Say Much

Because "I am working with example.com, trying to fix their horribly exploitable systems" is not really the done thing. That is so very Duh, but some people seem to expect it. Not sorry to disappoint. While it would be nice to put up a couple of plots in the future, right now I will have to stick with some generalities.

As background, I have written and maintained some bash software that was entirely too large. That was an artifact of being involved with Linux since it was a New Thing. If you were only just moving onto the platform, and did not have a lot of in-house expertise, there was a large temptation to mandate that all code will be in bash. You just spent training dollars so that your admins can maintain init scripts, etc. I've had some responsibility for writing supporting training doc, so I have a pretty good grasp of how that situation can evolve.

As per usual in the enterprise, inertia sets in, and code bases become bloated and difficult to change (much less rewrite in a more appropriate language) without serious effort. This is, of course, a common problem no matter what language is initially used, and leads to creaky legacy systems, and mounting maintenance costs. Nothing unusual here, save that shells are a worse starting point than usual.

This is still an easy trap for me to fall into, so don't let it bite you as well. My first instinct when exploring new log files (assuming they are text) is to go to a command line. The shell, and native tools, are a fast way to get an initial look at the problem, especially if you redirect or tee output into result files. You can understand the nature of the problem very quickly, and the trap is the ease with which you recycle those exploratory efforts into something long-lived. Boom: instant legacy code. It was never intended to be performant, and will now waste system resources roughly forever.
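
A sketch of exactly that trap, with fabricated log lines (the format is Apache-ish; the addresses are from documentation ranges):

```shell
#!/bin/sh
# Fabricated sample of an access log
cat > /tmp/access.example <<'EOF'
10.0.0.5 - - [22/Oct/2014] "GET / HTTP/1.1" 200
10.0.0.5 - - [22/Oct/2014] "GET /login HTTP/1.1" 200
192.0.2.9 - - [22/Oct/2014] "GET / HTTP/1.1" 404
EOF

# Top talkers, tee'd into a result file -- fast, useful, and exactly one
# cron entry away from becoming permanent
awk '{print $1}' /tmp/access.example | sort | uniq -c | sort -rn | tee /tmp/top-ips.txt
```

Five minutes of this is great triage. Five years of it running out of cron is the legacy problem described above.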

This is important. I first mentioned it in July, 2013, in We still fail at log analysis, which is about some 2010 results, but is still entirely relevant in 2014.

Are Code Analysis Tools the Answer?

There is no single answer, but they can certainly help. One tool that I use is purely home-brew, and evolved so long ago that I don't remember origins. It certainly predates this 2008 bug I filed against the kate editor in KDE: alerts.xml is poorly ordered, insufficient, and contains a bug (the KDE team did a better fix than I requested, as they also alerted on 'deprecated', which I didn't think to include). Probably by several years. 

The latest version of the tool reports on everything kate highlights, number of comments, lines of code count, and the rest of the things you would expect. It walks scripts calling other scripts, and reports the number of chained files. Those last bits are important, because in larger bash code bases, scripts calling other scripts is a common behavior. It's almost required, for maintainability, but as soon as you do it, you are probably passing things around in the environment, and we have just had a painful lesson in what that can lead to. So I have added a few things to it.
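
One of those additions is easy to sketch. The pattern list below is illustrative and deliberately incomplete (it misses, for instance, scripts invoked via bash -c), and the file names are made up:

```shell
#!/bin/sh
# A fabricated script that chains to three others
cat > /tmp/deploy.example <<'EOF'
#!/bin/bash
source ./lib/common.sh
. ./lib/logging.sh
./helpers/cleanup.sh
EOF

# Count lines that source or call another script
grep -cE '^[[:space:]]*(source |\. |\./)' /tmp/deploy.example
```

Once the count climbs, you are almost certainly passing state through the environment, which is exactly where ShellShock hurt.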

Overall, it is becoming my rough-and-ready guide to when something needs to be rewritten in a more appropriate language. But it suffers a common weakness of home-brew tools. It won't alarm on things I would never do. Such as writing a network listener in bash.

Tuesday, October 14, 2014

Periodicity, or ShellShock, the Gift That Keeps on Giving

Oxford definition of periodicity: The quality or character of being periodic; the tendency to recur at intervals: the periodicity of the sunspot cycle.

Which is fine, so far as it goes, but we need to go a bit further.

In Linux-land, if we need something to occur periodically, with a high degree of certainty, we use the cron facility, which dates back to classic UNIX, and time out of mind. We also turn off any modifiers that add a random delay. Random delays can be a useful feature if we do not want, for instance, tens of thousands of systems all hitting a server at once. But sometimes that is absolutely what we do want. For instance, random delays can really mess with intrusion prevention systems that alarm on network traffic occurring outside of narrow windows. Well, it messes with the people that have to respond, anyway. The IPS itself, being software, does not care.

Straight off, we have two things to periodically (terrible joke) think about.
  1. Strict periodicity
  2. Periodicity with random delay
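
In crontab form, those two look roughly like this (times and paths are illustrative; note that % is special to cron and must be escaped, and that $RANDOM needs bash, hence the SHELL line):

```shell
SHELL=/bin/bash
# 1. Strict periodicity: 03:15 exactly, every day
15 3 * * *  /usr/local/bin/nightly-check

# 2. Periodicity with random delay: sleep 0-599 seconds first, so a fleet
#    of machines does not stampede the server at the same instant
15 3 * * *  sleep $((RANDOM \% 600)); /usr/local/bin/nightly-check
```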

There are at least two additional things we might want to consider. Let us get the more unusual (but tremendously useful) case out of the way first.

Suppose that we want something to happen n times in a time span of length l. Furthermore, we want the intervals between successive occurrences to not be predictable. If you can't imagine the use for such a thing, I invite you to consider Quality of Service (QoS), which can be driven into the code of distributed computation systems as well as the contractual agreements that humans may be more familiar with. These can be couched in terms of a length of time (l), so being able to specify l, and the number of tests n you want to run in each l, is useful. We might also want n to vary, and to specify the allowed range of n. It's a hedge against cheating, and can yield better statistics. In software, you can carry this to extremes.
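
A minimal sketch of that case: pick n unpredictable offsets inside a window of l seconds (values illustrative; a real scheduler would then sleep to each offset in turn):

```shell
#!/bin/bash
# Choose n=5 run times within an l=3600 second window
n=5
l=3600
for i in $(seq 1 "$n"); do
    echo $(( RANDOM % l ))
done | sort -n        # sorted, so sleeping to each in turn is simple
```

Note that $RANDOM is not cryptographically strong; if the point is to resist a cheating counterparty, draw from /dev/urandom instead.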

Now we have three items in our list.
  1. Strict periodicity
  2. Periodicity with random delay
  3. Periodicity with multiple random delays

To the best of my knowledge, there is no facility in any bog-standard OS that supports this, out of the box. That is a problem, because it has been coded, ad-hoc, innumerable times.

Now, let us add a fourth category. Suppose we want to do something at some point in the future, perhaps even repeatedly, but not on a periodic basis. I do this quite a bit. Some software package or OS is due to be updated, and I want to tell the system to get it at midnight.

In Unix or Linux, we have the 'at' facility for this. I can literally use the term 'midnight'. Or noon. Even teatime, though I don't have to limit myself to those times. I can queue the job months in advance. It's wonderful. I keep a text file of 'at' jobs. The second time I need to do something in the future, it goes into that file. My theory is that if I have needed to do it more than once, reviewing that file might remind me of something that needs to happen. It only takes a moment, and has saved me in the past. I even use 'at' jobs in that file to notify me of conferences. That is very much off-the-wall; sane people don't do that. But it is also a measure of the usefulness of the 'at' facility.
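
For the record, the basics look like this (the update command is illustrative; substitute your package manager of choice, and the job number is whatever atq reports):

```shell
# Queue a job for midnight; 'at' reads the commands from stdin
echo "apt-get update && apt-get -y dist-upgrade" | at midnight

# Housekeeping
atq       # list queued jobs
at -c 7   # show what job 7 will actually run
atrm 7    # remove it
```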

The addition of 'at' gives us the following list.
  1. Strict periodicity
  2. Periodicity with random delay
  3. Periodicity with multiple random delays
  4. Aperiodic

And that is where the bash ShellShock vulnerability bites, yet again. It turns out that the fix for ShellShock broke 'at'.

Think about this. ShellShock was announced (it may, of course, have been known previously) on 9/24. OpenSSH sshd, mod_cgi and mod_cgid in Apache, and various DHCP clients were affected. At least three system calls (of which Linux has far too many) can be vulnerable. In severity, this is comparable to HeartBleed. But it wasn't until 10/4 or so, in the neighborhood of nine days, that a fix for a problem with a major Linux job scheduling facility became available for at least one common Linux distribution.

I could go on and on about this; I have not even scratched the surface. But it is getting late, lunch never happened, and I am one hungry unit.

Thursday, October 2, 2014

A Brief Foray Into the Horrible

I try to stay out of the consumer side of security, for several reasons. Leading that parade is that consumer security is so truly FUBAR that it is difficult to know where to begin. One possible starting point was when a friend talked me into trying to help her friend, who was having continual problems with having a business PC hacked. It turned out that the source of this person's problems was running Kazaa peer-to-peer file sharing software (itself laden with adware) on a Windows machine, pirating every virus-ridden thing in sight, and not being interested in Not Doing Silly Things.

At a certain point, you have to think in terms of triage.

  1. some are in no immediate danger
  2. some can be saved if you act immediately
  3. some are doomed no matter what you do

This person was an obvious three.

I know of other people who simply buy a new PC when their current machine grinds to a halt as various bits of botnet malware fight for supremacy. In the meantime they are of course a menace to everyone else on the Internet. These people are also, collectively, threes.

Unfortunately, There is Some Bleed-Over

I once overheard a guy (with PCI-DSS in his job title) mention to another person (also working the PCI-DSS issue) that a colleague now had an Internet Explorer start page inexplicably pointing to some outlandish search site. Apparently neither of these people were able to recognize that browser start page hijacking is a classic indication that your machine isn't yours any more.

That was a casual conversation taking place by a couple of people walking past my cube. But it sort of jerked my head out of whatever I was doing, and I found the guy they were talking about connected to a client network, as he chatted with them about some problem they were having. Nor would he disconnect, despite my desperate hand-waving and other futile attempts to silently communicate that his machine was infested, and he should not be connected to a client LAN. Though any damage was likely already done, at that point.

The site Security Officer (I was a mere consultant) had an office a very few steps away, so bursting into a meeting was enough to get the problem handled. Except that it turned out that there was no local experience with credential-stealing, etc. I don't know how it all worked out in the end. I suspect that nobody wanted to know.

This is Very Bad News

Four people were involved in this. The two having the conversation, the guy with the infected machine, and me. Only one had a clue, but all were systems administrators, or specifically had 'security' or 'compliance' in their titles.

It has always been hard to find security people. It's hard to even define the term, given the breadth of the field. Reasonable people can argue either side of the question of whether or not PCI-DSS has been a failure, and that is, after all, a very narrow corner of the field. However, a certain amount of consumer-level security awareness is clearly lacking, even amongst those with security in their job description. So, at some point, I have to go there.

So, Changes

I'm hoping (probably with no prospect of success) to cheat a bit by doing a bit of rearranging of fubarnorthwest. It was always a bit strange for me to link to physics blogs instead of security blogs. There was a reason for doing that, but I never wrote the explanatory post(s), and without those it seems, well, insane. As a blogger, I suck. But my goal is to suck less, so those are going away.

In their place, I'm adding the first consumer-oriented security blog. That would be Krebs on Security. Unlike me, Brian Krebs is a blogger who does not suck. I have mentioned him before in Java Security Revisited--Part 1 and You Can Pre-order Krebs' Spam Nation Now.

There will be other changes.

About that Credential-Stealing Thing

Pony, to take a common example, is a piece of malware that is still called a downloader--something used to fetch malicious payloads onto a compromised machine. It is also a product, albeit one produced by the Bad Guys. As such, features were added, and by 2012 it was also quite the accomplished credential-stealer for Windows. It has become far more powerful since, adding crypto-currency capabilities, and much else. Looking back into my notes, I would like to present a list of the Windows software that Pony could steal credentials from, as of 2012. There were likely others even then, there are certain to be more now, and of course this is only one piece of malware, amongst many.

32bit FTP
Bromium (Yandex Chrome)
BulletProof FTP
Chromium / SRWare Iron
CoffeeCup FTP / Sitemapper
CoffeeCup Visual Site Designer
Comodo Dragon
Directory Opus
Easy FTP
FAR Manager
FastStone Browser
FreeFTP / DirectFTP
Frigate3 FTP
FTP Commander
FTP Control
FTP Explorer
FTP Surfer
FTP Voyager
Global Downloader
Google Chrome
Internet Explorer
Notepad++
Odin Secure FTP Expert
System Info
Total Commander

Managing high-surety systems from lower-surety systems is an idea assembled from 100% FAIL. But if you must do this, being able to spot at least the most blindingly obvious indicators of compromise is a skill you need to have.

Wednesday, October 1, 2014

Never Trust People Who Make Blanket Statements

To forestall counter-rants, that title is technically termed 'Delicious Irony'.

There are HTML entity names and numbers defined for some tiny little things in circles. &reg; or &#174; for Registered Trademarks, &copy; or &#169; for Copyright. They don't seem to work on blogger.com, at least in Preview, which is one more argument against using this environment, though there is likely some secret sauce you can apply if you are willing to be locked in. That was probably worth a small amount of snark, but I have to blow it off in favor of The Greater Snark.

What we could really use is a capital I in a tiny circle, defined as Irony. Because people seem to have a huge problem with recognizing it, even when they share the same language and cultural background, and even if it squats on their heads and barks. I have no definitive idea of why this is so, though I tend to think that John Scalzi showed more than a bit of insight at http://whatever.scalzi.com/2010/06/16/the-failure-state-of-clever/.

On to Serious Security Stuff

Because I really do have a point to make. Two, actually.

Months ago, I ran across an article focused on 'we all love JavaScript'. node.js, ubiquitous tool on either side of the connection, love, love, love. 'We all' should have sent up a lot of warning flags--perhaps to the extent of 'I can stop reading now.' It is so horrendously hard to stay informed, in the current security landscape, that reasons to stop reading may be more useful than reasons to keep reading.

First Point

This is a language which was created in 1995, yet contained a Y2K bug. It was a very silly time to create a language with short- versus long-date problems. It was a time when most Web sites were entirely static, Java applets could not be effectively downloaded over the then-average bandwidth, and the quest for interactivity was on. The very name JavaScript, which has no relation to Java, was all about marketing to this desperate audience. There is another rant in the works about marketing. I'll change this to a link when I post it.

Regarding Y2K: this was not nearly such a non-event as people (and trade press) who rate everything on an Internet Drama scale seem to think. The fact that Y2K had very little effect was more a measure of the vast resources expended on fixing the problems, and how effective those measures were. It was a huge win, but lacked Internet Drama, so it is now widely regarded as hype. Nothing could be further from the truth.

Second Point

The existence of http://shop.oreilly.com/product/9780596517748.do (JavaScript: The Good Parts).

The capsule description reads:
"Most programming languages contain good and bad parts, but JavaScript has more than its share of the bad, having been developed and released in a hurry before it could be refined. This authoritative book scrapes away these bad features to reveal a subset of JavaScript that's more reliable, readable, and maintainable than the..."

O'Reilly has arguably done huge damage to Internet security from their beginnings, when they coined the term 'LAMP': Linux, Apache, MySQL, and PHP. The latter two components have not been, shall we say, filled with bliss, over the years. But love them or hate them, they do publish important titles, and this was one of them.

So, Is Everything FUBAR?

In broad strokes, yes. Things are generally FUBAR, in the general case of the overall security landscape. It has never really been otherwise. This is not necessarily true in every case.

There are instances where large Javascript libraries are deployed, unvetted, for no better reason than skinning a Web site. I will note that the ability to choose your color scheme seldom has anything to do with color-blindness issues, which would at least be a usability win for a surprisingly (to me, at least) common problem. OTOH, other libraries are deployed for reasons that are far more important than skinning (think financial institutions), and where vetting is just not done. The median is probably somewhere around MathJax, which is non-frivolous, is not widely deployed in sensitive consumer-facing applications, and is just cool as hell.

But history demands that we presume the worst case, and we need rock-solid analysis tools, the output of which we can walk up the management approval loop.

To Return to the Theme

Blanket statements are deserving of suspicion. They are probably a good reason to stop reading any Internet content, whether from a mainstream news outlet or social media. If you see statements beginning with, for example

{everyone|no one|we all}

and ending with (again, for example)

{love|hate} <some technology>

there is likely to be a problem with the content. It may be a simple lack of critical thought, but it could also be the advancement of a hidden agenda, for corporate, political, or other purposes. Propaganda, IOW. Marketing. Or perhaps you are only paying attention to fora exclusively populated by people who believe exactly as you do. Which is the group-think problem, taken to the limit, and one of the problems that the Internet has delivered to all of us.

Friday, September 26, 2014

Weird Little Details Matter

I would love to extensively write about Shell Shock, the latest vulnerability-with-a-brand-name. I do have a few hours invested in it, but this is the sort of thing that can be difficult to approach.

If I were mitigating this on a gig, I likely couldn't mention much; certainly nothing that could identify the client in any way. Because ethics. I am fine with that. First off, it indicates that some organizations that I care about have matters in hand. Good on them; this is not a simple thing to do.

Secondly, I get to sit back and take notes on things like the ugly beginning of this thing, how rapidly the exploit attempts began, how rapidly affected systems could be identified (a shell is used in places which might surprise you), speed and completeness of vendor response, etc. This is good data to have, and if I were neck-deep in an operational security work flow, prioritizing systems by criticality, etc., I wouldn't have it.

That work is a sunk cost; it needed to be done in any event, for professional reasons, whether it ever provides any direct reward or not. That is just the nature of the business; you do not get paid for everything you do, but it still has to be done.

It Would Have Been a Busy Week Without Shell Shock

It has been difficult to post lately. There is so very much going on right now. Some of it is structural; this is the end of a major part of the Conference Season, and there have been important results. Some is just in the nature of the weird little details that crop up now and then.

As an example of the weird little things, consider tamper-evident labels. These can greatly simplify the life of a security practitioner, from physical inventories (in combination with bar code scanners) to defending against hardware keystroke loggers (the variant that is placed inside the keyboard), and more. In 2008, I was recommending a line available from Grainger, but by 4/6/11, it had become unavailable.

TE Connectivity has a nice line, but at least some of it requires thermal-transfer printing, and lot sizes start at 10,000 units, depending on just what you are after. Good to be aware of, but I am no closer to finding an industrial supply house that can supply a useful product in lot sizes from the hundreds to the low thousands. I need to be, in that "No problem, go here" is a lot more useful than "I've been working on it".

So there you are, from broad strokes to one example (there have been others this week) of a weird little detail. I often think that the weird little details are the more important ones, and I can offer Shell Shock as an example.

Tuesday, September 16, 2014

Defense In Depth: 2500 Years and Counting

That 2,500 year number is probably conservative. Funding issues, for those security worker-bees trying to deploy along the lines of a defense in depth strategy, may be even older. It seems likely to me that either the first bright spark who thought of it could not get the tribal elder to agree, or records from more than 2,500 years ago have not survived, or (more likely) I have suffered a research failure.

This image is of a defense in depth deployment at Dún Aonghasa, County Galway, Ireland, c. 500 BCE, 2,500 years ago. The Iron Age: brutal weapons, very little medical knowledge, and a life expectancy of 26 years.

It is probably safe to say that defense mattered to these people, on a level more fundamental than identity theft, problems with current near-field payment schemes, or any other current IT security concern. Being hacked by an iron sword has more immediacy than being hacked by a network intruder. The prospect of a horribly painful death tends to focus the mind on what actually works.

Note that

  • No military of any nation (collectively, they know a thing or two about horribly painful death) has ever had a problem with the value of a defense in depth strategy
  • Even the militaristic United States of 2014 has funding problems

Monday, September 15, 2014

A Problem With IRC Chat

A lot of discussion related to development and support in the Open Source world happens over IRC. If you are part of that community, this may be relevant to you.

[16:04] [MOTD] - **************************************************************
[16:04] [MOTD] -                       SECURITY ALERT
[16:04] [MOTD] -
[16:04] [MOTD] - Over the weekend of 13th-14th September freenode staff noticed
[16:04] [MOTD] - some compromised binaries present on a number of servers.
[16:04] [MOTD] - The servers in question have been removed from the network and
[16:04] [MOTD] - shut down.  However, it's possible that network traffic  -
[16:04] [MOTD] - including SSL traffic - has been sniffed and passwords
[16:04] [MOTD] - exposed.
[16:04] [MOTD] -
[16:04] [MOTD] - We therefore recommend that all users change their nickserv
[16:04] [MOTD] - password(s) to a new value which is not shared with any
[16:04] [MOTD] - other service.
[16:04] [MOTD] -
[16:04] [MOTD] - You can do this with /msg nickserv set password newpasshere
[16:04] [MOTD] -
[16:04] [MOTD] - Please note that investigation is ongoing to discover the root
[16:04] [MOTD] - cause of the attack, and until this investigation is complete
[16:04] [MOTD] - we cannot be 100% certain that all traces of the compromises
[16:04] [MOTD] - have been removed. We may have to ask you to change your
[16:04] [MOTD] - passwords again after analysis has completed.
[16:04] [MOTD] -
[16:04] [MOTD] - Further details will appear on https://blog.freenode.net/

As an aside: not posting for a while (things are busy) seems to have foxed those annoying +1 bots for the moment. The ones that +1 every post you make, seconds after submission. Alas, it won't last.

Monday, August 25, 2014

Optimize For the Exploration of Ideas

This applies to office/lab/shop/whatever design and maintenance, and the idea is simple:
minimize the time and effort overhead in moving between the inception of an idea and starting to develop it.

We all have different tolerances for this, and it varies by the value of the idea, and the immediate impact it has. In my case, sometimes a Post-It note will do. Other times, I need to start Right Now, when my brain is on a fast boil.

I have found those fast-boil cases to be not only the most useful, but the most fun. I ignore those famous quotes about a clean desk being a sign that you aren't doing anything interesting or useful. 

Being unable to find either a tool or information, having to clear either a physical or electronic work space, or anything similar to those things, can be a huge loss. Staying organized is a constant battle for me, but every minute or dollar I have ever spent in the effort has proven to be a useful investment. Seriously. In my experience, there have been no exceptions to this rule, even when the time, effort, or money required has been large. The times when the resource expenditure required to clean things up is largest are exactly the times when getting things sorted pays off the most.

I want to be able to find a file (physical or electronic), a drill (as in the physical tool), space to deploy a new instance of a server or application Right Now. I optimize for the exploration of ideas.

Friday, August 22, 2014

Non-Slacker Friday--While Stupid and Lazy

This morning I woke up with a room-temperature IQ. Heh, it happens to the best of us, so it certainly happens to me as well. Luckily I've also been incredibly lazy all morning, which has (hopefully) prevented me from energetically doing stupid things. This has nothing to do with Friday, per se, as I have no traditional fixed schedule in which a Monday through Friday work week is followed by a weekend. Like everyone else in IT I am, for better or worse, interrupt-driven. This is hardly limited to security workers.

So, that whole waking-up-stupid-and-lazy thing is more in the nature of something that just happens now and then; I form no hypothesis as to why. Still, there are a couple of things I can write up that (again, hopefully) do not require a great deal of thought. Both are related to community service, which of course takes many forms, from the purely physical to the virtual or professional. On the physical end: yesterday I told a neighbor that there was a family of river otters (two adults, three kits) playing in the Willamette River behind casa de Greg. Great fun to watch; search YouTube if you doubt it.

Upon those River Otters hangs both a tail (I thought they might be beavers until I saw one) and a tale. The tale is about virtual and/or professional communities, databases, SELinux, and how I came to see them. It goes like this.

Very early Wednesday morning, I had a rare summer power outage. Given the timing, and the number of sirens I heard a short time later, it seems likely that someone hit a power pole. This wasn't an immediate problem, as I was on a Linux workstation protected by an APC UPS. Calibration data and a bit of testing led me to expect between 30 and 40 minutes of battery life under reasonable loading: enough to save work, write whatever notes were required to maintain mental state, and do a clean shutdown if necessary. Given my power-pole hypothesis, a shutdown seemed likely to be needed, and I could track UPS state, as remaining time faded, via a trivial bash script.

$ cat apcrpt
#!/bin/bash
# apcrpt: Quick look at the APC UPS.
date
apcaccess | egrep "(STATUS|LOADPCT|TIMELEFT)"

Here is a sample run, taken a moment ago, under a very light load:
$ apcrpt
Fri Aug 22 10:45:24 PDT 2014
LOADPCT : 8.0 Percent
TIMELEFT : 89.5 Minutes

When I became uncomfortable with remaining time I shut the system down, and walked down to the river. Hence River Otters and, as luck would have it, turning an annoyance into a Very Cool Thing.
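That "uncomfortable with the remaining time" judgment call could be automated. Here is a minimal sketch, assuming apcupsd's apcaccess is on the PATH; the 15-minute threshold and the "watch" entry point are my inventions, not anything from my actual setup:

```shell
#!/bin/bash
# ups_guard: shut down cleanly once the UPS runtime estimate gets low.
# The threshold is illustrative; tune it to your own calibration data.
MIN_MINUTES=15

# Pull the integer minutes remaining out of apcaccess output.
timeleft_minutes() {
    awk -F': ' '/^TIMELEFT/ {print int($2)}'
}

# Poll once a minute; below the threshold, halt the system.
watch_ups() {
    while sleep 60; do
        left=$(apcaccess | timeleft_minutes)
        if [ "${left:-0}" -lt "$MIN_MINUTES" ]; then
            echo "UPS runtime down to ${left:-0} minutes; shutting down." >&2
            shutdown -h now
            return
        fi
    done
}

if [ "${1:-}" = "watch" ]; then
    watch_ups
fi
```

Run it as `./ups_guard watch`; the guard at the bottom keeps the loop from firing unless you explicitly ask for it.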

Note the disparity between that moments-ago look at TIMELEFT and what I usually anticipate. It comes down to this workstation usually having a database server running when I am working, with databases of varying criticality, from the completely trivial to recreate, to a couple of others which are somewhat to vastly more likely to cause me potentially large problems in the event of data corruption.

It is those more critical databases which prevent me from running the db server at all times, even though there are ample system resources to do it, and it would be most convenient. See https://bugzilla.redhat.com/show_bug.cgi?id=1096484.

A bug in SELinux prevents a complete and clean shutdown of both the UPS and the workstation, which is my minimum requirement. I reported this in May, and there is no fix as of yet. It seems likely to also impact UPS hardware lifetime, as it can drain batteries completely flat. Which is another reason I wish it were fixed. Absent a fix, I manually start and stop the db server, which is not an adequate work-around.
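For the curious, that manual work-around amounts to something like the sketch below. It assumes systemd and a PostgreSQL unit (substitute your own database service), and the dry-run default, where commands are echoed rather than executed, is purely my illustrative device:

```shell
# db_session: bring the database up only while actually working, and
# stop it cleanly before any shutdown. The service name is illustrative.
# Defaults to echoing the commands; export RUN (set but empty) to run
# them for real.
RUN="${RUN-echo}"

db_session() {
    $RUN systemctl start postgresql    # db up for the working session
    # ... the actual work happens here ...
    $RUN systemctl stop postgresql     # flush and stop cleanly
}
```

Tedious, error-prone, and exactly the sort of thing that should not require human attention, which is why a real fix matters.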

Hardware lifetime issues aside, running databases on systems with unreliable power is a recipe for potentially disastrous results, which can make hardware expenses trivial in comparison. It is somewhat ironic that so much attention has been devoted to making cluster solutions robust in the face of node failure, while seemingly very simple things can fall through the cracks. Note that I said 'seemingly'. This might be a complicated issue. Worse yet, it might be complex.

But Wait, There's More

The next SELinux bug is https://bugzilla.redhat.com/show_bug.cgi?id=1130819. It's a bit weird in that when I tried to report a bug against policycoreutils-sandbox, Red Hat Bugzilla didn't recognize this as a valid component. More experienced bug reporters have doubtless run into this problem, but how to deal with it has not made it into anything that is easy to find.

My concern was that this is about Chrome: sand-boxing Google's Web browsing technology. Yes, Google has made much of sand-boxing as a native security technology, but skepticism is one of the traits of security people. First off, sand-boxing has a terrible track record. The technology is getting better, but it is not yet reliable in any context, and it has to do a very dangerous job: running foreign code in a sensitive environment.

It is appropriate to mention that Chrome was insecure from the day it launched, out of the blue, on 9/1/2008. As I reported at the time, it was based on an old and vulnerable version of WebKit, and sure enough, one day later ZDNet reported Google Chrome vulnerable to carpet-bombing flaw
http://www.zdnet.com/blog/security/google-chrome-vulnerable-to-carpet-bombing-flaw/1843. Uncritical, fannish attraction to any particular Web browser is something that really should be discussed in any modern security training program. So please do that.

There is Still More

So far, this has been about contributing to the community of Linux users. That is a useful thing to do, but there are other communities, such as professional organizations like ACM. I am not going there today, though I mentioned it earlier. It's complicated, the implications are important, and this is already getting into the area of a thousand words. That is quite enough for a lazy day.

Saturday, August 16, 2014

Early Saturday Morning

Starting at 0130. That's 1:30 AM for you people that don't use 24-hour clocks.

I have a habit of mentally filing nagging problems away, to sleep on them, so to speak. That involves obvious scheduling problems, as sometimes they get slept on for days or weeks. A less obvious problem is that sometimes a solution, or at least the next step toward a solution, prefers to wake me, rather than present itself in a nice orderly manner, when I wake up as usual.

I am fine with that, in that it feels like my subconscious just told my waking mind, “Allow me to surprise you with this delicious cookie.” However, eating the delicious cookie can be a lot of work. In this case, I didn't get a solution, but I did get the next step: five hours invested in writing some exploratory code, which looks promising, and I was at about a natural stopping point. So, it's a win, even though it has messed up my weekend a bit.

It's just as well that I was at a natural stopping point, because the sleep rule on my phone expired, and notifications happened. Most were private, or of no possible interest to you, or both. But before I go sit on the beach (it is Saturday morning, after all) I'd like to point to a G+ post from
which points to Top 10 mathematical innovations at 

The comments are interesting. I tend to agree with the first one. “This article takes a very narrow view of what "mathematics" means.” … “But this list virtually ignores the past 264 years.”

First off, the context is missing. Was this an innovation mostly important for the field of mathematics, the usefulness of the innovation to society, or what?

Geometry is not on the list, though non-Euclidean geometry is at #7. But geometry was important to ancient Egyptian civilization (building, surveying, etc.), ancient astronomy, etc.

Statistics is not on the list, though it has enabled huge advances in modern manufacturing, through statistical process control. It's a vital component of modern science and engineering. It has also enabled politicians and marketers to lie in innovative ways, to such an extent that I have come to believe that statistics should replace trigonometry, or at least be on offer, in US high-school education.

And now it is time to go sit on the beach for a bit. Have a great Saturday, or at least a cookie.