Wednesday, October 22, 2014

Bash: Useful, But Do Not Do Silly Things

One of the first priorities for vendors, security staff, and admins responding to Shellshock was sorting out the problems with Web servers. I don't think that was handled particularly well by vendors, but that would be a whole different post; this one is about software development.

Some developers routinely do some insanely dangerous things. Here's a Start Page search on 'web server written in bash'. Your results will vary with the time you hit that link, but I find some horrible text.

  • "Bash-httpd is a web server written in bash, the GNU bourne shell replacement. Why did you write it? Because I could. :)"
  •  "A web server written in bash. Contribute to bashttpd development by creating an account on GitHub."

No, I do not want to contribute to your project, nor do I want to create some random server in an entirely inappropriate language, just because I could. Because that would be FUBAR. This isn't a matter of discovering that your Web server calls bash behind the scenes and scrambling to recover from the problem; this is writing the whole damn thing in bash. To be fair, some of those sites may be plastered with text that says, roughly, "Don't deploy this, because that would be FUBAR." Because, security considerations aside, a network listener written in bash is going to be horribly inefficient.

Sadly, I have encountered at least one in the wild. It's a Real Thing, and I have to wonder how many of them also proudly proclaim it in HTTP response headers. Because, you know, sometimes it isn't enough to be horribly vulnerable. You have to be easily discoverable as well.

That may be insufficiently disturbing, so have a nice 4:22 minutes on YouTube.

As Usual, I Can't Say Much

Because saying "here is who I am working with, trying to fix their horribly exploitable systems" is not really the done thing. That is so very Duh, but some people seem to expect it. Not sorry to disappoint. While it would be nice to put up a couple of plots in the future, right now I will have to stick with some generalities.

As background, I have written and maintained some bash software that was entirely too large. That was an artifact of being involved with Linux since it was a New Thing. If you were only just moving onto the platform, and did not have a lot of in-house expertise, there was a large temptation to mandate that all code be written in bash: you had just spent training dollars so that your admins could maintain init scripts, etc. I've had some responsibility for writing supporting training doc, so I have a pretty good grasp of how that situation can evolve.

As per usual in the enterprise, inertia sets in, and code bases become bloated and difficult to change (much less rewrite in a more appropriate language) without serious effort. This is, of course, a common problem no matter what language is initially used, and it leads to creaky legacy systems and mounting maintenance costs. Nothing unusual here, save that shells are a worse starting point than usual.

This is still an easy trap for me to fall into, so don't let it bite you as well. My first instinct when exploring new log files (assuming they are text) is to go to a command line. The shell and native tools are a fast way to get an initial look at the problem, especially if you redirect or tee output into result files. You can understand the nature of the problem very quickly, and the trap is the ease with which you recycle those exploratory efforts into something long-lived. Boom: instant legacy code. It was never intended to be performant, and will now waste system resources roughly forever.
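The kind of throwaway pipeline I mean looks like this. The log file and its format are made up for illustration; the point is how naturally the shell produces a quick answer plus a tee'd result file that is one cron entry away from becoming legacy code.

```shell
# Fabricate a tiny access log (fields are illustrative: client, method, path, status).
printf '%s\n' \
  '10.0.0.1 GET /index.html 200' \
  '10.0.0.2 GET /admin 404' \
  '10.0.0.1 POST /login 500' > access.log

# Quick look: requests per status code, with a copy kept via tee.
awk '{print $4}' access.log | sort | uniq -c | sort -rn | tee status-counts.txt
```

Five minutes later you know the shape of the problem, and the temptation is to wrap exactly this in a script and schedule it forever.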

This is important. I first mentioned it in July, 2013, in We still fail at log analysis, which is about some 2010 results, but is still entirely relevant in 2014.

Are Code Analysis Tools the Answer?

There is no single answer, but they can certainly help. One tool that I use is purely home-brew, and evolved so long ago that I don't remember its origins. It certainly predates, probably by several years, this 2008 bug I filed against the kate editor in KDE: alerts.xml is poorly ordered, insufficient, and contains a bug. (The KDE team did a better fix than I requested, as they also alerted on 'deprecated', which I didn't think to include.)

The latest version of the tool reports on everything kate highlights, the number of comments, a lines-of-code count, and the rest of what you would expect. It walks scripts calling other scripts, and reports the number of chained files. Those last bits are important, because in larger bash code bases, scripts calling other scripts is common behavior. It's almost required for maintainability, but as soon as you do it, you are probably passing things around in the environment, and we have just had a painful lesson in where that can lead. So I have added a few things to it.
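A minimal sketch of the sort of metrics such a tool gathers. The real tool is home-brew and unpublished, so the file name and patterns here are illustrative; a real version would also recurse into each sourced file to count the whole chain.

```shell
# Create a toy script to measure (contents are illustrative).
script=demo.sh
cat > "$script" <<'EOF'
#!/bin/bash
# setup
. ./lib/common.sh
source ./lib/extra.sh
echo "hello"
EOF

total=$(wc -l < "$script")
# Crude comment count: lines starting with '#' (the shebang counts too).
comments=$(grep -c '^[[:space:]]*#' "$script")
# Chained files: lines invoking 'source' or the POSIX '.' builtin.
chained=$(grep -cE '^[[:space:]]*(source|\.)[[:space:]]' "$script")
echo "lines=$total comments=$comments chained=$chained"
```

Even this crude version makes the growth of chained scripts, and thus environment passing, visible over time.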

Overall, it is becoming my rough-and-ready guide to when something needs to be rewritten in a more appropriate language. But it suffers from a common weakness of home-brew tools: it won't alarm on things I would never do. Such as writing a network listener in bash.

Tuesday, October 14, 2014

Periodicity, or ShellShock, the Gift That Keeps on Giving

Oxford definition of periodicity: The quality or character of being periodic; the tendency to recur at intervals: the periodicity of the sunspot cycle.

Which is fine, so far as it goes, but we need to go a bit further.

In Linux-land, if we need something to occur periodically, with a high degree of certainty, we use the cron facility, which dates back to classic UNIX, and time out of mind. We also turn off any modifiers that add a random delay. Random delays can be a useful feature if we do not want, for instance, tens of thousands of systems all hitting a server at once. But sometimes strict timing is absolutely what we want. For instance, random delays can really mess with intrusion prevention systems that alarm on network traffic occurring outside of narrow windows. Well, they mess with the people who have to respond, anyway. The IPS itself, being software, does not care.
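Both cases are easy to show. A strict crontab entry like `0 2 * * * /usr/local/bin/nightly-job` fires at exactly 02:00; jittered scheduling (the kind anacron exposes via its RANDOM_DELAY setting, or that wrapper scripts add by hand) sleeps a bounded random amount first. A minimal bash sketch of the jitter wrapper, with illustrative names:

```shell
# Bounded random start-up jitter before a scheduled job (bash's $RANDOM).
max_delay=300                           # seconds of allowed jitter
delay=$(( RANDOM % (max_delay + 1) ))   # 0..max_delay inclusive
echo "would sleep ${delay}s before running the real job"
# Production form (commented out for the sketch):
# sleep "$delay" && exec /usr/local/bin/real-job
```

Turning jitter off is just deleting the sleep, which is why fleets drift into one behavior or the other without anyone deciding.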

Straight off, we have two things to periodically (terrible joke) think about.
  1. Strict periodicity
  2. Periodicity with random delay

There are at least two additional things we might want to consider. Let us get the more unusual (but tremendously useful) case out of the way first.

Suppose that we want something to happen n times in a time span of length l. Furthermore, we want the interval between event n and event n+1 to be unpredictable. If you can't imagine a use for such a thing, I invite you to consider Quality of Service (QoS), which can be driven into the code of distributed computation systems as well as into the contractual agreements that humans may be more familiar with. These can be couched in terms of a length of time (l), so being able to specify l, and the number of tests n you want to run in each l, is useful. We might also want n to vary, and to specify the allowed range of n. It's a hedge against cheating, and can yield better statistics. In software, you can carry this to extremes.
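One simple way to get exactly n unpredictable events in a span of l seconds is to cut the span into n equal slots and fire at a random offset inside each slot. A bash sketch, with all the numbers illustrative:

```shell
# Exactly n events in l seconds, one per slot, each at a random point
# inside its slot, so inter-event intervals are unpredictable.
n=4
l=3600
slot=$(( l / n ))
offsets=""
i=0
while [ "$i" -lt "$n" ]; do
  offset=$(( i * slot + RANDOM % slot ))   # random point inside slot i
  offsets="$offsets $offset"
  i=$(( i + 1 ))
done
echo "fire probes at seconds:$offsets"
# Each offset could then be handed to 'at', or to a sleep-based runner.
```

The slot scheme guarantees the count per span while keeping any single probe time unguessable, which is exactly the anti-cheating property you want in QoS testing.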

Now we have three items in our list.
  1. Strict periodicity
  2. Periodicity with random delay
  3. Periodicity with multiple random delays

To the best of my knowledge, there is no facility in any bog-standard OS that supports this, out of the box. That is a problem, because it has been coded, ad-hoc, innumerable times.

Now, let us add a fourth category. Suppose we want to do something at some point in the future, perhaps even repeatedly, but not on a periodic basis. I do this quite a bit. Some software package or OS is due to be updated, and I want to tell the system to get it at midnight.

In Unix or Linux, we have the 'at' facility for this. I can literally use the term 'midnight'. Or noon. Even teatime, though I don't have to limit myself to those times. I can queue the job months in advance. It's wonderful. I keep a text file of 'at' jobs. The second time I need to do something in the future, it goes into that file. My theory is that if I have needed to do it more than once, reviewing that file might remind me of something that needs to happen. It only takes a moment, and has saved me in the past. I even use 'at' jobs in that file to notify me of conferences. That is very much off-the-wall; sane people don't do that. But it is also a measure of the usefulness of the 'at' facility.
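The 'at' syntax really does accept words like midnight, noon, and teatime (16:00), plus forms like 'now + 3 days'; jobs are queued by piping commands in, 'atq' lists them, and 'atrm' removes one. The sketch below wraps the invocations in a function that only echoes them, so it runs without an at daemon; the queued jobs themselves are illustrative.

```shell
# Echo the 'at' invocations instead of running them, so no atd is needed.
queue() { echo "echo '$1' | at $2"; }

queue '/usr/bin/apt-get update && /usr/bin/apt-get -y upgrade' 'midnight'
queue '/usr/local/bin/fetch-release'                           'now + 3 days'
queue 'echo conference next week | mail -s reminder me'        'teatime'
# atq lists queued jobs; atrm <jobid> removes one.
```

Drop the wrapper and these are exactly the lines that go into my text file of 'at' jobs.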

The addition of 'at' gives us the following list.
  1. Strict periodicity
  2. Periodicity with random delay
  3. Periodicity with multiple random delays
  4. Aperiodic

And that is where the bash ShellShock vulnerability bites, yet again. It turns out that the fix for ShellShock broke 'at'.

Think about this. ShellShock was announced (it may, of course, have been known previously) on 9/24. OpenSSH sshd, mod_cgi and mod_cgid in Apache, and various DHCP clients were affected. At least three system calls (of which Linux has far too many) can be vulnerable. In severity, this is comparable to Heartbleed. But it wasn't until 10/4 or so, roughly nine days later, that a fix for a problem with a major Linux job-scheduling facility became available for at least one common Linux distribution.

I could go on and on about this; I have not even scratched the surface. But it is getting late, lunch never happened, and I am one hungry unit.

Thursday, October 2, 2014

A Brief Foray Into the Horrible

I try to stay out of the consumer side of security, for several reasons. Leading that parade is that consumer security is so truly FUBAR that it is difficult to know where to begin. One possible starting point was when a friend talked me into trying to help her friend, who was having continual problems with a business PC being hacked. It turned out that the source of this person's problems was running Kazaa peer-to-peer file sharing software (itself laden with adware) on a Windows machine, pirating every virus-ridden thing in sight, and not being interested in Not Doing Silly Things.

At a certain point, you have to think in terms of triage.

  1. some are in no immediate danger
  2. some can be saved if you act immediately
  3. some are doomed no matter what you do

This person was an obvious three.

I know of other people who simply buy a new PC when their current machine grinds to a halt as various bits of botnet malware fight for supremacy. In the meantime they are of course a menace to everyone else on the Internet. These people are also, collectively, threes.

Unfortunately, There is Some Bleed-Over

I once overheard a guy (with PCI-DSS in his job title) mention to another person (also working PCI-DSS issues) a colleague whose Internet Explorer start page was inexplicably pointing to some outlandish search site. Apparently neither of these people recognized that browser start page hijacking is a classic indication that your machine isn't yours any more.

That was a casual conversation between a couple of people walking past my cube. But it jerked my head out of whatever I was doing, and I found the guy they were talking about connected to a client network, chatting with the client about some problem they were having. Nor would he disconnect, despite my desperate hand-waving and other futile attempts to silently communicate that his machine was infested and he should not be connected to a client LAN. Though by that point, any damage was likely already done.

The site Security Officer (I was a mere consultant) had an office a very few steps away, so bursting into a meeting was enough to get the problem handled. Except that it turned out that there was no local experience with credential-stealing, etc. I don't know how it all worked out in the end. I suspect that nobody wanted to know.

This is Very Bad News

Four people were involved in this: the two having the conversation, the guy with the infected machine, and me. Only one had a clue, yet all were systems administrators, or specifically had 'security' or 'compliance' in their titles.

It has always been hard to find security people. It's hard to even define the term, given the breadth of the field. Reasonable people can argue either side of the question of whether or not PCI-DSS has been a failure, and that is, after all, a very narrow corner of the field. However, a certain amount of consumer-level security awareness is clearly lacking, even amongst those with security in their job description. So, at some point, I have to go there.

So, Changes

I'm hoping (probably with no prospect of success) to cheat a bit by doing a bit of rearranging of fubarnorthwest. It was always a bit strange for me to link to physics blogs instead of security blogs. There was a reason for doing that, but I never wrote the explanatory post(s), and without those it seems, well, insane. As a blogger, I suck. But my goal is to suck less, so those are going away.

In their place, I'm adding the first consumer-oriented security blog to the list. That would be Krebs on Security. Unlike me, Brian Krebs is a blogger who does not suck. I have mentioned him before, in Java Security Revisited--Part 1 and You Can Pre-order Krebs' Spam Nation Now.

There will be other changes.

About that Credential-Stealing Thing

Pony, to take a common example, is a piece of malware still classed as a downloader: something used to fetch malicious payloads onto a compromised machine. It is also a product, albeit one produced by the Bad Guys. As such, features were added, and by 2012 it was also quite the accomplished credential-stealer for Windows. It has become far more powerful since, adding crypto-currency capabilities and much else. Looking back into my notes, I would like to present a list of the Windows software that Pony could steal credentials from, as of 2012. There were likely others even then, there are certain to be more now, and of course this is only one piece of malware amongst many.

32bit FTP
Bromium (Yandex Chrome)
BulletProof FTP
Chromium / SRWare Iron
CoffeeCup FTP / Sitemapper
CoffeeCup Visual Site Designer
Comodo Dragon
Directory Opus
Easy FTP
FAR Manager
FastStone Browser
FreeFTP / DirectFTP
Frigate3 FTP
FTP Commander
FTP Control
FTP Explorer
FTP Surfer
FTP Voyager
Global Downloader
Google Chrome
Internet Explorer
Notepad++
Odin Secure FTP Expert
System Info
Total Commander

Managing high-surety systems from lower-surety systems is an idea assembled from 100% FAIL. But if you must do this, being able to spot at least the most blindingly obvious indicators of compromise is a skill you need to have.

Wednesday, October 1, 2014

Never Trust People Who Make Blanket Statements

To forestall counter-rants, that title is technically termed 'Delicious Irony'.

There are HTML entity names and numbers defined for some tiny little things in circles: &reg; or &#174; for Registered Trademark, &copy; or &#169; for Copyright. They don't seem to work here, at least in Preview, which is one more argument against using this environment, though there is likely some secret sauce you can apply if you are willing to be locked in. That was probably worth a small amount of snark, but I have to blow it off in favor of The Greater Snark.

What we could really use is a capital I in a tiny circle, defined as Irony. Because people seem to have a huge problem with recognizing it, even when they share the same language and cultural background, and even if it squats on their heads and barks. I have no definitive idea of why this is so, though I tend to think that John Scalzi has shown more than a bit of insight on the subject.

On to Serious Security Stuff

Because I really do have a point to make. Two, actually.

Months ago, I ran across an article focused on 'we all love JavaScript': node.js, ubiquitous tooling on either side of the connection, love, love, love. 'We all' should have sent up a lot of warning flags, perhaps to the extent of 'I can stop reading now.' It is so horrendously hard to stay informed, in the current security landscape, that reasons to stop reading may be more useful than reasons to keep reading.

First Point

This is a language created in 1995 that contained a Y2K bug. It was a very silly time to create a language with short versus long dates. It was a time when most Web sites were entirely static, Java applets could not be effectively downloaded over the average bandwidth of the day, and the quest for interactivity was on. The very name JavaScript, which has no relation to Java, was all about marketing to this desperate audience. There is another rant in the works about marketing; I'll change this to a link when I post it.

Regarding Y2K: this was not nearly such a non-event as people (and trade press) who rate everything on an Internet Drama scale seem to think. The fact that Y2K had very little effect was more a measure of the vast resources expended on fixing the problems, and how effective those measures were. It was a huge win, but lacked Internet Drama, so it is now widely regarded as hype. Nothing could be further from the truth.

Second Point

The existence of JavaScript: The Good Parts.

The capsule description reads:
"Most programming languages contain good and bad parts, but JavaScript has more than its share of the bad, having been developed and released in a hurry before it could be refined. This authoritative book scrapes away these bad features to reveal a subset of JavaScript that's more reliable, readable, and maintainable than the..."

O'Reilly has arguably done huge damage to Internet security from their beginnings, when they popularized the term 'LAMP': Linux, Apache, MySQL, and PHP. The latter two components have not been, shall we say, filled with bliss over the years. But love them or hate them, they do publish important titles, and this was one of them.

So, Is Everything FUBAR?

In broad strokes, yes. Things are generally FUBAR, in the general case of the overall security landscape. It has never really been otherwise. This is not necessarily true in every case.

There are instances where large JavaScript libraries are deployed, unvetted, for no better reason than skinning a Web site. I will note that the ability to choose your color scheme seldom has anything to do with color-blindness issues, which would at least be a usability win for a surprisingly (to me, at least) common problem. OTOH, other libraries are deployed for reasons far more important than skinning (think financial institutions), and where vetting is just not done. The median is probably somewhere around MathJax, which is non-frivolous, is not widely deployed in sensitive consumer-facing applications, and is just cool as hell.

But history demands that we presume the worst case, and we need rock-solid analysis tools, the output of which we can walk up the management approval loop.

To Return to the Theme

Blanket statements are deserving of suspicion. They are probably a good reason to stop reading any Internet content, whether from a mainstream news outlet or social media. If you see statements beginning with, for example

{everyone|no one|we all}

and ending with (again, for example)


there is likely to be a problem with the content. It may be a simple lack of critical thought, but it could also be the advancement of a hidden agenda, for corporate, political, or other purposes. Propaganda, IOW. Marketing. Or perhaps you are only paying attention to fora exclusively populated by people who believe exactly as you do. Which is the group-think problem, taken to the limit, and one of the problems that the Internet has delivered to all of us.