Wednesday, October 22, 2014

Bash: Useful, But Do Not Do Silly Things

One of the first priorities for vendors, security staff, and admins responding to Shellshock was sorting out the problems with Web servers. I don't think that was handled particularly well by vendors, but that would be a whole different post; this is about software development.

Some developers routinely do some insanely dangerous things. Here's a StartPage search for "web server written in bash". Your results will vary with the time you hit that link, but I find some horrible text.

  • "Bash-httpd is a web server written in bash, the GNU Bourne shell replacement. Why did you write it? Because I could. :)"
  •  "A web server written in bash. Contribute to bashttpd development by creating an account on GitHub."

No, I do not want to contribute to your project, nor do I want to create some random server in an entirely inappropriate language, just because I could. Because that would be FUBAR. This isn't a matter of discovering that your Web server calls bash behind the scenes and scrambling to recover from the problem; this is writing the whole damn thing in bash. To be fair, some of those sites may be plastered with text that says, roughly, "Don't deploy this, because that would be FUBAR." Because, security considerations aside, a network listener written in bash is going to be horribly inefficient.

Sadly, I have encountered at least one in the wild. It's a Real Thing, and I have to wonder how many of them also proudly proclaim it in HTTP response headers. Because, you know, sometimes it isn't enough to be horribly vulnerable. You have to be easily discoverable as well.

That may be insufficiently disturbing, so have a nice 4:22 minutes on YouTube.



As Usual, I Can't Say Much


Because "I am working with example.com, trying to fix their horribly exploitable systems" is not really the done thing. That is so very Duh, but some people seem to expect it. Not sorry to disappoint. While it would be nice to put up a couple of plots in the future, right now I will have to stick with some generalities.

As background, I have written and maintained some bash software that was entirely too large. That was an artifact of being involved with Linux since it was a New Thing. If you were only just moving onto the platform, and did not have a lot of in-house expertise, there was a large temptation to mandate that all code would be in bash. You had just spent training dollars so that your admins could maintain init scripts, etc. I've had some responsibility for writing supporting training docs, so I have a pretty good grasp of how that situation can evolve.

As per usual in the enterprise, inertia sets in, and code bases become bloated and difficult to change (much less rewrite in a more appropriate language) without serious effort. This is, of course, a common problem no matter what language is initially used, and it leads to creaky legacy systems and mounting maintenance costs. Nothing unusual here, save that shells are a worse starting point than usual.

This is still an easy trap for me to fall into, so don't let it bite you as well. My first instinct when exploring new log files (assuming they are text) is to go to a command line. The shell and native tools are a fast way to get an initial look at the problem, especially if you redirect or tee output into result files. You can understand the nature of the problem very quickly, and the trap is the ease with which you recycle those exploratory efforts into something long-lived. Boom: instant legacy code. It was never intended to be performant, and will now waste system resources roughly forever.
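The pattern I am describing looks something like this; the log file and its contents are fabricated here so the pipeline has something to chew on, but the shape is the classic exploratory one-liner:

```shell
# Fabricate a tiny sample log, standing in for a real access log.
cat > access.log <<'EOF'
10.0.0.1 - - [22/Oct/2014] "GET / HTTP/1.1" 200
10.0.0.2 - - [22/Oct/2014] "GET /a HTTP/1.1" 404
10.0.0.1 - - [22/Oct/2014] "GET /b HTTP/1.1" 200
EOF

# Hits per client IP, busiest first, with tee capturing the
# result into a file for later comparison.
awk '{ print $1 }' access.log | sort | uniq -c | sort -rn | tee top_clients.txt
```

Nothing wrong with that for a first look. The trap is cron-ing it, then growing it, until it has quietly become a load-bearing "tool".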

This is important. I first mentioned it in July, 2013, in We still fail at log analysis, which is about some 2010 results, but is still entirely relevant in 2014.

Are Code Analysis Tools the Answer?


There is no single answer, but they can certainly help. One tool that I use is purely home-brew, and evolved so long ago that I don't remember its origins. It certainly predates (probably by several years) this 2008 bug I filed against the kate editor in KDE: alerts.xml is poorly ordered, insufficient, and contains a bug (the KDE team did a better fix than I requested, as they also alerted on 'deprecated', which I didn't think to include).

The latest version of the tool reports on everything kate highlights, the number of comments, a lines-of-code count, and the rest of the things you would expect. It walks scripts calling other scripts, and reports the number of chained files. Those last bits are important, because in larger bash code bases, scripts calling other scripts is common behavior. It's almost required for maintainability, but as soon as you do it, you are probably passing things around in the environment, and we have just had a painful lesson in what that can lead to. So I have added a few things to it.
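That environment plumbing is worth making concrete. A minimal sketch, with hypothetical script and variable names: the child script never declares the variable, it just inherits it, and this inheritance channel is exactly what Shellshock abused (bash importing function definitions from the environment).

```shell
# child.sh: a hypothetical downstream script that silently depends
# on a variable exported somewhere up the call chain.
cat > child.sh <<'EOF'
#!/bin/sh
echo "child sees LOGDIR=${LOGDIR}"
EOF
chmod +x child.sh

# Parent side: this export is an implicit, undocumented contract
# with every script it ever calls, directly or indirectly.
export LOGDIR=/var/log/myapp
./child.sh   # prints: child sees LOGDIR=/var/log/myapp
```

Multiply that by dozens of chained scripts and the environment becomes an untyped, unversioned global API, which is why counting chained files is worth the trouble.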

Overall, it is becoming my rough-and-ready guide to when something needs to be rewritten in a more appropriate language. But it suffers a common weakness of home-brew tools: it won't alarm on things I would never do. Such as writing a network listener in bash.
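For the flavor of what such a report covers, here is a minimal sketch of the kinds of counts involved. This is not the actual tool, just a toy run against a fabricated sample script:

```shell
# A fabricated script to audit.
cat > sample.sh <<'EOF'
#!/bin/bash
# setup
. ./lib/common.sh
source ./lib/extra.sh
echo "hello"
EOF

total=$(grep -c '' sample.sh)                    # every line
comments=$(grep -c '^[[:space:]]*#' sample.sh)   # comment (and shebang) lines
# Chained files: lines that source another script.
chained=$(grep -Ec '^[[:space:]]*(source|\.)[[:space:]]' sample.sh)
echo "lines=${total} comments=${comments} chained=${chained}"
# prints: lines=5 comments=2 chained=2
```

A real version has to chase the sourced paths recursively, which is where the chained-file count earns its keep.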











