Monday, August 25, 2014

Optimize For the Exploration of Ideas

This applies to office/lab/shop/whatever design and maintenance, and the idea is simple:
minimize the time and effort overhead in moving between the inception of an idea and starting to develop it.

We all have different tolerances for this, and it varies by the value of the idea, and the immediate impact it has. In my case, sometimes a Post-It note will do. Other times, I need to start Right Now, when my brain is on a fast boil.

I have found those fast-boil cases to be not only the most useful, but the most fun. I ignore those famous quotes about a clean desk being a sign that you aren't doing anything interesting or useful. 

Being unable to find either a tool or information, having to clear either a physical or electronic work space, or anything similar to those things, can be a huge loss. Staying organized is a constant battle for me, but every minute or dollar I have ever spent in the effort has proven to be a useful investment. Seriously. In my experience, there have been no exceptions to this rule, even when the time, effort, or money required has been large. The times when the resource expenditure required to clean things up is largest are exactly the times when you get the most benefit from getting things sorted.

I want to be able to find a file (physical or electronic), a drill (as in the physical tool), or space to deploy a new instance of a server or application, Right Now. I optimize for the exploration of ideas.

Friday, August 22, 2014

Non-Slacker Friday--While Stupid and Lazy

This morning I woke up with a room-temperature IQ. Heh, it happens to the best of us, so it certainly happens to me as well. Luckily I've also been incredibly lazy all morning, which has (hopefully) prevented me from energetically doing stupid things. This has nothing to do with Friday, per se, as I have no traditional fixed schedule in which a Monday through Friday work week is followed by a weekend. Like everyone else in IT I am, for better or worse, interrupt-driven. This is hardly limited to security workers.

So, that whole waking-up-stupid-and-lazy thing is more in the nature of something that just happens now and then; I form no hypothesis as to why. Still, there are a couple of things I can write up that (again, hopefully) do not require a great deal of thought. Both are related to community service, which of course takes many forms, from the purely physical, such as my telling a neighbor yesterday that there was a family of river otters (two adults, three kits) playing in the Willamette River behind casa de Greg (great fun to watch—search YouTube if you doubt it), to the virtual or professional.

Upon those River Otters hangs both a tail (I thought they might be beavers until I saw one) and a tale. The tale is about virtual and/or professional communities, databases, SELinux, and how I came to see them. It goes like this.

Very early Wednesday morning, I had a rare summer power outage. Given the timing, and the number of sirens I heard a short time later, it seems likely that someone hit a power pole. This wasn't an immediate problem, as I was on a Linux workstation protected by an APC UPS. Calibration data and a bit of testing led me to expect between 30 and 40 minutes of life, under reasonable loading, to save work, write whatever notes were required to maintain mental state, and do a clean shutdown if necessary. Given my power-pole hypothesis, a shutdown seemed likely to be necessary, and I could track UPS state as remaining time faded via a trivial bash script.

$ cat apcrpt
#!/bin/bash
# apcrpt: Quick look at the APC UPS.
date
apcaccess | egrep "(STATUS|LOADPCT|TIMELEFT)"

Here is a sample run, taken a moment ago, under a very light load:
$ apcrpt
Fri Aug 22 10:45:24 PDT 2014
LOADPCT : 8.0 Percent
TIMELEFT : 89.5 Minutes
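Eyeballing TIMELEFT works, but the same idea could be automated. Here is a hypothetical watchdog sketch, not my actual setup; the threshold and function names are assumptions for illustration:

```shell
#!/bin/bash
# Hypothetical watchdog sketch: poll apcaccess and shut down cleanly
# once TIMELEFT drops below a threshold. Threshold and names are
# assumptions, not part of the original apcrpt script.

THRESHOLD_MIN=15   # minutes of battery to keep in reserve

# Extract the numeric minutes from an apcaccess TIMELEFT line,
# e.g. "TIMELEFT : 89.5 Minutes" -> "89.5".
timeleft_minutes() {
    awk -F': *' '/TIMELEFT/ {print $2+0}'
}

# Succeed (exit 0) when remaining minutes are below the threshold.
below_threshold() {
    awk -v v="$1" -v t="$THRESHOLD_MIN" 'BEGIN {exit !(v + 0 < t)}'
}

# The polling loop itself, commented out so the helpers above can be
# exercised in isolation:
# while sleep 60; do
#     left=$(apcaccess | timeleft_minutes)
#     if below_threshold "$left"; then
#         logger "UPS time low (${left} min); shutting down"
#         shutdown -h now
#     fi
# done
```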

When I became uncomfortable with remaining time I shut the system down, and walked down to the river. Hence River Otters and, as luck would have it, turning an annoyance into a Very Cool Thing.

Note the disparity in that moments-ago look at TIMELEFT and what I usually anticipate. It comes down to this workstation usually having a database server running when I am working, with databases being of varying criticality, from the completely trivial to recreate, to a couple of others which are somewhat to vastly more likely to cause me potentially large problems in the event of data corruption.

It is those more critical databases which prevent me from running the db server at all times, even though there are ample system resources to do it, and it would be most convenient.

A bug in SELinux prevents a complete and clean shutdown of both the UPS and the workstation, which is my minimum requirement. I reported this in May, and there is no fix as of yet. It seems likely to also impact UPS hardware lifetime, as it can drain batteries completely flat. Which is another reason I wish it were fixed. Absent a fix, I manually start and stop the db server, which is not an adequate work-around.

Hardware lifetime issues aside, running databases on systems with unreliable power is a recipe for potentially disastrous results, which can make hardware expenses trivial in comparison. It is somewhat ironic that so much attention has been devoted to making cluster solutions robust in the face of node failure, but seemingly very simple things can fall through the cracks. Note that I said 'seemingly'. This might be a complicated issue. Worse yet, it might be complex.

But Wait, There's More

The next SELinux bug is a bit weird: when I tried to report a bug against policycoreutils-sandbox, Red Hat Bugzilla didn't recognize it as a valid component. More experienced bug reporters have doubtless run into this problem, but how to deal with it has not made it into anything that is easy to find.

My concern was that this is about Chrome: sand-boxing Google's Web browsing technology. Yes, Google has made much of sand-boxing as a native security technology. But skepticism is one of the traits of security people. First off, sand-boxing has a terrible track record. The technology is getting better, but it's not yet reliable in any context, and it has to operate in a very dangerous situation: running foreign code in a sensitive environment.

It is appropriate to mention that Chrome was insecure from the day it launched, out of the blue, on 9/1/2008. As I reported at the time, it was based on an old and vulnerable version of WebKit, and sure enough, one day later ZDNet reported "Google Chrome vulnerable to carpet-bombing flaw". Uncritical, fannish attraction to any particular Web browser is something that really should be discussed in any modern security training program. So please do that.

There is Still More

So far, this has been about contributing to the community of Linux users. That is a useful thing to do, but there are other communities, such as professional organizations like ACM. I am not going there today, though I mentioned it earlier. It's complicated, the implications are important, and this is already getting into the area of a thousand words. That is quite enough for a lazy day.

Saturday, August 16, 2014

Early Saturday Morning

Starting at 0130. That's 1:30 AM for you people that don't use 24-hour clocks.

I have a habit of mentally filing nagging problems away, to sleep on them, so to speak. That involves obvious scheduling problems, as sometimes they get slept on for days or weeks. A less obvious problem is that sometimes a solution, or at least the next step toward a solution, prefers to wake me, rather than present itself in a nice orderly manner, when I wake up as usual.

I am fine with that, in that it feels like my subconscious just told my waking mind, “Allow me to surprise you with this delicious cookie.” However, eating the delicious cookie can be a lot of work. In this case, I didn't get a solution, but I did get the next step: five hours invested in writing some exploratory code, which looks promising, and I was at about a natural stopping point. So, it's a win, even though it has messed up my weekend a bit.

It's just as well that I was at a natural stopping point, because the sleep rule on my phone expired, and notifications happened. Most were private, or of no possible interest to you, or both. But before I go sit on the beach (it is Saturday morning, after all) I'd like to point to a G+ post which in turn points to Top 10 mathematical innovations.

The comments are interesting. I tend to agree with the first one. “This article takes a very narrow view of what "mathematics" means.” … “But this list virtually ignores the past 264 years.”

First off, the context is missing. Was this an innovation mostly important for the field of mathematics, the usefulness of the innovation to society, or what?

Geometry is not on the list, though non-Euclidean geometry is at #7. But geometry was important to ancient Egyptian civilization (building, surveying, etc.), ancient astronomy, etc.

Statistics is not on the list, though it has enabled huge advances in modern manufacturing, through statistical process control. It's a vital component of modern science and engineering. It has also enabled politicians and marketers to lie in innovative ways, to such an extent that I have come to believe that statistics should replace trigonometry, or at least be on offer, in US high-school education.

And now it is time to go sit on the beach for a bit. Have a great Saturday, or at least a cookie.

Friday, August 15, 2014

Non-Slacker Friday: More Thoughts on Continuous Audit

Back in January I posted
SSH keys: An Argument for Continuous Audit. I hope you take a moment to read that (only 467 words) post.

However, that was a very specific example of the usefulness of continuous audit, and I would like to generalize that a bit.

For a number of years I've written software (fp: fingerprint) that attempts to evaluate the security posture of Linux systems. That has mainly been focused on the Red Hat family, with a digression into SLES. It has recently come to my attention that some truly ancient versions of fp are still being run, long after they should have been updated, or simply retired.

Admins should have been shouting about this, and likely were, as some were extremely professional. Be that as it may, there is fp code, still in use, that does things like call 'ifconfig', which ships with the net-tools package. If I query a recent version of this package, I get:

The net-tools package contains basic networking tools,
including ifconfig, netstat, route, and others.
Most of them are obsolete. For replacement check iproute package.

That is just one example; there are many more in some of these ancient versions of fp, pertaining to storage, sudo configuration, SELinux policies, modern pre-linking, etc. It is illustrative in that this stale software cannot accurately record bridges, network devices which are configured but may be up or down, IPv6, etc., when run against recent versions of the OS. Given the numbers of enterprises which have been breached via network connections that had fallen through the monitoring cracks, that is important.
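As a concrete illustration of what ifconfig-era checks miss: plain ifconfig hides interfaces that are administratively down, while the iproute tools list every device, including bridges. A minimal sketch, where the function name and the sample input are my own illustration:

```shell
# List devices that are configured but DOWN, parsing 'ip -o link'
# output. The one-line-per-device format looks like:
# "2: eth0: <BROADCAST,MULTICAST> mtu 1500 ... state DOWN ..."
down_devices() {
    awk -F': ' '/state DOWN/ {print $2}'
}

# Typical use on a live system:
# ip -o link show | down_devices
```

Similar one-liners cover the other stale calls: `ip route show` replaces `route -n`, and `ss -tlnp` replaces `netstat -tlnp`.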

Some of this old code is running in PCI-DSS environments, which represents an enormous risk that may not have been correctly evaluated. For a possibly worst-case scenario, you might read
Target Provides Preliminary Update on Second-Quarter Expenses Related to the Data Breach and Debt Retirement. I say 'possibly' because greater losses are only a matter of time, if history provides any guidance. It is also useful to realise that this 2013 disaster is on-going, nine months or so later.

Am I Just Peddling My Services?

I have no idea, and am acutely aware that this is not an ideal business plan. It is certainly a component, as we all have to make a living, and consulting does that. On the other hand, I occasionally become so frustrated at the whole sorry state of affairs that I am tempted to open source all of the fp and hardening code. I am not reaching enough people to make a difference, and making a difference matters. Another benefit would be associating with even more great people. That matters too.

Log Analysis (Still) Matters

fp and harden have always written log files, and all but the earliest versions have been able to store data in a manner that was amenable to centralized analysis. This has never been perfect, but it seemed to me that it met a certain standard of usefulness. I won't quote statistics here, because I have already done that, in July, 2013. See We still fail at log analysis.

But I will agree with common knowledge in the security community: logs are written, but seldom read or analyzed.

I would go one step further: they are even more seldom verified, and analysis almost never includes tests for sensitive information that might be written by programs executing in a 'debug' mode. That last bit is often exacerbated by log files often being poorly protected.
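A minimal sketch of the kind of check that almost never gets run: scan a log file for strings suggesting debug-mode leakage of secrets. The patterns here are illustrative, not exhaustive, and the function name is my own:

```shell
# Scan a log file for candidate secret-leakage lines, printing the
# line number and matching line (grep -n), case-insensitively.
scan_log() {
    grep -Ein 'password=|passwd=|secret=|BEGIN (RSA|DSA) PRIVATE KEY' "$1"
}
```

Anything a check like this turns up is a candidate for both redaction and a review of the log file's permissions.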

Sunday, August 3, 2014

351 Words of Reasonable Text Written Today

And it took essentially all of Sunday to do it. Mostly that was about making certain that the results I was attempting to document were reproducible, and on what systems.

At some point I am going to have to publish some sort of About This Blog post. Is it a Dear Diary sort of Day-in-the-life sort of thing? I am leaning that way, because I am not really planning to make a living as a casual writer, in security or any other field. To do that you would have to be a really talented writer, which I am not. Duh.

That said, I am capable of spending a day crafting the first 351 words of something that I believe to be both important and closely-reasoned. Is that valuable, in and of itself? No. For example, contributors to tech books famously make nothing, directly. It's more about marketing a name as a brand, a line item on a resume, etc. Which is not to say that those authors are not doing tremendously useful work.

However, readers should probably realize that there are things going on that you will never read about here; fubarnorthwest makes little financial sense.