Tuesday, May 27, 2014

Are Single Pane of Glass Management Interfaces Really Possible?

They are widely touted, almost to the point of buzz-phrase du jour in some circles, which naturally makes them suspect to me. There is ample evidence of past failure in configuring systems, or the services running on them, via a GUI, where there may be hundreds of options, and the optimal choice for many depends on the configuration of connected systems. Simplistic host-based firewall configuration tools are a good example, though there are many others, such as Samba configurators.


There have been so many examples of failure over the years that I wrote it off to George Miller's magic number 7, and decided that this was not really possible. Some recent exploratory work I did in writing configuration tools for another project (not something I can talk about just now) convinced me that this was an error, within certain bounds, and that it was something I needed to explore. Then, in the course of writing up some project doc, which is often a rather involved process, I found that the idea held up under the closer scrutiny that writing forces.

So, yes. Possible. But possible does not equate to easy, and any sane discussion of difficulty is always about context. Let's stay with that host-based firewall example. How do we offer guidance on how ICMP might be used by an adversary to characterize network topology and the security posture of hosts within that network, given that the set of ICMP messages which should be allowed is sensitively dependent on the nature of the network?

'Wizard' approaches have famously failed at even simple tasks--search on the roundly-hated Clippy. Nevertheless, an advisory system of some sort would seem to be a basic requirement of any commercially viable software, though it still seems unlikely to supplant domain knowledge: network admins who thoroughly know their networks. Such an advisory system would have to be network-aware, rules-weighted, and testable. That 'testable' bit is particularly hard to do in this context, and if it isn't testable, it is of unknown reliability.
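If that seems hand-wavy, here is a minimal sketch of what I mean, in Python. Everything in it--the NetworkContext fields, the weights, the rules themselves--is invented for illustration, not drawn from any real product; the point is only that a rules-weighted advisor over an explicit network description is an ordinary, testable program.

    # Hypothetical rules-weighted advisory sketch. All names and weights are
    # illustrative; a real system would need far richer network context.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class NetworkContext:
        internet_facing: bool       # does this host sit at the border?
        needs_pmtud: bool           # is Path MTU discovery relied upon?
        internal_monitoring: bool   # does an ops team ping hosts?

    @dataclass
    class Rule:
        icmp_type: str
        weight: float               # confidence in the advice, 0..1
        applies: Callable[[NetworkContext], bool]
        advice: str

    RULES: List[Rule] = [
        Rule("echo-request", 0.9,
             lambda net: net.internet_facing,
             "Drop inbound echo-request at the border; it aids topology mapping."),
        Rule("echo-request", 0.7,
             lambda net: net.internal_monitoring,
             "Allow echo-request internally; monitoring depends on it."),
        Rule("fragmentation-needed", 0.95,
             lambda net: net.needs_pmtud,
             "Allow type 3, code 4; blocking it silently breaks PMTU discovery."),
    ]

    def advise(net: NetworkContext, threshold: float = 0.5) -> List[str]:
        """Return applicable advice for this network, strongest first."""
        hits = [r for r in RULES if r.weight >= threshold and r.applies(net)]
        return ["[%.2f] %s: %s" % (r.weight, r.icmp_type, r.advice)
                for r in sorted(hits, key=lambda r: -r.weight)]

    print("\n".join(advise(NetworkContext(True, True, True))))

Because advise() is a pure function of the network description, it can be exercised by ordinary unit tests, which is exactly the 'testable' property GUI wizards have historically lacked.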

You might think of such an advisory capability as an expert system, or an AI, depending on your background. One thing is certain: this would not be an easy system to create. Development costs would scale polynomially (not really exponentially, though the difference seems unlikely to matter in practice) with capability, in a classic combinatorial explosion.
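To put a rough number on that: if any pair or triple of n configuration options can interact, an advisor must reason about on the order of n^2 or n^3 cases. A quick back-of-envelope in Python--this is bare combinatorics, not a measurement of any real system:

    from math import comb  # Python 3.8+

    # Count the pairwise and three-way option interactions an advisor
    # might have to consider as the option count n grows.
    for n in (10, 50, 100, 500):
        print("%4d options: %9d pairs, %12d triples" % (n, comb(n, 2), comb(n, 3)))

One hundred options already yield 4,950 pairwise and 161,700 three-way interactions; polynomial, as claimed, but quite bad enough.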

It seems likely, then, that reasonably effective 'Single Pane of Glass' management interfaces are indeed possible, at least over a narrow scope. However, expect inescapable constraints on breadth of coverage and quality of suggestions (these are baked in by the mathematics), and, to a much lesser degree than I had previously thought, on ease of use. Interfaces which promise more breadth of coverage seem likely to disappoint at a rate which grows polynomially with the scope of their claims.

While I see no evidence of emerging Magical Admin Tools, it seems probable that the ease-of-use barriers to configuration systems are surmountable via well-crafted software. Scope will be key, and, as always, careful evaluation of proclaimed capabilities, tested against your actual needs, is indicated.



Monday, May 26, 2014

Bogus Vendor Graphics

Warning: this is just a log, in the spirit of What I Did Today. Just messing around a bit on a morning of a long weekend.

The background was that I saw yet another impressive graphic that spelled yet more security-related doom and gloom. I am, at best, only cautiously optimistic about the future of {information | systems | personal} security.

That said, this graphic (from a vendor) did not survive even casual long-weekend-morning-with-coffee-and-bagel thought about how the data used in creating it might be biased. I counted eleven issues, and that was with no knowledge of their methodology, because that wasn't described, other than in marketing terms.

Out of curiosity, I took an entirely unrelated data set, accumulated from March 2010 to the present, and attempted to duplicate the spirit of this graphic. I came way closer than I would have liked, because I know that data set has at least 20 sources of bias.

How do I know that? Because I accumulated the data myself, and thought about sources of error as I did it; the effort involved, over several years, is measured in hundreds of hours. The data source was local bird counts.

You might want to keep this in mind in your next purchasing decision.



Thursday, May 22, 2014

How Did the Peter Neumann Webinar Turn Out?

Overall, it was entertaining. Peter Neumann has been doing this since 1985 or so, so he does have perspective, and that is important.

One point that he made was that nothing much had changed since the 1985 Risks Digest. This was one bit where the talking head format failed versus the old audio-and-slides format. No insult intended--those were the terms used in the survey questions presented after the presentation. As little as a single figure, describing a modern classification of what was seen in 1985, could have transformed anecdote into evidence. That would have been a very useful thing.

There were several subsidiary points. Formal methods in cryptography are a huge win. Most software was fully capable of falling over on its own, without manipulation by an attacker. We still lack any robust means of reasoning about large systems, because we have no useful theory of composability.

It was about then (45 minutes in) that his Comcast connection fell over for about six minutes. As he had already spoken about networking and cloud, this was Delicious Irony. But still a bummer, because I was hoping to hear more about that hugely important topic.

Ideally, hearing people with this sort of perspective forces you to think about the foundations of your field. In my case, it led to a bit of Web searching, thinking about why, in 2014, so many software systems are so horrible. In some cases, for large governmental systems, we already know the reasons, but keep falling into the same traps. As an Oregonian, I have to point to the Oracle debacle which wasted over a hundred million dollars.

But, in the greater scheme of things, this is a side issue. We know how not to make these mistakes, even if avoiding them sometimes becomes lost in a bureaucracy, at great public expense.

We do not, on the other hand, have a great idea of how to write software that does not suck. Peter does offer one anecdote (yes, it is a howler) of one James Gosling showing up at SRI, and proclaiming that there is this new language, called Java, that makes insecure coding impossible. We all know how that turned out.

Wandering around the Web, thinking about foundations, I found something interesting. JD Glaser, a LAMP stack games developer, in a guest post on the WhiteHat Security Blog (a recommended source), indicts the tech book trade press for teaching insecure coding. Educators might find problems with his arguments. Perhaps *always including secure examples* might obscure the point of a lecture. But there is no denying that a huge population of coders depend upon tech books from Wrox, O'Reilly, etc.
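To make the shape of his complaint concrete, here is a sketch in Python, with the standard library's sqlite3 standing in for the PHP-and-MySQL of a typical LAMP tutorial. The string-built version is the one-liner the books tend to print, because it is short; the safe version is barely longer:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # attacker-controlled

    # Insecure: user input is spliced directly into the SQL text.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % user_input).fetchall()
    print("string-built query matched:", rows)   # matches every row

    # Secure: a placeholder keeps the data out of the SQL grammar.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized query matched:", rows)  # matches nothing

The parameterized form is exactly the habit Glaser says the trade press fails to instill.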

Read his arguments. Remarks from 1985 are very clear on one point--we need to up our game.


Friday, May 16, 2014

Friday Frivolity: Throwing Out Tech Books

I have about 40 feet of shelf space available here at Casa de FUBAR, about half of which is devoted to tech books. Before you even ask: yes, I have books on ACM and O'Reilly Safari.

This is not enough shelf space. I really hate to throw rather expensive tech books out, especially as some of those decisions may come back to haunt me, given what I do. There is a lot of legacy stuff out there in the real world, and an old copy of Sendmail Administration (Sendmail? Argh.) might be just what I need, out of the blue, next Wednesday.

One of the things I have found myself doing is bookmarking book reviews to decide whether to buy a title, and deleting them on purchase. Done deal, purchase made, and the bookmark collection is quite large enough already, thank you very much. But that turns out to be more stupid than a very stupid thing, so I don't do that any more.

Now I run on two rules.

  1. As I look through my shiny new purchase, I evaluate it against those reviews. Essentially, this is reviewing the reviewer. Once burned, twice wary.
  2. Does it suck, but you still have to keep it for business reasons? Decide fast, because most tech books become obsolete at the speed of tech change. If you have to keep it, but it doesn't deserve shelf space, archive it (under an index system). If it doesn't even deserve that, sell it on; someone else will have a different opinion.

If you cannot even sell it on, grit your teeth and recycle it. Price and value are entirely unrelated, and the wrong call on that $65 book is not going to become more sensible as that shiny new title gathers dust in valuable library space. Throw the damned thing out, and make a note of the reviewer who recommended it. Unless it is some random Amazon review, which you should not have paid any attention to in the first place.



Sunday, May 11, 2014

I'm Pimping a Webinar, of all things--But it's Peter Neumann on Risk

I am doing a lot of weird miscellaneous work this weekend because I expect to be out of the office for most (or all) of Monday and Tuesday. My calendar does not show anything for 5/22, which is a Good Thing, because it let me plug in an ACM Learning Webinar. What's that, you say? A Webinar? We hates us some Webinars.

But this is Lessons from the ACM Risks Forum. The presenter is Peter Neumann, who is pretty much a bottomless pit of qualifications, and if you don't read the Risks Digest (Forum On Risks To The Public In Computers And Related Systems, ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator), you might consider it.

It is available to the public; you do not have to belong to ACM. If you can't make it, you might want to register anyway--they become available on-demand if you do. At least I think they do. I have been an ACM member since long before they did these--possibly that feature is only available to members, but I don't think so.

You have to love a guy who has a favorite meta-limerick:

There once was a man overweening
Who expounded the meaning of meaning.
In the limelight he basked
'Til at last he was asked
The meaning of meaning of meaning.


There is no meta- in my favorite limericks, and they mostly start with something like:
There once was a man from Nantucket
And they rapidly become NSFW. I can't even aspire to meta-limericdom.

Neumann has pointed out some important issues for a very long time. I don't know how many of you have read The Clock Grows at Midnight (1991), but that was one of several things that taught me the importance of time. And not just from a log correlation standpoint. I have long said that time services are among the most important network services, but this is slap-you-in-the-face important, from quite a while back.

In Colorado Springs, a child was killed and another was injured at a traffic crossing, when the school-schedule-dependent computer controlling the street crossing did not properly receive the time transmitted by the atomic clock in Boulder.

Design flaws in safety-critical embedded systems can have tragic consequences. In this case, it seems quite possible that the flaws extended to hardware. But then they often do, in the embedded world.

We are obviously not growing less dependent on accurate time-keeping, and things like supplanting the classic Unix-y ntpd with chrony are important enough that the implications deserve thought*. Thank you, Peter Neumann. BTW, the first reference in that paper was to Leslie Lamport--Synchronizing clocks in the presence of faults. Another long-time and influential contributor, which is why I was extremely happy to post Congratulations to Leslie Lamport, winner of the 2013 Turing Award back in March.
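As an aside on how cheap a first sanity check can be: here is a minimal sketch of asking "how far off is my clock?", assuming the third-party ntplib package (pip install ntplib). The server name and the 100 ms tolerance are illustrative choices, not recommendations.

    import ntplib  # third-party: pip install ntplib

    def clock_offset_seconds(server="pool.ntp.org"):
        """Return the local clock's offset from an NTP server, in seconds."""
        return ntplib.NTPClient().request(server, version=3).offset

    offset = clock_offset_seconds()
    print("offset %+.4f s [%s]" % (offset, "OK" if abs(offset) < 0.1 else "DRIFTING"))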


* No, I am not knocking chrony; consideration of implications is not synonymous with Don't Do This.