Thursday, December 26, 2013

Santa is as trustworthy as a doctor

It's all about trust, right?

Here is something that Sabine (author of Backreaction--one of the physics blogs I link to) shared on G+. Santa is as trustworthy as a doctor.

Sunday, December 8, 2013

Ops: the A in Confidentiality, Integrity and Availability

Edit: there is too much command-line and storage stuff in here. If you can wade through it, great. If not, my bad. I'm not certain how I should segregate very technical information, and I welcome your feedback.

Got a call to recover a developer's drive on Fedora 18. Apparently there were problems not only on /home, which is easy enough to deal with, but also on /. Yes, I get a lot of weird little jobs. Apparently this person had Googled around, and independently discovered that the Web needs an editor.

First off, Fedora Live media is inherently a rescue disk for problems like these. Boot into it, look around in /dev/, and you will see the devices related to your system. They will not be mounted, so you can run fsck. fdisk -l will show you what is where.

In this case, /dev/sda1 was a generic Linux filesystem, and sda2 contained logical volumes named lv_root, lv_home, and lv_swap.

There are various things you can do with badblocks(8), including a non-destructive read-write test, but it is horribly slow. 'e2fsck -fc' will force an fsck (journaled filesystems can be marked as clean even when there are problems), give you a progress indicator, and cordon off any bad blocks it finds.

Run this command against any non-lvm partition, and each logical volume within a partition, with the exception of swap, whether it is a logical volume or not. Swap is raw disk; there is no filesystem to check, and fsck will simply abort if you attempt it.
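For reference, the whole sequence can be rehearsed against a scratch filesystem image before touching a real disk. A minimal sketch (the image-file rehearsal and the volume paths in the comments are illustrative, not from this job):

```shell
# e2fsck/mkfs often live outside a non-root PATH.
PATH="$PATH:/usr/sbin:/sbin"

# Build a small ext4 image to practice on; the e2fsck invocation is
# identical for a real partition or logical volume.
dd if=/dev/zero of=scratch.img bs=1M count=8 status=none
mkfs.ext4 -q -F scratch.img

# -f forces a check even when the journal claims "clean"; -c has
# badblocks(8) do a read-only scan and cordon off anything it finds;
# -y answers prompts (an image file is not a block device).
# Exit status 1 just means something was fixed, hence the || true.
e2fsck -y -fc scratch.img || true

# On the real system, booted from Live media with nothing mounted:
#   vgchange -ay                      # activate the LVM volumes
#   e2fsck -fc /dev/sda1              # the plain partition
#   e2fsck -fc /dev/vg/lv_root        # each logical volume...
#   e2fsck -fc /dev/vg/lv_home        # ...but never lv_swap
```

Five minutes against an image is cheap insurance before pointing fsck at a client's only copy of anything.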

Several problems were revealed. fsck was allowed to fix all of them except for a short read on a critical userland file. A backup existed, so fsck was allowed to simply delete the file (fsck will prompt you if only the -fc options have been supplied).

Root cause was Who Knows. It could be down to a cosmic ray, and the system not having ECC memory. But that's not a reason to blow it off. I could recommend that the owner of the system pay more attention to smartd reports, except that evidence is beginning to show that smartd has little predictive power.

This is not yet certain, so we still have to gather data. But it looks like it may be an Oops.

Worse, all of this is a waste of developer time. fsck -fc will take something like two and a quarter hours per GB of disk on SATA-3, it is not an exhaustive test, and now you have a developer with another ongoing distraction.

This almost perfectly resembles a poor outcome. Better to replace the drive, reprovision the OS (if that takes more than five minutes, You Are Doing It Wrong), and restore from backup. Distracting developers, when there is an easy means of avoiding it, is never a good plan.

Be sure that replaced drives are physically destroyed or securely wiped (which takes long enough that destruction is cheaper). I once requested additional storage for a Linux workstation, loaded an NTFS driver, and was astonished at what I found. This should be in your ops policies.
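A minimal wipe sketch with GNU shred, demonstrated here on a scratch file rather than a real device (the device name in the comments is illustrative):

```shell
# Demonstrated on a scratch file; on a real decommission the target is
# the whole device node (e.g. /dev/sdb), run from rescue media.
target=scratch-wipe.bin
dd if=/dev/urandom of="$target" bs=1k count=16 status=none

# One random overwrite pass, then a final zero pass (-z).
shred -n 1 -z "$target"

# Overwriting is unreliable on SSDs (wear-leveling remaps sectors);
# there, the drive's built-in secure-erase is the better bet.
```

If the policy ends up being "wipe takes too long", that is itself the argument for the shredder.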

Thursday, November 7, 2013

Science Meets Marketing

Some time ago, I meant to write a post on why I was linking to physics blogs from a security blog. This is not really that post, but it may serve as a partial explanation. See Science Marketing needs Consumer Feedback, in which Sabine Hossenfelder (whose BackReAction blog is one of the physics blogs I link to) says, "It’s been a while since I read Marc Kuchner’s book “Marketing for Scientists”. I hated the book as I’ve rarely hated a book."

I could not be in more complete agreement with Sabine. I encourage you to read her post, and Kuchner's book. It is entirely relevant to a security blog, and I would like to present a single representative argument for my position: the Georgia Institute of Technology Emerging Cyber Threats Report 2014.

This report touches all the bases of marketing. Use of 'Cyber'-whatever is indicative, but it is not the worst offense. Almost all references (and this is from academia) point to news outlets of one sort or another, even where academic papers exist that could have been cited (unless you follow the literature, you would not know this). In fact, no papers are referenced at all; apparently the author regards CNN as a more interesting or reliable source of knowledge. The repeated 'He said', 'randomperson said' quotes, always in reference to Georgia Tech staff, read much like a press release.

This is marketing, pure and simple, and straight from the halls of academia. It is moderately informative, but if you are not involved in the daily business of systems and information security, the references provide no reliable guide to separating the wheat from the chaff. 

If I were determined to find an appropriate leading graphic, it would probably involve Gandalf waving at Elves or something.

Saturday, October 19, 2013

Ultimate Adversarial Code Review

Some people hate day-to-day code reviews. I tend to welcome them, and miss them on engagements where I am going it alone. Yes, there is a certain sense of freedom, but some things I miss.

  • I will almost undoubtedly learn something
  • You can identify people you want to work with (reviewing the reviewer)
  • They can save me from an embarrassing 'burning tree' scenario
Sometimes politics enters the picture. That is never pleasant, unless you are a politician. I can do politics, but it is a complete pain in the ass, and I add fees when I have to deal with an adversarial, politically-charged environment on a daily basis.

That brings up an interesting question. In the limit, what might an engagement that is *all about* an adversarial relationship look like? I have limited (but not zero) exposure to this environment. For instance, it's possible to invisibly (to the user) pre-load objects via Javascript which will then appear in the user's browser cache.

In the context of something like a patent fight, I have an excellent idea of what tools I might need, and how to employ them, but no experience. On the other hand, I know of someone who does. Avi Rubin has a security track record dating back many years, as USENIX members know. His credentials start at Professor of Computer Science and Technical Director of the Information Security Institute at Johns Hopkins University, and go back from there...

Avi has spun up another company that specializes in this sort of thing, and has a practical guide on how to proceed. This is highly recommended reading.

Tuesday, October 15, 2013

Is IPv6 Coming To Your Network?

A year or so ago, I recommended a couple of things to a Network Security Guy (and a friend of mine). First off, have a look at R. I think I dealt with that one (Choosing Python over R) earlier today.  But this guy also believed that Intrusion Detection/Prevention systems are highly effective. Probably because he implemented one years ago that met the needs of the times. Evolve, plz.

I didn't then, and certainly don't now (attacks only get better), have any confidence in IDS/IPS, and also recommended that he have a look at Evader. Apparently, a year down the road, he had done neither of these things. So be it--this was just a breakfast conversation with a friend, and really none of my business.

Today, I had a conversation with another Network Security Guy (and another friend of mine) who had a serious need to vent. For reasons unknown (but it seems likely that some senior manager had read some scare-piece about IPv4 number exhaustion), a mandate had come down from on high, that IPv6 Would Be Implemented in 2014. Behind the firewall.

Heavy sigh, and all that. But what this guy, who was in more of a managerial position, was concerned about was that it was going to thrash his tentative 2014 hardware budget planning, which was already late. Well, yes. It surely will thrash his hardware budget planning; IPv6-capable network infrastructure requires more horsepower, and economies of scale are not yet present.

Here is the FUBAR bit, and I felt bad about dropping this one on him, because I had worked with one of his Evil Minions on setting up some basic defenses, years back.

  • Yes, your hardware will cost more 
  • IDS/IPS systems will be even less effective than they were under IPv4
  • Internal network scans will no longer be effective
  • Pretty much nothing of your current notions of VPN security will (or should) survive
This is off the top of my head. I could get creative, and start thinking about, for example, PBX and video-conferencing systems.

IPv6 is coming, but there is already sufficient pain. It is likely that every aspect of your network security posture will have to be reevaluated, via whatever risk-analysis methodology you prefer. What is certain is that a valid reason for deploying IPv6 almost perfectly does not resemble management by proclamation.

We know that waiting until the last moment will only make it worse, and that proclamations lead to FAIL. So, please, start thinking (rationally) about it now. You are far more likely to be ready when you have a real roll-out date.

Choosing Python Over R

I feel the need for speed. If you are messing about with decision-support in a security context, you probably do too. It turns out that for most of what I have needed to do in the last couple of weeks, Python has been taking me closer to my targets than R.

It will be a while (probably a long while) before Python tooling can match the comprehensiveness of R, which has > 4k packages available.

For pure statisticians, R is still the win, and I don't mean to trash the tool or the field in any way. If I hadn't found R, way back when, I would probably have thought MS Excel was an acceptable program for stats. Leading to FAIL.

But Python tooling looks to be faster, in both execution and development speed, for my needs of the moment. R may still be the winner in creating interactive doc. I still need to take a weekend and compare the two. But free weekends are in short supply right now, and it would have to be an awfully big win to make much of a difference. I am a huge believer in 'go get knowledge, then teach it', but I am not primarily an educator, and tools such as the IPython HTML Notebook seem adequate.

Regarding IPython: don't use anything else. Seriously. The only time I ever enter 'python' instead of 'ipython' on the command line is if I need a quick basic calculator, or if all I want to do is import numpy, do a couple of quick array operations, and leave.

Don't laugh at the idea of using Python as a basic calculator (snarking on KDE Kcalc):

import math
math.sqrt(2)   # 1.4142135623730951

Seriously, kcalc, WTF is your problem? People have been able to take square roots on hand-held calculators for 40 years, but your software, running on this comparative pile-driver of awesome, cannot.

With IPython, you get log files and other huge advantages. Mess around with it for a couple of evenings, and you will never go back. I wished for something like this for ten years before we got it, so now I am enjoying it. You will too.

Saturday, October 12, 2013

Probably Relocating to Portland or Seattle

Living in the hinterlands of the central Willamette Valley, as someone engaged in the theory or practice of information security, is a hard thing to do. Most of the people I need to talk to for business purposes are northwards, in Portland or Seattle.

That is not the major part of the problem; even skinny pipes can deal with most bandwidth issues, such as sample data sets. The clients you really want Just Get It. Perhaps surprising, but true. And they are such a joy to work with, which makes up for many things.

One of the things that they cannot make up for is isolation. That is a constant threat to productivity for anyone working at home.

On one hand, you are undisturbed, and it's easier to get into The Zone, and be extremely productive. Call it Flow State, or whatever. To me, it will always be The Zone.

On the other hand, the inspiration that comes from talking to colleagues is missing. Make no mistake: as a commercial proposition, being the expert rules. But you do have to be demonstrably right, roughly 100% of the time, and there is little that will give you any insight into solving the next client's problems.

For that, you need a research community, and diversity. Or you could take my approach, which involves one hell of a lot of homework, and sunk costs. This can work, in specific areas; I have a track record dating back 13 years or more.

But the security industry, as a whole, has to do vastly better. As I think about host security (secure against what threats? who have what resources?) I want network people next to me. I want rootkit specialists, and people expert in state machines and log analysis. I sure as hell want to have people that have a clue about authentication and authorization.

There is no substitute for a group of really clever people scribbling on a whiteboard. It's a huge buzz of argument, and about the coolest thing ever. I hugely miss the diversity, the frenzied scribbling, and the ensuing arguments.

The really nice bit is that I have enough latitude on the current engagement that I can just do it, if it still seems wise a couple of weeks from now. In essence, I would be moving closer to daily arguing-around-a-whiteboard range, and they are *fine* with that idea.

Wednesday, September 25, 2013

Has NIST done enough damage control?

We really need to be able to trust NIST, and the Patrick Gallagher keynote did little to re-enable that. I didn't watch the General Alexander keynote because my lack of trust in this individual is such that it simply isn't worth the bother.

Michael Daniel is a Special Assistant to the President, and Cyber-Security Coordinator at the White House.

In the first couple of minutes of his keynote, there was a nice mention of Northrop Grumman, a huge defense contractor, a buyer of 0-day exploits, etc., as the lunch sponsor. I don't want to go into vulnerability disclosure here, save mentioning that the No More Free Bugs argument does have merit -- this is yet another complex issue.

From there the keynote promised to be more spin and politics. I really didn't have the time to spare for this one, as I have no trust in a national effort for identities in cyberspace. So I only half listened to Mr. Daniel.

Here is the lead URL that links to all three keynotes.

I am a bit busy right now, as I indicated in Back to Back Projects. Yes, a very 'duh' title. But that is just one reason that I am so very, very annoyed that I have to be dealing with this stuff right now. As a society, we decided back in the 90s that there would be no mandated back-doors. No key escrow, no Clipper Chip. The NSA has apparently just decided that they would do it anyway, behind everyone's back.

I would *really* like to see someone go to prison over this, and that is an entirely non-political desire. This dates back to the Clinton administration, intervening Republican administrations have been at least as thoughtless, and President Obama has never, since before he was elected for his first term, shown much concern for doing the right thing in this regard.

Politics is a dirty business. Always has been, always will be.

And now, back to the salt mines. Some of us actually have to demonstrably help people, on a daily basis. To us, the cyber-whatever politics game is, at best, bemusing. For instance, don't get me started on critical infrastructure protection.

Sunday, September 15, 2013

Back to Back Projects

There are a couple back-to-back projects coming up. The first involves writing formal docs, and producing a bit of material related to a slide deck (they could have chosen a better (meaning any) graphic artist) that has to be created related to The Risk of Temporary Systems. These people are not foolish—they want to ensure that they are never bitten by this problem again. They Get It, on a fundamental level. Which makes them a pleasure to do business with.

That's 2-3 days of work, and I will likely sink the latter half of the week into the usual (overhead, but interesting) research, rebuilding more of the lab, etc.

Beginning tomorrow (Monday, 9/16/13) I am on a project that will probably run for two or three months, more or less full time. As usual in this business, both sides are subject to NDAs. In this case, they are highly restrictive. But there are still things I can post. Research and reporting, even if only to an internal database, and an occasional blog chirp, never stops, and I hope you continue to visit.

If nothing else, I'd like to talk about OS updates (patching, and I hate that term), and how patch failure increases exposure.

Friday, September 13, 2013

The Risk of Temporary Systems

Here is an example of the classic 'temporary system' problem, which I have seen in various forms, up to a rogue server that some developers installed on the DMZ. Scenario:

  • Client installs new servers at a low rate--never more than 10 per year
  • Client has an incoming QA procedure that involves a burn-in  (Yay!)
  • Client burn-in procedure is to install an ancient OS, plus some scripts that were written years ago, via optical drive (Boo!)
  • Client has no subnet for this, just some reserved IP numbers. (Boo!)
  • Client just lets systems "soak for a while" (Boo!)
  • Client discovers a temporary system is compromised (Yay!)
  • Me gets money (Yay!)

The Yay count is three, though "Me gets money" probably wouldn't count, from the client's perspective, so let's just toss that one, and call it two. The Boo count is a definite three.

What Went Right

  1. They were sharp enough (or had been burned enough) to not just rack a new system up and place it into production. Infant mortality is real. Look at Google's publications on disk failures or something if you don't think that is an issue.
  2. They spotted the compromise in a much shorter than average period. It's not uncommon for compromised systems to remain undetected for months, so that is a huge Yay!

What Went Wrong

  1. Ancient code. The install rate was low enough that they didn't see much benefit from modernizing how they did this. Though it was a manual process, hence expensive. The irony is that they could have used provisioning/patching automation I'd already built for them, and this would never have happened.
  2. Improper subnetting. Ancient code, if running at all, should have been partitioned away. Particularly as it was just admin stuff, not a creaky old legacy business system that nevertheless had to be widely available, despite the risk.
  3. Procedures that need work. A Post-It note, with a start-of-run date, stuck to the front of a system is not good doc. 


Most of this should be obvious from What Went Wrong. But it's worth stressing that countless problems are caused by organizations not being fully aware of what systems (and their security posture) are running behind the corporate firewall, or even full knowledge of what Internet connections exist. The thought of the odd T3 connection being forgotten about may seem strange to some, but it commonly happens in large organizations.

'Temporary' resources will become a larger problem than they are today. Virtualization, software-defined-everything, and the power of modern provisioning systems ensure this. Compute and storage nodes can be spun up with a few clicks of a mouse, and an interconnect with a few more clicks. The economic imperative is obvious.

There are things that the security community needs to work on, as always, but I would argue that the most important thing that organizations can do is embrace continuous audit.

Monday, September 9, 2013

rageface, NSA, NIST, and SP800-90revised_March2007.pdf.

There's an image I would love to paste in here, but I don't know what the associated rights are, and I am way too busy to research it. Search for 'rageface', if you care. You will come up with several variants of the same image.

Years ago, when I did hardware and metrology, I developed a lot of respect for what was then the National Bureau of Standards, now NIST. NIST still has an important role in much that I do, but these days you have to look at some of their work with a careful eye, always wondering if you are paranoid enough.

I am referring, of course, to the famous 'back-door' associated with SP800-90revised_March2007.pdf, which is related to random number generation. Which is, of course, a vital component in all cryptosystems.

I completely FUBARed this. I have a lot of bookmarks related to that 2006 issue, and some notes related to whether this might be a 'double-think' due to NSA influence, vis-a-vis RSA/DSA versus elliptic curves, but I don't have a copy of SP800-90revised_March2007.pdf. There is no copy in the archives, so all I could do is request one via email.

I will update this post if and when I get any response, with a SHA of what I receive. Because I no longer trust NIST.

Questions related to NSA influence of standards date back to DES. The consensus of the DES discussions was that there was no undue influence. Fine. That is something that historians of crypto can argue about--no sane architect currently specifies DES or 3DES in 2013.

SP800-90revised_March2007.pdf is more recent, and I should have kept better track of this stuff. FUBAR. Under my federal/nist/sp800 directory, there is no 'historic' directory, and there should be. There are groups of files with the same dates, due to my trusting NIST, and not practicing careful backup procedures on a workstation. This is a result of just dragging files around as I've upgraded the system over the years, etc.

I should have known better than to trust NIST due to monthly exposure; they publish a monthly bulletin which has changed file-naming conventions several times, sometimes carried no title at all, etc. And some of the 'advice' is about as useful as what my bank provides in their annual CYA surface-mail inclusion.

This is not the hallmark of a standards organization I should have trusted. So now I have to control file dates, take cryptographic hashes, etc. I also have to write and maintain software to do it all, because it is too much to keep up with manually, on a daily basis. Because I can no longer trust an organ of my own government. Hence the rageface reference.

This has already caused huge economic repercussions, in that several large-scale organizations are now unwilling to host data in, or allow data to pass through, the United States (the latter will be tough, given that routing is a complex dynamic system). But it also has repercussions below the level of multinational corporations. Due to this, and similar issues, my overhead increased. So did the fees I have to charge, and that doesn't make things easier on anyone.

Thanks, NSA, but I am more interested in why no charges have been brought against General Keith B. Alexander (NSA is a military organization, and they have different goals), than, say
National Cryptologic Museum Offers Music and Movie in 20th Anniversary Festivities
though that definitely has its place.

Update September 11, 2013

I plowed through a lot of backups looking for this file, and came up dry.

The sleepless folk at (just coming back online after an extended periodic maintenance evolution) had the file. No, there is no evidence here for conspiracy theorists to get even crazier over. They were just doing database maintenance, and ran into some problems. These things happen. 

But let us put any potential crazies to rest, as best we can. I obtained a copy from that archive. While it is possible to forge PDF documents, it would be stupid to do so in a manner that is easily detected, and there are bound to be many copies of this PDF floating around. Possibly some would generate a different hash (it only takes a single flipped bit), but the differences would be easily discovered. The NSA is not collectively stupid, and this would not happen. So here is a hash of the file I obtained:

$ sha256sum SP800-90revised_March2007.pdf 
467100ea1fc8f98d24af3b9203687d828d601dfb6205e0424bbd2c5a40275bba  SP800-90revised_March2007.pdf
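That record-then-verify step is trivial to automate with sha256sum's -c mode; a sketch, using a throwaway stand-in file rather than the actual NIST PDF:

```shell
# Record a hash at download time (the file here is a stand-in)...
printf 'stand-in for SP800-90revised_March2007.pdf\n' > sp800-90.demo
sha256sum sp800-90.demo > sp800-90.demo.sha256

# ...and verify any later copy against it. -c re-hashes and compares;
# a single flipped bit anywhere shows up as FAILED.
sha256sum -c sp800-90.demo.sha256
```

Keep the .sha256 files under version control and the whole "control file dates, take cryptographic hashes" chore becomes a cron job.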

A quick note on backups. I looked at media dating back to 1999, though I didn't know it as I was swapping CDs (yes, CDs). The earliest media were apparently randomly burned from a workstation whenever I got sufficiently paranoid about losing work. They were all Imation CD-R.  As much as fourteen years old, and all disks were still readable. Anecdotal, and I wouldn't dream of doing backups as I did then. Or I would never have had to hunt for the file. But I was impressed with Imation media of the period.

Sunday, August 25, 2013

Weekend Security Humor

Because sometimes you just have to post some.

I think it was the sheep carefully lining up that butt-shot as much as anything.

Best Practices: Built-In Security Failure

Years ago, Intel hired me to do hardware-related work in semiconductor fabrication, as part of a group called 'Improvement Engineering' in what was a hole in the New Mexico desert. So, yeah, it needed a lot of improvement in order to become the cleanest clean-room on earth.

We didn't use the term Best Practices, which is so prevalent in the compliance (I did not say Security, as they are emphatically not the same thing) industry of 2013, and you shouldn't either. Best Practices implies received wisdom, and slow responses to rapidly changing threats. We spoke of BKMs, or Best Known Methods. The 'Known' cannot be emphasized enough. It implies a seeking, driving, dynamic approach that is often lacking today; it implies currently Unknown Methods, waiting to be discovered by motivated, data-driven people.

Examples of where it has been proven that there can be no better way (from hardware, software, or procedural perspectives) are rare. This is fertile ground. More to the point, the BKM mindset drives Continuous Improvement, and various other all-too-corporate buzz phrases, past and present, into corporate culture.

The Best Practices approach demonstrably is, and has been, failing, by every available metric.  Emphasize that, take a data-driven approach, and reward those who demonstrably improve the state of the Best Known Method.

Saturday, August 10, 2013

Privacy Advocacy Turns Out to Be Common

It is fairly common for security people to also be privacy advocates--it's security on a personal scale. So the NSA/Snowden thing is something I follow. And, while I don't really want this blog to become focused on something so politicized, some additional commentary is in order.

Here is a graphic I found particularly striking.

I am not a fan of Anonymous. The Guy Fawkes masks work, and have served as what looks like useful cult-control. A large amount of media attention was a foregone conclusion, as was a commensurate amount of attention from law enforcement.

I can't approve of their methods, or admire their approach to operational security. If you are going to declare <Operation Whatever>, which often enough consists of a DDoS attack, don't use a tool like LOIC, which reveals the IP number of everyone you talked into the gig. Duh. Law enforcement did its thing, and anons are being busted left and right. This will continue, and it is unfortunate that so many people were, in the end, victimized by Anonymous.

Idealism always carries a high cost, and it is usually disproportionately borne by the young and not yet cynical, so this is not a surprise.

It is a pretty sad state of affairs, as usual. Law enforcement is supposed to do its thing. That's what we pay them for. If we, as a society, find the idea of the future of our children being ruined abhorrent, what needs to happen is fairly obvious. The law, and government accountability under the law, has to change.

It turns out that there are economic incentives to fix this. So even we cynics have some cause for hope. I'll either update this post, or point to new post(s) with updates. I'd prefer to just update this post.

Friday, August 2, 2013

Things are really busy right now.

I have a new project: documenting what I did, and the rationale for the choices I made, for one of the recent data analysis projects. I always write docs, but this is more in the spirit of a HOWTO for people that need some basic instruction on how data analysis pipelines (or workflows, if you will) are commonly constructed on Linux, and does not depend on humans clicking around, enduring the horrors of statistics in Excel, etc.

It's mostly about pre-processing data, feeding only what you need into a sane database (why give up three orders of magnitude in speed for the bits that will never have relational queries run against them?), when to do matrix math, when to fire a decision-support plot because a threshold has been exceeded, etc.
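A toy example of that pre-processing stage (the log format and fields here are made up; the real ones are under NDA): cut the stream down to only what the database will ever be queried about, before anything touches the database.

```shell
# A made-up web-ish log: date time method path status bytes.
cat > access.demo <<'EOF'
2013-07-30 10:01:02 GET /index.html 200 1234
2013-07-30 10:01:03 GET /login 500 89
2013-07-30 10:01:04 POST /login 200 412
EOF

# Keep only the rows and columns we will run relational queries on:
# here, server errors, emitted as timestamp,status,bytes CSV.
awk '$5 >= 500 { printf "%s %s,%s,%s\n", $1, $2, $5, $6 }' access.demo
```

Everything else stays in flat files, where sequential scans are fast and cheap.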

Somewhat at variance with physics pipelines, it was written in bash, Python, R, and Go. I should do a post about that. But like I said, things are busy right now.

Fedora 17 reached EOL on 7/30/13

Why does that matter? Well, Fedora is regarded as upstream of Red Hat Enterprise Linux (RHEL) and derivatives such as CentOS, Scientific Linux, and Oracle Linux (though Oracle will never admit that). What that means is that Red Hat chooses a moment to grab the current Fedora distribution, and make some of the various bits more robust. Meaning supportable, at sane cost structures. That forms the basis of the forthcoming Red Hat Enterprise Linux. Running a current, or near-current, Fedora provides insight into the oncoming RHEL, which will be RHEL7, by the end of the year.

This is useful, particularly as this will be the most powerful RHEL ever, by a wide margin. Mondo cloudy stuff is going to be in there. That should be another post; it is by no means all marketing.

But right now, I have to rebuild some lab machines. This isn't a huge deal--that's what labs are for. But it will keep me busy for a bit, because I have to characterize what I am doing.

How do you secure this stuff?

As Frank Zappa once wrote, "The crux of the biscuit is the apostrophe." Secure what? Against what threat? At what cost?

I have never been a fan of PCI-DSS. The standard cannot change rapidly enough to reflect changes in the threat envelope. Compliance costs are out of control, and it is not clear to me that there is any rational means of choosing any particular solution to a PCI-DSS line-item. Sometimes I hate to even talk about PCI-DSS; there are other requirements in other industries that are more interesting (medical record security comes to mind), and some things (design flaws in cryptographic protocols, etc.) apply to any industry.

The basics apply in any environment. Control access, authentication, and authorization, and the majority of your risk goes out the window. This is doable, even via bash scripting. From a Director of Information Security at Fiserv (Acumen platform):

"we did get our PCI-DSS ROC and the assessors loved the hardening scripts and the way you listed the hardening steps by control number."

Write a master script that calls subscripts by control number. The downside is that it adds complexity; you will be touching some configuration files more than once. It works, and assessors love it. You do, however, need a capable and auditable version control and build system. Git works fine, if you bolt on some additional tooling.
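A minimal sketch of that master-script pattern, with stand-in control numbers and no-op subscripts (a real subscript applies exactly one hardening control; the directory layout and names here are illustrative, not from the engagement):

```shell
# Master script: run hardening subscripts named by control number,
# in glob (control-number) order, and log PASS/FAIL per control so
# the assessor can map each step back to the standard.
set -u
dir=./controls.demo
log=./hardening.log
mkdir -p "$dir"
: > "$log"

# Two no-op stand-ins; e.g. 10.2.1 would really configure auditing
# of individual user access to cardholder data.
printf 'exit 0\n' > "$dir/10.2.1-audit-access.sh"
printf 'exit 0\n' > "$dir/10.5.2-protect-logs.sh"

for script in "$dir"/*.sh; do
    control=$(basename "$script" .sh)
    if sh "$script"; then echo "PASS $control" >> "$log"
    else                  echo "FAIL $control" >> "$log"
    fi
done
cat "$log"
```

The log, plus the Git history of the subscripts, is most of what an assessor wants to see.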

The point is that RHEL7 will offer more controls--you will have more power to meet any standardization, legal, or regulatory challenge.

Wednesday, July 31, 2013

We still fail at log analysis

Recently I've been working a couple of data analysis projects, and writing  
some software in support of that. Much of it has direct application to 
automated log analysis, alerting, and decision-support. While I am still tweaking, 
I have been pleased with those results.

Which is a Good Thing, because we need to be a lot better at it than the data 
suggests we currently are. Good data are scarce, but the Verizon Data Breach 
Reports do provide some. Exactly what is reported, and the format in which it 
is reported, changes each year. To some extent it has to; the landscape 
changes rapidly.

Back in 2010

  • 86% of victims had evidence of the breach in their log files
  • 3% of breaches were discovered by log analysis or review
  • 4% were detected by the combination of event monitoring and log analysis (This is a drop from the 6% of 2009)
  • 30% were in compliance with PCI Requirement 10: "Track and monitor all access to network resources and cardholder data." A better number than the abysmal 5% in 2009

Fast forward to the report for 2012 (published in 2013), where the data 
are again presented in a slightly different way. Overall, detection via logs was 
1%, broken into undefined Small (nothing reported), Large (4%), and Overall (1%).

There was no figure for how many victims had evidence of the breach in their logs, 
but there is no reason to believe it is substantially different than the 86% 
reported in 2010. So it would appear that there is significant room for 
improvement in log analysis.

I think we can all agree that the worst-case scenario is to not only suffer a 
breach, but to have it discovered by an external party. Anyone doing incident 
response is (or should be) aware that the clock is ticking. If it's public, 
there could be a lot of people watching it tick.

Perhaps it's time to look at your log analysis systems again, including a check 
to ensure that the system is inclusive enough. It's common for organizations to 
not even know where all the logs are. The problems can be as varied as logs 
being written by unfamiliar or misconfigured software, or systems being 
installed incorrectly or surreptitiously.
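As a starting point for the "where are the logs" problem, here is a hedged sketch that inventories files recently written under a given set of roots. The demo runs against a throwaway directory; in practice you would point it at /var/log, /var/adm, application trees, and anywhere else daemons might write:

```shell
#!/bin/sh
# Sketch: find candidate log files by modification time under a set
# of roots. The roots and the one-day window are starting points,
# not a complete inventory policy.
set -u

find_recent_logs() {
    # files modified within the last day under the given roots
    for root in "$@"; do
        [ -d "$root" ] && find "$root" -type f -mtime -1 2>/dev/null
    done
}

# Demo fixture: one fresh "log" and one stale file that should be skipped.
demo=$(mktemp -d)
echo "app started" > "$demo/app.log"
touch -t 201301010000 "$demo/old.log"   # pretend this is months old
find_recent_logs "$demo"
```

Pair this with something like lsof, where available, to catch paths that running daemons hold open but that nobody remembered to document.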

If any of that is found, the problems are obviously more extensive than just logs.

Thursday, July 4, 2013

First Actionable Item From Snowden NSA Leaks

Here it is, the Fourth of July, and I am putting up a post. That is dedication!

Actually, the barbeque is going, and the spud salad and the rest of it is done. Even though the recent Oregon heat wave has broken, I don't want to be sitting next to that barbeque, since I cleverly placed it in the hottest, most uncomfortable place available. I really need to take care of that. I just spent a couple of hours at a state park, I'll be outside most of the afternoon and evening, and I managed to get a bit of sunburn standing in the river behind my house yesterday, which I do not want to make any worse. So I'm good with being in a nice cool office for a bit, and I'll probably finish up sometime later today.

On June 7, I posted NSA overreach: is it actionable, or just random news?, in which I intimated that there is little that is actionable from the perspective of a security practitioner.

Since then, the revelations have continued, and NSA are about as FUBARed as it is possible to be. Much of this is coming from the popular press, and this issue is not going away. I may talk about why in another post. I do keep up with this stuff, out of professional interest. Still, you might roughly classify news organizations into members of the

  • generic mass media
  • generic IT media
  • pop security media
  • technical security media

though the lines occasionally wander. I use many news sources, which I divide into tiers, based on accuracy, level of detail (these are not the same thing), frequency of update, political skew, etc. Efficiently keeping up with security news (and it must be done efficiently, lest it become a full-time job) is difficult to do well. Note that this does not include research papers, or what corporate white papers have become over recent years. Those have to be read too, but I don't regard them as media in the same sense as the above list.

Here is something you do not see every day. Another piece from The Guardian, New NSA leaks show how US is bugging its European allies, reveals information on attacks against the diplomatic embassies and missions of the EU and member nations. The Guardian (generic mass media) is the UK newspaper that was one of the first to take this whole thing public.

The generic IT media and pop security media have already begun to lump this entire thing into PRISM. It's easy to remember and search for, which is an advertising revenue win, and without that revenue they do not exist. Meanwhile, the generic mass media Guardian is more accurate than the generic IT media or pop security media in their code names for attacks (NSA will have changed all of these the moment they were revealed, but they may become useful search terms, if only for students of history).

BLACKFOOT: French diplomatic mission to the UN
WABASH: French embassy in Washington
BRUNEAU, HEMLOCK: Italian embassy in Washington
POWELL: Greek UN diplomatic mission
KLONDYKE: Greek embassy in Washington
PERDIDO: EU UN diplomatic mission

Crucially, they also provide a graphic related to DROPMIRE, an attack against secure FAX; specifically, against Cryptofax, a product of the Swiss firm Crypto AG. As an aside, there have been rumors and accusations (since confirmed to my satisfaction) since the 1980s of collusion between Crypto AG and NSA. I am surprised that The Guardian didn't pick up on that.

This is where things get actionable, in two areas. The first depends on how technically well-resourced your likely adversaries may be, in a pure security context.

That image was enough for Dr. Markus Kuhn, of the Computer Laboratory, University of Cambridge, to go on. In a post to Light Blue Touchpaper (technical security media published by the Computer Lab), he has convinced me that this was a TEMPEST attack--another codename, referring most commonly to radio emanations. Specifically, in this case, to monitoring the radio-frequency energy emitted as the FAX machine's laser was switched on and off. That may be very much actionable.

TEMPEST attacks have a long history. I am actually a bit disappointed in the EU, and EU member states for allowing a TEMPEST attack to succeed; note that the home of the University of Cambridge is the UK, an EU member state.

The second area that might be actionable depends on whether you are in the midst of, for instance, sensitive negotiations with German counterparts.

In the real world, even friendly or allied governments spy on one another. Despite the public expressions of shock and dismay that you can expect to hear from members of EU governments, they almost have to. At the nation-state level, even friendly or allied governments do not have completely aligned interests (it's almost as if they are different countries or something), and you need to know if a friendly or allied government is about to stop being friendly or allied. Even if it is limited to a single issue, if that issue is important enough. An intelligence agency that gets this wrong will be said to have suffered an intelligence failure (Google that), and will be barbequed. 

In this case, the other guys have a bit of egg on their faces, as the expertise to prevent this was available to them, but wasn't effectively used. Of course, politicians being much the same in any Western nation, they will hope it blows over, and attempt to cover with indignation if it does not. Or cover with the 'hackers on steroids' defense that was used by so many US organizations that were hacked to the bone by the script-kiddies of Anonymous. This is entirely predictable.

Make no mistake: this is not going away soon, even if the leaks stopped immediately. The politicians, and NSA, will be disappointed; it is not going to blow over, and they will feel the need to be perceived as Doing Something, even if it is The Wrong Something. Repercussions seem likely to be large and long-lived--consider that a federal election will occur September 22 in Germany. Germany is a NATO ally, the leading economic power of the EU, and a justifiably privacy-sensitive nation, particularly given what came out about the East German Ministry for State Security (Stasi) before reunification. It has been revealed that NSA collected against Germany, and has classified them as a valid target.

Nor is Germany the only trouble spot that lies ahead.

On a final note, I am not defending all that NSA have done; their surveillance of US citizens, and lying to all and sundry to cover it up, are heinous. NSA have a history of doing things that are either dubious or simply illegal, and they need to be reined in periodically. I am not defending the politicians who failed to do what they were elected to do, though at least Senators Ron Wyden and Mark Udall of the Senate Intelligence Oversight Committee tried.

There is a far greater likelihood of a whistleblower going to prison than an NSA official who breaks the law. We need to fix that if we intend to become a more just society. Some of the more damaging leaks may have been intended to find a sympathetic ear--a safe haven after the hue and cry went up from the US government. If that is the case, a trustworthy whistleblower program would have prevented the majority of the damage to our foreign relations that has so far occurred.

Whether the service that he has undoubtedly rendered to his fellow citizens by revealing the latest NSA overreach event is outweighed by the damage that he has done to foreign relations is for history, and more practically, a jury of his peers to determine.

Wednesday, August 28, 2013 Update

And the NSA has indeed become an election issue in Germany. According to Der Spiegel, Peer Steinbrück, Chancellor Angela Merkel's challenger in Germany's September general election, called for a suspension of trans-Atlantic free trade talks.

Tuesday, June 25, 2013

Does Cloud Backup Meet CIA Requirements?

For most purposes, I am concerned with the CIA definition of security. Confidentiality, Integrity, and Availability. One implication of this is that if you don't have good  backups, you are FUBAR. And, since Murphy is alive and well, you will discover this at the worst possible moment.

I don't generally like to get into the Big Data and Cloud discussions on public fora, as there is entirely too much marketing noise. But sometimes I know of poor decisions being made, and I just can't resist.

It should be obvious by now that we have to bring analytics code to the data, not the obverse. But there are still organizations that want to sell cloud backups, and couch their 'solution' in big data terms. These are all really fashionable terms, but please price an OC-192 connection. That's also called a SONET 10G connection, because it very nearly is 10G. It is amazingly easy to saturate a 10G connection. Really. The server consolidation via virtualization shakeup has not yet played out; ask your down-in-the-trenches network people, and you will likely get a couple of horrifying stories.

I'd be happy to hear what you are paying, if you have one. Perhaps prices have crashed, but it was recently in six figures. That's just the connection, not the storage on the other end. It may be very expensive to move Big Data. Things you might consider:

  • rate of data change
  • required storage period
  • required access speeds across tiers (on-line, near-line, archival)
  • any compliance or regulatory issues (PII, PCI, etc.)
  • if the data must be encrypted, how much do you trust the key management?
In some cases there should be discussions with the legal team and/or the auditors. In more cases it should involve discussions with the bean counters.
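To make the bandwidth point concrete, here is a back-of-envelope sketch. The 9.953 Gbit/s OC-192 line rate is standard; the data sizes and utilization figures are purely illustrative, so plug in your own:

```shell
#!/bin/sh
# Back-of-envelope: hours to move a backup set over an OC-192,
# assuming some fraction of the line rate is actually achieved.

transfer_hours() {
    tb="$1"; util="$2"   # terabytes to move, fraction of line rate achieved
    awk -v tb="$tb" -v util="$util" 'BEGIN {
        bits = tb * 8e12                 # decimal TB -> bits
        rate = 9.953e9 * util            # effective bits/sec
        printf "%.1f\n", bits / rate / 3600
    }'
}

transfer_hours 100 0.5    # 100 TB at 50% utilization: about 44.7 hours
transfer_hours 10  1.0    # 10 TB at full line rate: about 2.2 hours
```

Run the same arithmetic against your rate of data change, and the difference between "backup window" and "never finishes" becomes obvious quickly.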

If you can do reliable backups into a cloudy infrastructure provider, the financial numbers work, and you feel as if you can trust their service, good for you. But if it were me, I'd have to be very, very convinced before I would forego having a local backup of important data on local tape. If you feel the same way, how does that affect the value proposition of cloudy backup?

Saturday, June 8, 2013

Saturday security humor


  1. Trust can fail in unanticipated ways
  2. It is Saturday, and I am looking at a computer
As a professional security consultant, I recommend that you have a nice day.

Friday, June 7, 2013

NSA overreach: is it actionable, or just random news?

It has been a busy day. The PRISM furor has the popular press in an uproar. I don't find it surprising at all. This sort of thing has a rather long history, and there were indicators that the status remains quo. That doesn't mean that I am unconcerned; once upon a time this sort of thing was made expressly illegal, and secret law is not a path to success in running a representative democracy.

The net effect on my day is that various people are pinging on me for either an opinion, or sympathy for their sense of outrage. IOW, I am busy, and it's an interruption. Ask yourself if this is actionable, in the sense that you should immediately change your behavior or policies. In almost every case, the answer to that question will be no. Write your favorite politician (most of whom are part of the problem, not part of the solution, with notable exceptions), or otherwise do what your conscience demands. But please, do not expect security professionals to be in a white-hot frenzy over this. Unless it's a marketing thing.

Many security practitioners are privacy advocates; it's security on a personal scale, and the same principles apply. Others may see it more from the perspective of a vendor selling tools or services. And some are purely pragmatic, wandering back and forth across that line, as circumstances dictate.

The people being quoted in the media at the moment are not practitioners; they are either managers with a large political bent, or purely politicians. What you are reading is about political agendas. It is of little practical interest to practitioners, because it is not actionable. Anything actionable will come later. Probably months or years from now.

In the meantime, there are more interesting things to think about. PKI is still broken, and possible solutions have been brought forward. Java still hangs heavily around our necks (and I need to write Part 2 of that post), and Adobe Web products are pretty much as bad. The ability of government and industry to share information is still FUBAR, but there are things we can do, today. We have no good handle on the problem of incident response (despite what you may read). We don't even handle code re-use particularly well.

That is the sort of thing that is actionable.

So, back into the trenches of real-world security.

Tuesday, June 4, 2013

Java Security Revisited--Part 1

I have already thrashed on Oracle Java security failings once, but it doesn't hurt to go on with it. There is a human cost to this, in stolen identities, banking details, etc.

On the Oracle corporate blog, there is a post titled Maintaining the security-worthiness of Java is Oracle's priority.

This is as FUBAR as anything needs to be.

It is perhaps more appropriate to substitute 'Establishing' for 'Maintaining'
in that title. Java has a long history of problems. You might want to go read
that 5/17/13 post first. The one where I claimed that they had more or less
fallen on their sword, and admitted complete security FAIL.

Back now? Great. Let us start with a quote from that blog post. "Hi my name is
Nandini Ramani, I lead the software development team building the Java
platform. My responsibilities span across the entire Java platform and include
platform security."

This is The Man. The guy with the ultimate responsibility (save Larry Ellison,
who will probably fire him if results are not forthcoming, pretty damned
quickly).
"Over the past year, there have been several reports of security
vulnerabilities in Java, primarily affecting Java running in Web browsers."

Over the past year? I don't want to accuse Mr. Ramani of being disingenuous
on a corporate blog; perhaps he has not been keeping up with even
not-so-current events. But let us look at open source intelligence.

I regard Brian Krebs as one of the foremost researchers of the business
mechanisms behind underground fora, exploit pack proliferation, botnets, and
related matters. He is focused, talented, and has sources I will never have.
Read his reporting, in which Mr. Krebs says, "I also found Java flaws to be
the leading exploit vectors for both the Crimepack and Eleonore exploit
packs." This was in October 2010, not "over the past year," and the
rabbit-hole goes deeper than that.

At this point, I have no good reason to trust Mr. Ramani. I do, however, have
good reason to question his integrity, competence, or both.

Further down in the post, you will find that Java will be updated four times per
year, in sync with other Oracle Critical Patch Updates. This is obviously going
to increase the testing load for those who have to deploy this stuff. But my
focus is on security, and my take is that this should already be built into
your deployment workflow. If it hasn't been, in the past, take it upstream. Your
workload has certainly increased, and you need an increased budget. It's just
part of the ever-increasing cost of doing business with Oracle.

Mr. Ramani even provides helpful reminders. Summarizing:

  • February 2012 Critical Patch Update for Java SE: 14 security fixes
  • June 2012 release: 14
  • October 2012 release: 30

(thus the total number of new security fixes provided through Critical Patch
Updates for Java in 2012 was 58)

  • February 2013 security releases: 55 new security fixes
  • April 2013 Critical Patch Update for Java SE: 42 new security fixes

(bringing the total number of security fixes released through the Critical
Patch Update for Java in the first half of 2013 to 97)

The way Mr. Ramani probably does not want you to interpret these data is that
these numbers are a measure of how horribly broken Oracle Java security has
been, and how exposed you have been. Increased pressure on Oracle is indicated.
What else is out there, that you do not yet know about?

"The Java team has engaged with Oracle’s primary source code analysis provider
to enhance the ability of the tool to work in the Java environment. The team
has also developed sophisticated analysis tools to weed out certain types of
vulnerabilities (e.g., fuzzing tools)."

Sounds great, and while fuzzing is hugely useful, its usefulness is confined
to a single domain: implementation bugs. Gary McGraw, a recognized authority,
pointed out in
Software Security: Building Security In, published by Addison-Wesley, that less
than 50% of vulnerabilities come from implementation flaws. Code analysis tools
do not find architecture flaws.

This is getting way too long, and I am not even remotely done. I am going to
have to continue this in a second part.

Friday, May 17, 2013

The current Java is update 7u21. There will never be a 7u22.

Write once, run everywhere. It's secure because it runs in a sandbox.

Eventually, even the largest IT marketing machines cannot overcome IT reality--social issues are a whole different thing, but (mostly) out of scope for this post.

"Write once, debug everywhere," is commonly heard. And we really don't understand how to reliably and easily build secure sandboxes, from chroot jails on up.

But I believe a page titled Java SE - Change in Version Numbering Scheme to be a landmark. So far as I know, this is the first time that a major vendor of 'Enterprise' software has pretty much admitted complete defeat on the security front. Microsoft came close, several years ago (they have gotten much better), but this is an admission of abject defeat.

Oracle has never had a good rep, on several fronts. Hardball business tactics and vendor lock-in don't really concern me here. That is an easy Web search away, if you aren't already affected by it. Being widely regarded as the least secure of the 'Enterprise' DB vendors does, but only insofar as it is symptomatic of the current problem. Which is that Oracle has not only consistently produced horrible products, but they have been very slow to fix their problems.

The way Java updates worked in the past was

  1. Limited Update patches (feature adds and non-security bugfixes) got even update numbers.
  2. Critical Patch Updates (CPUs) got odd update numbers.
They are done with that. There have been so many security flaws that they have had to renumber releases, which causes huge problems for users in risk analysis, patch scheduling, etc. The majority of the problems involved Java running in Web browsers, not server-side Java.

That was the one ray of sunshine, but security teams had to take very careful looks at exactly what was being run, and where. Multiple versions, installed on the same PC, were a problem. Common exploit code can call the vulnerable version if it is available. And some organizations did not prevent Java installation, of any version, regardless of whether it was really necessary on that employee machine. To make matters worse, a common installation scenario involved developer and QA machines. These users could often mount a specious argument that they needed it to do their jobs, and would be given blanket permissions. 
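If you need to find out how bad the multiple-versions problem is on a given machine, a sketch like the following can help. The demo searches a throwaway directory standing in for /usr/java, /usr/lib/jvm, /opt, and wherever else users unpack JREs; the fixture names are hypothetical:

```shell
#!/bin/sh
# Sketch: inventory Java installs under a set of roots. Real roots
# would be /usr/java, /usr/lib/jvm, /opt, plus home directories;
# the demo uses a throwaway tree.
set -u

find_javas() {
    for root in "$@"; do
        [ -d "$root" ] && \
            find "$root" -maxdepth 4 -type f -name java 2>/dev/null
    done
}

# Demo fixture standing in for two side-by-side JREs:
demo=$(mktemp -d)
mkdir -p "$demo/jre1.6.0_45/bin" "$demo/jre1.7.0_21/bin"
touch "$demo/jre1.6.0_45/bin/java" "$demo/jre1.7.0_21/bin/java"
find_javas "$demo" | wc -l    # both copies found; exploit code can pick either
```

On RPM-based systems, a `rpm -qa | grep -i -e jre -e jdk` pass catches packaged installs that the filesystem walk might rank below the tarball drops.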

It was, and is, completely fubar. It is a really great way to make an adversary's job easier.

So, Oracle has at last bagged it. How have they admitted complete defeat (though not in so many words, of course)? Like 1970s BASIC programmers, leaving space between line numbers.

Limited Update patch numbers are assigned in increments of 20. That leaves fill-in-the-blanks room for CPUs, which will be issued as odd numbers in increments of 5. That leaves still more fill-in-the-blanks room to add more fixes, and/or fix their previous fixes. And they reserve the right to use even numbers in those CPU intervals. I would guess that they are expecting more than a few.
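The scheme as described can be illustrated with a few lines of shell. The release numbers printed are hypothetical slots, not actual Oracle releases:

```shell
#!/bin/sh
# Illustration: Limited Updates land on multiples of 20; between two
# of them, odd multiples of 5 are scheduled CPU slots, and even ones
# are reserved for fix-the-fix releases.

slots_between() {
    lu="$1"                      # a Limited Update number, e.g. 40
    echo "Limited Update: 7u$lu"
    n=$((lu + 5))
    while [ "$n" -lt $((lu + 20)) ]; do
        if [ $((n % 2)) -eq 1 ]; then
            echo "planned CPU:    7u$n"
        else
            echo "reserved:       7u$n"
        fi
        n=$((n + 5))
    done
    echo "Limited Update: 7u$((lu + 20))"
}

slots_between 40
```

That is a lot of empty numbering space to set aside for emergency fixes, which tells you what Oracle themselves expect.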

Buy Oracle products. Because Larry Ellison needs a bigger yacht, more CA beachfront property, or another Hawaiian island. It's the right thing to do, because Larry is a caring kind of guy.

6/4/13 Update. This just gets better and better. See Java Security Revisited--Part 1.

Sunday, May 5, 2013

I Really Need a Document Management System

[gregm@feynman ~]$ du -hs $HOME
52G     /home/gregm
[gregm@feynman ~]$ find $HOME -type f -name '*.pdf' | wc -l
[gregm@feynman ~]$ 

That 'working set' is nothing, in terms of storage requirements; it's a speck on terabyte drives. The requirements are so low because there are very few media files. I don't 'do' media, as a rule. I want my favorite entertainment producers to get paid, so that they can keep entertaining me. I buy DVDs, have only ripped MP3s from CDs that I bought (years ago, and still have), etc.

But there are also large numbers of text files (notes and code), email, spreadsheets, etc. That stuff, combined with the PDFs, is important, in the sense that having this stuff on tap, efficiently accessible, is directly related to whether I can make a living at protecting people and the things that are important to them. I'd obviously like to continue to do that, as it's one of the more worthwhile things I can do with my life. And life is all too short.

Currently, I am most concerned with the PDFs. Some small percentage are chapter-by-chapter downloads of books from my Safari account. Those can be recreated, and there are other examples of PDFs that I am not very concerned about. But a large number, were they lost, would be difficult to replace, for a variety of reasons. 
  • An academic has changed positions
  • A commercial entity has ceased operations, deleted old files, etc.
  • A security industry private researcher has lost interest and allowed their site to lapse
  • I do not have data on why that missing file is important, and/or the source
  • Other. And 'Other' is large.
I'm very old-school about having a well-organized file system; I know how my directories are organized, and I'm far from reliant on the file indexing systems that bog so many systems down. Nor am I a fan of various 'tagging' systems; their usefulness seems ephemeral, in that it's mostly confined to the scope of a single project, or a small number of related projects. The tags themselves would likely also be ephemeral, while I am interested in the broad sweep of history, and how these things evolve over time.

The notion of keeping a good mental reference breaks down at 5000 files. Is something filed under privacy, breaking anonymized data, or what?

I need a more formal document control, or library, system. Current approaches seem to revolve around the Semantic Web; the 2012 ACM Computing Classification System is one example. One program I am particularly interested in is Invenio, which has roots at CERN (birthplace of the Web, and home of the Large Hadron Collider), but is now a collaboration involving the Stanford Linear Accelerator Center, Fermilab, and others. Details are on the Invenio project site.

Thursday, March 28, 2013

Plans Wrecked by Internet Drama

My plans for the day were wrecked by Internet Drama. A DDoS attack on Spamhaus made it to the New York Times. Various providers jumped into the discussion with Words of Marketing, etc. This is all fairly typical, with one proviso.

Toward the end of last year, I gave a series of brown-bag lunch talks in Portland. These people didn't have a huge budget, but they were great to work with. They paid drive time (I'm pretty busy, and not willing to write off the opportunity cost), gave me a white board, and didn't limit the discussion to the announced topic.

I got to yack, and the discussions covered a lot of ground. Here are two points, both related to that client, who will remain nameless, for what should be obvious reasons.

You keep saying complexity is the enemy of security. Why?

Because I screwed up; I didn't really say that correctly. Complex is not the same thing as complicated. This is the third time this topic has come up. Think of this in terms of reliability engineering: if you have two components, both of which are 90% reliable, what is the reliability of the composite system?

0.9 x 0.9 = 0.81

And we are on our way to FAIL. This isn't limited to software. Read up on the Challenger disaster, and the systemic failure of internal NASA mechanisms to provide even remotely accurate risk analysis.
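Generalizing that arithmetic: the composite reliability of n serial components, each individually r reliable, is r^n, and it falls off faster than intuition suggests:

```shell
#!/bin/sh
# Composite reliability of n serial components, each r reliable: r^n.

composite() {
    awk -v r="$1" -v n="$2" 'BEGIN { printf "%.3f\n", r ^ n }'
}

composite 0.9 2    # the two-component case from the text: 0.810
composite 0.9 5    # five components: 0.590
composite 0.99 20  # even twenty 99%-reliable components: 0.818
```

Every component you bolt onto a system multiplies in another factor less than one; that is what "complexity is the enemy" actually means.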

What does this have to do with DNS?

One of the branches that those discussions took was about DNS, in the context of things I immediately look for when doing an audit or pen-test. I've done work for a couple of orgs that stuck a DNS server on the DMZ, and pointed everything, including internal desktops, at it.

This is not the best of all possible plans.
  • A publicly-accessible server controls your entire infrastructure.
  • You surrender the ability to mitigate a large percentage of targeted email attacks.
  • You surrender the ability to do important real-time threat analysis.
  • You enable distributed attacks against anyone.
  • You are probably a long way from being able to roll out DNSSEC, should that be in your plans.
If you have an Internet-facing DNS server, it should only provide authoritative resolution. If it isn't your domain, don't answer queries.
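For BIND, an authoritative-only posture looks roughly like the following named.conf fragment. This is a sketch, not a complete configuration, and exact statements vary by version; internal desktops should point at a separate, internal-only recursive resolver:

```
// named.conf options for an Internet-facing, authoritative-only server.
options {
    recursion no;                  // never resolve on behalf of clients
    allow-recursion { none; };     // belt and suspenders
    allow-transfer { none; };      // deny transfers (allow named slaves instead)
    version "unknown";             // don't advertise the BIND version
};

// Answer only for zones you are authoritative for:
zone "example.com" {
    type master;
    file "example.com.zone";
};
```

The zone name and file here are placeholders; the point is the recursion statements, which are what keep your public server from being anyone's open resolver.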

It turns out that while this was covered in one of the brown-bag lunches, it was never fixed. It was going to go into the Q1 budget, but that didn't happen. Here we are at the end of Q1, and it bit them. An innocent mistake. Happens all the time.

That doesn't mean that there is no cost involved with what was essentially an unforced error. They have a capex they had forgotten about, and an opex that they had mostly paid for (my brown-bag talks) but didn't use. Now they will be paying me a bit more to set everything up, write a couple of scripts, document everything, and provide training. And create a reporting system so that managers have some assurance that there is no recurrence.

This is probably going to triple their outlay, not including hardware costs. Another loss, which is hard to evaluate, is the opportunity cost of not having their own people create the solution.

It is obviously useful to have a third party provide a sanity check of your security posture; that is much of the value of an audit. But the training value of building a competent in-house security team is large, and it costs little to capture it.

Monday, March 18, 2013

Attribution: On the Shoulders of Giants

If I have seen further it is by standing on the shoulders of giants.
--  Isaac Newton, 1676

Attribution is a vital thing. One of the more fubar things about 'security researchers' is that they do not always rigorously credit the people who laid the foundation that they are building upon.

If we have attribution, we can trace the evolution of thought in all of science and mathematics. The knowledge that we would impart to others gains context. This isn't a corporate marketing thing, or the current round of patent fights; it's more about how your children will lead better lives. Attribution is important, because it is the scaffold upon which we build, and can trace, the most important advances the human race has ever achieved.

No, I am not writing about the development of brewing. Yes, beer is important. But I'm not going there right now. It's more important that I say something that should probably determine whether this thing is worth the time it takes you to read it.

I have a couple of rules about how I will post, the first of which is that I don't name people or clients without permission, and the second is proper attribution. I don't believe in failing at either.

I have permission to admit I worked a gig at Fiserv. I negotiated that going in, as it was obviously going to be a fairly long-term thing, and I was once an employee there, doing a lot of security work involving Linux, HP-UX, cloud-facing servers, etc.

Where I can do proper attribution, I will. In some cases, I have permission from Fiserv. In other cases, I was on other gigs, and absolutely do not have permission to do any traceable attribution. That was also understood from the start, and anything related to that work can only be discussed in theoretical terms. I wish it were otherwise, but this is the fubar world we live in.

Bye for now. There is yard work to be done, this is the Northwest, and the weather forecast (fubar that it is) says rain for the next few days.

Sunday, March 17, 2013

Hello, and welcome to fubarnorthwest

Hello, and welcome to fubarnorthwest. If you don't get the title, that's because it's fubar; a term anyone who has any business here already understands. The northwest bit got tacked on because that's where I live, I love it, and all the cool names were already taken.

I want to talk about a lot of things, mostly related to how we should fix the huge problems in computer-related security. I don't know if this venue will meet my needs. In one sense, I know it won't: personal privacy is security personalized, and Google is dedicated to invading privacy. It's a big piece of their business model.

I am a big fan (as much as I am a fan of anything in a fubar world) of irony.

If we lay aside the marketing, and attempt to get to the truth of things, there is little in this world that isn't fubar. Not operating systems, the smartphone you favor, your best-loved camera or programming language, your software repository or your system of government.

It's all fubar to most of us, because we all have different priorities.