Sunday, January 26, 2014

Are Administrative Security Controls Really Effective?

Administrative controls certainly have their place. For instance, I wish that cryptographic keys were generally generated and installed on user systems, wherever appropriate, as part of 'first day on the job' policy. "Wherever appropriate" is the hard part. Does a new hire need ssh in addition to email? What is the cost of placing such a system in front of HR? Should it be done when traditional HR has quite enough on their plate, simply ensuring that the appropriate forms are signed? Or should it be done after that potentially overwhelming first-day HR experience, when the new hire joins their team? I would argue for the latter, as required system access will likely be less accurately known at the HR level. That does not have to be the case, but any other approach involves an additional expense associated with maintaining what has become a critical in-house application, for no obvious reason.

These are managerial policy issues. They are notoriously difficult to drive into code, and this is only partially because worthwhile approaches (based on the simplicity of good parsers reading flat text files) are not widely implemented. Separation of code and data is always a security win, if for no other reason than that the data persistence layer is far easier to audit.

Three notes on the above paragraph:


  1. A 'good parser' is one against which a thorough set of tests has been written, including corner cases such as a single name (a so-called mononym) being used for a person. This is common in Indonesia, the world's fourth most populous country.
  2. "Easier to audit" enables modern continuous audit techniques by requiring fewer system resources. Lower CAPEX is a Good Thing.
  3. Pluggable data persistence layers are preferred. Flexibility is more cost-effective in the long run. 
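To make the 'good parser' point concrete, here is a minimal sketch of a flat-text-file record parser with the mononym corner case handled explicitly. The field layout (colon-separated name and role, comma-separated name parts) is an assumption for illustration, not a real format from any system discussed here.

```python
# Hypothetical flat-file record parser. Assumed layout, one record per line:
#   "Family, Given:role"  or, for a mononymous person,  "Name:role"
def parse_record(line):
    """Parse a single user record; tolerate a single (mononymous) name."""
    names, _, role = line.strip().partition(":")
    parts = [p.strip() for p in names.split(",")]
    if len(parts) == 1:
        # Mononym: one name field only, as is common in Indonesia.
        return {"name": parts[0], "role": role}
    return {"family": parts[0], "given": parts[1], "role": role}

print(parse_record("Suharto:operator"))
print(parse_record("Doe, Jane:admin"))
```

The corner case lives in one obvious branch, which is exactly what makes a flat-text layer easy to test and audit.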

Administrative controls are only sometimes effective


A reasonable example would be passwords. A common approach is to promulgate complexity requirements based upon exponentiating over password length and character set: such and such a length, require numeric, require punctuation, etc. Let us set aside the fact that such complexity schemes do not often account for advances in modern cracking dictionaries (which seem likely to be more important than parallel cracking via GPU).
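The 'exponentiating' arithmetic behind such policies is easy to sketch. This is the naive uniform-random upper bound only (real users choose far more predictably, which is why dictionaries matter); note that adding two characters of length beats adding an entire punctuation set.

```python
import math

def keyspace_bits(length, charset_size):
    """Bits of keyspace if every character were chosen uniformly at random.
    An upper bound only; human-chosen passwords fall far short of it."""
    return length * math.log2(charset_size)

# 8 chars of mixed-case alphanumerics: 26 + 26 + 10 = 62 symbols
print(round(keyspace_bits(8, 62), 1))    # ~47.6 bits
# Requiring punctuation (~32 more symbols) vs. requiring two more characters:
print(round(keyspace_bits(8, 94), 1))    # ~52.4 bits
print(round(keyspace_bits(10, 62), 1))   # ~59.5 bits
```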

An administrative 'control' is often as simplistic as a policy forbidding writing a password down. One consequence of this is that the classic Post-It Note stuck to the side of a display has been driven underground, to the extent that password lists have long since migrated to being taped to the underside of the keyboard. There is probably some sort of joke that could be developed from this, related to hiding the problem.

My views on passwords are somewhat heretical, and this was just an example. I'm not going to write about them here.

Bottom line


In common circumstances, the intersection of technical and administrative security controls is far more important than either one, considered alone.










Friday, January 10, 2014

Non-Slacker Friday

Some people can do humor, or go off on weird tangents on Friday. The canonical example of the latter may be Bruce Schneier's Friday Squid Blogging. I can't do this.

First Off

It is fascinating to watch companies and/or governments try to drop bad news on Friday.

Today, for example, we find that Target seems to be attempting to beat the 2007 TJX record, which affected 45.7 million people. For the 2009 Heartland Payment Systems breach, I don't have any data relating the number of compromised records (130 million) to the number of people affected. Heartland also tried to drop this bomb on a day when many people in the U.S. were distracted--Inauguration Day. The new Target information in the above link is that more data fields were compromised than previously known/revealed, and that the number of "guests" (also known as customers) could top 100 million.

Somewhere, statistics people will probably be arguing over whether a record is defined as a complete row in a database, or each field within a row. To me, the answer is obvious--each field. CVV data (that 3-digit security code on the back of a card), for example, will be stored as a field. CVV data has an obvious impact on the risk unwittingly assumed by customers, and it was compromised, to at least some degree.

I think we can all agree that this has been a Bad Thing, so let's just go with that. Particularly as Target news is sensational, but not actionable.

More interesting is how, in an age of cheap illicit attack parallelism, SnapChat might have thought that rate-limiting in their API was any sort of defense against massive data loss. Stolen credit card numbers cost almost nothing in bulk and can be used to spin up Amazon VMs, and renting botnets is also cheap. How, then, is rate-limiting a component of a serious defense?
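The arithmetic makes the point: a per-source rate limit is simply divided by the number of sources an attacker controls. The figures below are illustrative assumptions (a 10-digit phone number space and a made-up per-source limit), not SnapChat's actual parameters.

```python
# Back-of-envelope: per-source rate limiting vs. cheap attack parallelism.
# All rates and source counts here are illustrative assumptions.
def days_to_enumerate(keyspace, per_source_rate_per_sec, sources):
    """Days to sweep a keyspace at a fixed per-source rate, across N sources."""
    total_rate = per_source_rate_per_sec * sources
    return keyspace / total_rate / 86400  # seconds per day

PHONE_NUMBERS = 10 ** 10  # upper bound on 10-digit numbers
print(round(days_to_enumerate(PHONE_NUMBERS, 10, 1)))      # one host: ~11574 days
print(round(days_to_enumerate(PHONE_NUMBERS, 10, 10000)))  # modest botnet: ~1 day
```

Any defense whose cost to the attacker is linear in rented hosts is not much of a defense when hosts are nearly free.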

From what I understand, their entire business model is based around selfies, sexting, or whatever the hell pop culture currently calls sending provocative photos, which are only briefly available. Being a guy, I could wish I got more of those. OTOH, I'm glad I don't hang out with women who don't realize that screen capture software exists. Or (running the risk of turning this into Friday humor) the same tool they used to capture and upload that image (a freaking camera) could be used to capture and upload it on the other end.

I'm going to bottom out on being glad I don't hang out with silly people. Worse yet, SnapChat turned down a $3 billion offer from equally silly people at Facebook. Bad call, founders. But the SnapChat story is at least somewhat actionable, in that you can look for, and reevaluate, any instances of defense by rate-limiting within your organization. Logging systems are likely a good place to start, though they are also likely to contain an example of rate-limiting as a useful tool: a means of preventing logs from filling with duplicate entries.

Secondary Point

In a rich news week, what may be more actionable is the new extent of cooperation between Red Hat and the CentOS project. I was most interested in reading about it from the CentOS perspective. Most of the relevant information is linked from there, including fake FAQs (not frequently asked questions, just the intended message) from both parties.

Some time ago, I gathered and analyzed time-series data on security alerts and updates from CentOS. I found a significant delay, and a correlation to dot releases. It seemed to me that the then handful of people at CentOS were having trouble keeping up with security issues when they were also under pressure to get the next dot release out the door.

Those are old data. I probably have them on a backup, or at least a plot generated from them. But they are probably not relevant to many. The landscape has changed in that significant resources have become available to CentOS. It is probably easier to regenerate the data than to find the old, and analysis tools have become significantly better.

Tertiary Point

I am way jealous of the people that can do tangents on a Friday. I am usually slammed. Part of that is just project planning; people tend to assign due dates for milestones by the end of a work week. Nice as it would be to rant about that, I do it to myself all the time. That is at least an oops, if not a FUBAR.

This time of year, it's really hard to stay organized. In addition to the news-driven sort of thing, like the above, there are things to do. In no particular order, for what I laughingly refer to as my weekend:
  • planning CAPEX for the coming year
  • looking at some old code that uses week numbers (not a coincidence) for ISO 8601 compliance
  • work on a security model needed to move a database from straight PostgreSQL to SEPostgreSQL
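On the week-number item: ISO 8601 week numbering is a classic source of bugs, because the ISO year can differ from the calendar year at the year boundary (week 1 is the week containing the year's first Thursday). A quick sketch of the gotcha, using Python's standard library:

```python
from datetime import date

# ISO 8601: week 1 is the week containing the year's first Thursday,
# so the ISO year can disagree with the calendar year near January 1.
print(tuple(date(2014, 1, 10).isocalendar()))   # (2014, 2, 5): week 2, Friday
print(tuple(date(2014, 12, 29).isocalendar()))  # (2015, 1, 1): ISO year is already 2015
```

Any code that pairs an ISO week number with the plain calendar year will misfile dates like 2014-12-29; the ISO year must travel with the ISO week.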

Quaternary Point

There is no quaternary point. Things are busy enough.









Wednesday, January 8, 2014

SSH keys: An Argument for Continuous Audit

Suppose you see this, when connecting to a critical system:

$ ssh [redacted]
The authenticity of host 'redacted (IP # redacted)' can't be established.
ECDSA key fingerprint is b1:fb:be:19:ef:2e:3e:b9:e8:da:f5:72:27:3b:41:b4.
Are you sure you want to continue connecting (yes/no)?

I believe that the correct response to that prompt is, in many cases, 'no'.

Let us set aside common problems
  1. even security specialists (who should definitely know better) tend to burn through this message, and just connect
  2. even security-conscious system administrators often fail to appreciate that key fingerprints have to be communicated to users out-of-band, or have no effective means of doing so
  3. Point 2 is an artifact of SSH having no concept of a third party vouching for the authenticity of a host certificate, as HTTPS does (though there are huge problems with this as well)
What it comes down to is that part of the trust basis lies in files within /etc/ssh. Public and private keys, server configuration, and key revocations all live there. Files in $HOME/.ssh serve as a modifier. $HOME/.ssh/known_hosts is as critical as any other file in this system. Perhaps more so, as known_hosts replaces the entire functionality of third parties guaranteeing (unreliably, in the case of TLS) that a server is, in fact, the machine you think you are connecting to.
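For out-of-band verification to work, both sides need to compute the same fingerprint. The classic colon-separated form shown in the prompt above is an MD5 digest over the raw key blob (the base64 field in known_hosts or a .pub file). A minimal sketch; the key blob below is a made-up placeholder, not real key material:

```python
import base64
import hashlib

def md5_fingerprint(b64_key_blob):
    """Classic colon-separated ssh fingerprint: MD5 over the decoded key blob
    (the base64 field from a known_hosts entry or *.pub file)."""
    digest = hashlib.md5(base64.b64decode(b64_key_blob)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Placeholder blob for illustration; a real one is the second field of a .pub file.
blob = base64.b64encode(b"example-key-material").decode()
fp = md5_fingerprint(blob)
print(fp)  # 16 colon-separated hex bytes, same shape as the prompt above
```

The point of computing it independently is that a fingerprint read to you over the phone, or published on paper, can be checked against the one derived from the key the server actually presented.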

So, an ssh user correctly or incorrectly trusts that the connection target is correct, and now, in the light of recent NSA revelations, we have to think about crypto, which leads us to the ECDSA key fingerprint seen above. Bruce Schneier believes that elliptic curve crypto was deliberately weakened by NSA, at least in a TLS context. Like most security practitioners, I believe him to be trustworthy, in the sense that, while he might be incorrect, he is extremely knowledgeable and would not intentionally mislead anyone. OpenSSH originates from Theo de Raadt's OpenBSD project, and I regard Theo de Raadt as trustworthy for the same reasons.

Practitioners might then look at changelogs for their SSH system, or do code reviews. This is an obvious win for transparency, versus binary-only systems, but we do not often have time to do careful evaluations, and make carefully calibrated decisions.

The greatest safety, then, would lie with systems that allow configuration files to be pushed. This is a powerful argument against ad-hoc deployments, if another was needed. It does not, however, provide insight into which machines may have been vulnerable, and for what span of time.

It is difficult to know where critical data resides; my take is that in most enterprises, this is not a solvable problem. A lack of knowledge of vulnerability windows only compounds the issue, and only continuous audit techniques can generate the trustworthy data required to even begin to evaluate risk.