Friday, June 12, 2015

Does the Navy Buy Vulnerabilities Too?

Commentary::Disclosure
Audience::All
UUID: e62e4ab8-ad7e-449d-9a4b-d2f2f2dd459e

This morning, I happened across this link (dead as I write this). It goes to the FedBizOpps.gov site, and was originally for Solicitation Number N0018915T0245, titled 70--Common Vulnerability Exploit Products. I happened to open it in another browser because I was curious about a rendering problem, which can be seen in the quoted text below. I suspected it was due to the common mislabeling of content as charset=iso-8859-1 in HTML files.

By 2015-06-12 0715 PDT it was gone; a reload in that browser landed on a search page.  Back in the original browser, I saved a copy of the original solicitation as usn_exploit_request-1.pdf (229 KiB).

For a very few minutes it could be found by solicitation number from that search page, though the link presented did not do anything when clicked. That result became usn_exploit_request-2.pdf (131 KiB).

Within a very few more minutes it had disappeared from the search, and could no longer be found at all, by solicitation number or title, even when the search included both active and archived documents. I included archived documents purely because I thought that, even though it was well before the original archive date, perhaps the request had been filled and the document archived early. That result became usn_exploit_request-3.pdf (183 KiB).

It seems to have been simply deleted. There are many reasons that this might happen. Perhaps too many news sources had discovered it, it was causing an unfavorable reaction, and it was pulled for simple PR reasons. Though one takeaway from this is yet another lesson in not assuming that government archives are complete.

For those who don't want to look at PDFs, here is some of the relevant text, emphasized, with a bit of commentary from me.

This is a requirement to have access to vulnerability intelligence, exploit reports and operational exploit binaries affecting widely used and relied upon commercial software. 

In a bit, they become rather more focused on exploits than on the defensive side of things.

These include but are not limited to Microsoft, Adobe, JAVA, EMC, Novell, IBM, Android, Apple, CISCO IOS, Linksys WRT, and Linux, and all others. 

So, all of the most commonly-used operating systems, including mobile; an interest in storage (and possibly VMware); and some common networking gear (including a wireless router commonly deployed in homes, small branch offices, etc.). As well as those long-time security horror stories, JAVA [sic] and Adobe.

The vendor shall provide the government with a proposed list of available vulnerabilities, 0-day or N-day (no older than 6 months old). This list should be updated quarterly and include intelligence and exploits affecting widely used software. The government will select from the supplied list and direct development of exploit binaries.

So, either 0-day, or at least not too stale.

Completed products will be delivered to the government via secured electronic means. Over a one year period, a minimum of 10 unique reports with corresponding exploit binaries will be provided periodically (no less than 2 per quarter) and designed to be operationally deployable upon delivery.

This qualifies as high volume.

Based on the Governmentâ€TMs direction, the vendor will develop exploits for future released Common Vulnerabilities and Exposures (CVEâ€TMs). 

An obvious flaw here is that not even remotely all vulnerabilities ever receive a CVE number. Assignment of a CVE number, to the extent that it has any effect at all, would tend to decrease the number of vulnerable systems, shortening the useful life of the vulnerability that the Navy had just purchased. Naval armament apparently includes footguns. Also, here is that rendering flaw.

Binaries must support configurable, custom, and/or government owned/provided payloads and suppress known network signatures from proof of concept code that may be found in the
wild. 

Suppress is a poor choice of words. What they are after are exploits that don't present a signature that is already known to suppliers of Network Intrusion Detection Systems (NIDS). I am curious about why host-based antivirus and IDS (HIDS) isn't mentioned.

Innocent? Incompetent? Generic FUBAR?

This could be completely innocent; even an interest in 0-day or low n-day exploits may be an effort to provide their penetration testers with better tools. In the few contests between government employees and the private sector that I am aware of, feds of any stripe were trounced.

So, why was it pulled? Bad PR? Poorly written? Even a mistaken project approval? These are all possibilities, but it seems just as likely that it was a coordination issue. That could take a couple of forms. One is purely financial: duplicate efforts between government departments might well lead to the same exploit being purchased, perhaps from two different vendors. 

The second form involves operations. Suppose that the Navy is unknowingly using a given vulnerability against a target of value x. Meanwhile, some random three-letter agency is using the same vulnerability to collect against a target of value 10x. If the Navy were detected, and a NIDS signature is created, the random three-letter agency could lose access.

Whatever the reason, it is not a sterling example of government competence. Someone needs to go shine their Cyber or something.

Tuesday, May 26, 2015

Anti-tracking May Lower Temperatures, and It May Not Matter

Commentary::Reliability
Audience::All
UUID: c078319b-b156-404f-a48b-1e639dd734b6

Earlier today, in the midst of an ongoing project, I noticed that

  1. The temperature of a single physical CPU was running at 104° F; about 10° hotter than expected.
  2. There were a large number of Firefox tabs open (40-odd), as is typical when abnormally high temps are seen.

My first reaction was my normal knee-jerk: This is totally FUBAR. The extent of Web tracking creeps me the hell out, and long experience with hardware has led to a lot of exposure to the notion that increased temperatures lead to decreased service life.

Knee-jerk reactions seldom lead to any good outcome.

Step one was to take a quick shot at verifying the problem. Since Firefox 35, we have been able to set privacy.trackingprotection.enabled=true in about:config. I had done that the day before (before the problem was noticed), but had not restarted Firefox. This time I bookmarked all pages, restarted Firefox, and reloaded all tabs. Temps returned to normal. Though based on a single datum, I may be able to assign a provisional cause. Go, me! Possible progress! I did some ancillary things, such as noting before-and-after memory usage (in case the kernel scheduler was part of the problem), etc.
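Capturing those before-and-after temperature readings is easy to script, which beats eyeballing a monitor widget. A minimal Python sketch, assuming a Linux system exposing sensors under /sys/class/hwmon (sensor names and paths vary by hardware, so treat this as illustrative):

```python
from pathlib import Path

def read_temps(base="/sys/class/hwmon"):
    """Return {sensor_label: degrees_C} for every hwmon temp input found."""
    temps = {}
    for inp in Path(base).glob("hwmon*/temp*_input"):
        try:
            millideg = int(inp.read_text().strip())
        except (OSError, ValueError):
            continue  # sensor may be unreadable; skip it
        label_file = inp.with_name(inp.name.replace("_input", "_label"))
        label = (label_file.read_text().strip()
                 if label_file.exists() else f"{inp.parent.name}/{inp.name}")
        temps[label] = millideg / 1000.0  # the kernel reports millidegrees C
    return temps

if __name__ == "__main__":
    for label, deg_c in sorted(read_temps().items()):
        print(f"{label}: {deg_c:.1f} C ({deg_c * 9 / 5 + 32:.1f} F)")
```

Run it before and after the change, diff the output, and the single-datum problem at least becomes a repeatable-datum problem.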

None of that really mattered, though. In the greater scheme of things, it seems likely to be irrelevant. At the very least, a lot more open research is needed.

Temperature

First off, the widely-taught inverse correlation between temperature and lifetime may be entirely bogus over large domains, and seems highly likely to be far more nuanced than is often taught.

Perhaps it matters in, say, applications related to RF power systems, such as radars and electronic countermeasures, but I haven't worked in those fields in years. Though messing up fire-control radars was tons-o-fun. I care a lot more about, to use an overly-generic term, IT.

HPC centers, the hyper-scale service providers, and large enterprises, all care about bills due to power. Supply costs, conversion efficiency, what is devoted to heat dissipation, thermal effects on the longevity of vast fleets of servers, etc.

Google does not provide OpenSource code at anything like the rate that they consume it, but they do provide landmark papers, which is at least partial compensation. Failure Trends in a Large Disk Drive Population (2007) was such a paper, and it implied that increased temperatures enhanced longevity.
Temperature Management in Data Centers: Why Some (Might) Like It Hot (SIGMETRICS12, University of Toronto) extended those results to DRAM, set some boundary conditions, etc.

In the same year (2012), No Evidence of Correlation: Field Failures and Traditional Reliability Engineering was published, but I have not digested that yet. It's corporate, and I've only recently discovered it. Since I'm interested in the intersection of security and traditional reliability engineering (it's the 'A' in the CIA security triad, after all), you might want to read it as well.

Obviously, this is nothing like a comprehensive literature search. But I really doubt that simplistic schemes purporting to draw an obvious inverse correlation have any merit.

Tracking

Without taking extraordinary measures, anyone using the Web is going to be tracked. Usually very effectively, because tracking was baked into the Web, from protocols to economics, from the start.

Unfortunately, this post has gone on for too long. Not in terms of what should be covered, but in what I have time to cover. It's 1915, there are still Things To Do, and it is already going to be a late night.

Some things are going to be left for a possible future post. I tend to want to leave this sort of thing to more consumer-oriented security sites, where 'Don't Run Adobe Flash' might possibly help someone. An obvious problem is that many of the consumer sites do not cover tracking issues, and some of those that do are either biased or intentionally misleading. That sucks, but it isn't as if I am going to write a definitive post, complete with an economic history, this evening.

Wednesday, May 13, 2015

Open Thread: Is There Any Point in a Security Blog?

Commentary::Internals
Audience::All
UUID: 1af6f74e-015a-4cc6-a668-181a083b1850

Earlier today, I published post #101 since 2013-03-17. A bit of a milestone, I guess, though I don't pay much attention to that sort of thing; I totally missed #100.

It does bring up a bit of a question, though. Some time ago, I mentioned that I wanted the date of publication right up top, where viewers would immediately see it. Because information gets stale rapidly. Arriving on a blog post from a search engine, reading some lengthy post, and then discovering that it is five years out of date (if you can discover it at all) is FUBAR.

It is even more FUBAR if you consider how many servers may have been incorrectly configured due to dated information, etc. This is one of several reasons that blogs, particularly security blogs, and most definitely this one, suck. They are little, if at all, better than the security trade press.

Here's the thing. At 100 or so posts, I can still maintain a mental image of what I have written in the past. I can go back to previous posts and post an update.

This will not scale. What's more, I have an idea of posting about common themes (things that the security community might do better) that might conceivably have a greater impact. If I were to become successful at that, the specific content of individual posts on a given topic (log analysis comes to mind -- I could go on about that) is going to blur together. Success at one goal seems likely to lead to failure at another.

But I can't really set aside a block of time each month and delete the old stuff. First off, time is scarce. Second, I would break links from more-recent posts to what has become background information.

A blog seems to not be an appropriate tool. A wiki, or a document series on GitHub, might be more useful. Or perhaps using this blog to announce revisions to either. The thing is, there is a critical mass at which a community forms, feedback is received and acted upon, etc. A rule of thumb seems to be that perhaps one in a thousand blog viewers will comment. This blog gets a few hundred visitors per month, so it seems unlikely that a critical mass will ever be reached.

Perhaps I am wrong about this, and I just needed to announce an open thread. OK. Done and dusted. I have my doubts, but the idea has to be given a chance, if only to give potential community members a voice in describing something that might better fit their needs.

A SOHO Router Security Update

Commentary::Network
Audience::All
UUID: 6cb54b70-6f80-4959-bb8b-c8d20fc07e93

In April, 2014 I published Heartbleed Will Be With Us For a Long Time. One point of that post was the miserable state of SOHO router security. I referenced /dev/ttyS0 Embedded Device Hacking, pointing out that /dev/ttyS0 has been beating up on these devices for years. If you don't feel like reading my original post, the takeaway from that portion of the post is as follows.
Until proven otherwise, you should assume that the security of these devices is miserable. I have private keys for what seems to be 6565 hardware/firmware combinations in which SSL or SSH keys were extracted from the firmware. In that data, 'VPN' appears 534 times.
The database was hosted at Google Code, which Google has announced will be shutting down. I am interested in the rate at which embedded system security is becoming worse (as it demonstrably is) and meant to urge /dev/ttyS0 to migrate, if they hadn't already done so. I wanted the resource to remain available to researchers. Google Code doesn't seem to provide (at least in this case) a link to where migrated code might have gone, but searching GitHub turns up four repositories. Apparently I am not the only person interested in the preservation of this work, and the canonical /dev/ttyS0 repository is still available.

/dev/ttyS0 also has a blog. Visiting that today, I find that they have recently been beating up on Belkin and D-Link. That's a bit sad, because in simpler times, I carried products from both of these vendors in my hardware case.

There is no room for sentimentality in this business. But there is room for keeping track of trends, gazing into an always-cloudy crystal ball, trying to extrapolate, and spotting emerging threats. Sometimes that is ridiculously easy; I hereby predict:

a) the Internet of Things will be a source of major security/privacy breaches in 2015 [1]
b) consumers will neither know nor care, in any organized manner
c) businesses will continue to buy 'solutions' that are anything but

In short, things will continue to get worse, at an increasing rate, as they have always done.

[1] I often tell a simplistic story (to non-practitioners) about how I came to be interested in security and privacy, equating the two as a simple scaling matter. Privacy is security on a small scale, and vice versa. That is not actually true; there are technical differences, down to the level of which attacks are possible, let alone which matter. But that is a whole different post.


Thursday, May 7, 2015

Sharing is Complicated

Commentary::Internals
Audience::All
UUID: bd74c00b-02cd-42b4-8d62-514dfab4b217

There are a lot of things I want to share, from images to code. Roadblocks are often unexpected, and can be weird as hell, e.g. file-naming issues with my camera that began at the same time that I modified the copyright information that is stamped into EXIF data. The solution to that probably involves adopting something like the UC Berkeley CalPhotos system (http://calphotos.berkeley.edu/), and writing a bit of code to support a new pipeline. Also known as a workflow, and which term is used is suggestive of many things. But I digress. Most popular articles (and at least some software) related to image storage and retrieval are overly simplistic. Duh. In other exciting news, the Web has been found to be in dire need of an editor.

Sharing documents (specifically including code) is also an issue, and one that is a bit more important to me at the moment.

I don't want to get into the version control Holy Wars. Use git, mercurial, subversion, or even one of the proprietary systems. Whatever. If I had to guess, it would be that how well you can use the tool will in most cases outweigh the power (and idiosyncrasies) of the tool.

That said, this is about GitHub, because this post is about sharing.

Github suffers, periodically, from DDoS attacks, which seem to originate from China. I say 'seem to' because attribution is a hard problem, and because US/China cyber-whatever is increasingly politicized, and this trend is not going to end any time soon.

Points to Ponder

a) Copying of device contents as border crossings are made. There have been State Department warnings on the US side of the issue, but at least one security actor, justly famous for defeating TLS encryption with the number 3 (that is not a joke, search on Moxie Marlinspike), has been a victim as well. There is some question as to whether my systems could be searched without a warrant, due to my proximity to an international border. Nation-states act in bizarre ways, the concepts of 'truth' and 'transparency' seem to be a mystery to national governments, and I do not regard it as impossible that the US would mount a DDoS on GitHub, if a department of the US government thought it both expedient and deniable.

b) Is China a unitary rational actor? On occasion, acts of the Chinese government seem to indicate a less than firm grasp of what other parts of the government are doing. A culture of corruption is one issue, but there are others, such as seeming amazement at adverse western reactions to an anti-satellite (ASAT) missile test back in 2007. Which was apparently quite the surprise to western governments, and makes me question what all of this NSA domestic surveillance effort really accomplishes. I won't digress into that can of worms, other than to note that there is much evidence suggesting that the US may not be a unitary rational actor, either.

Circling Back to GitHub

The entire point of a distributed version control system, of whatever flavor, is availability. Yet there are trade press stories dating back a couple of years, at least, about widespread outages due to DDoS attacks. The most recent one that I am aware of was in April of this year. In every case, much panic and flapping of hands ensued. Developers couldn't work. Oh noes!

That rather blows the whole point of GitHub out of the water, doesn't it? The attacking distributed system beat up on your distributed system. Welcome to the Internet Age, cyber-whatever, yada yada yada. Somewhat paradoxically, a good defense involves more distribution, and not allowing GitHub to be a sole point of failure.  

The problem is pipelines. Or, again, workflows. A truly resilient system needs more than something that has demonstrably had accessibility issues for years, and the problem is two-fold.

1) There is no fail-over.
2) The scripts that drive it all tend to be fragile.

It is entirely possible to build a robust system, hosted in the DMZ or in the cloud, as a backup to GitHub. Most of this is just bolting widely available Linux packages together, and doing the behind-the-scenes work. With an added component of writing great doc; the system will only be exercised when things have gone to hell, and everyone is stressed. If there were ever a time when great doc were a gift from $DEITY, this would pretty much be it. Because Murphy is alive and well, so some periodic fail-over test (you do that, right?) probably got skipped for some reason.
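One low-effort piece of such a system is refusing to let any single remote be a sole point of failure for your pushes. A minimal Python sketch of push fail-over, assuming a repository configured with a primary remote plus an in-house mirror (the remote names here are hypothetical, and the git invocation is injectable so the logic is testable without a repository):

```python
import subprocess

def git_push(remote, branch="master"):
    """Push a branch to one remote; True on success."""
    result = subprocess.run(["git", "push", remote, branch])
    return result.returncode == 0

def push_with_failover(remotes, push=git_push):
    """Try each remote in order; return the name of the first that accepts
    the push.

    Raises RuntimeError if every remote fails -- at that point a human
    should be paged, rather than silently losing the backup.
    """
    for remote in remotes:
        if push(remote):
            return remote
    raise RuntimeError("all remotes failed: %s" % ", ".join(remotes))

if __name__ == "__main__":
    # 'origin' is GitHub; 'mirror' is the in-house backup. Both hypothetical.
    print("pushed to", push_with_failover(["origin", "mirror"]))
```

A fancier version would push to every remote rather than stopping at the first success, but even this much means a GitHub outage degrades you instead of stopping you.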

At this point I am going to be polite and just mention that the DevOps community might do a bit more work in getting some best practices information to the public. If GitHub is more important than just free hosting (and it may not be, for completely valid reasons) please build an adequate system. It will save you from having to publicly whine about how your distributed system did not turn out to be resilient.


Monday, April 20, 2015

Exploring System Data: Use Anything but bash.

Recommendation::Language
Audience::Intermediate
UUID: 4e163e7c-ec63-430e-83e2-605e9df95526

In a Gmail conversation related to changes to the Linux kernel, I asked whether anyone still used gnuplot, which was used in the example. Because one of the first things you do when exploring data is to look at the distribution. Duh.

Of course, I am sure that gnuplot is still in constant use. People don't scrap production systems simply because something is more fashionable. Or they shouldn't, anyway. The math is not favorable.

As a side note, I really need to take a decision on how I want to display math on this blog.

I started a project related to data analysis using some old-school techniques, all based around shells. Shells can be a win for answering questions such as, "How is this new application changing my system?" That can be important. I've seen Web application servers deployed before the location and content of log files was known, much less characterized at a level of, "What sensitive information might be written if the log level is DEBUG?"

Shells are fine for that sort of fast initial cut. The problem is that people don't want to throw that code away. They keep writing one more grep statement, or whatever. My personal alarms tend to ring at arrays. If the system becomes complex enough that I need arrays, I am going to question the wisdom of doing it in a shell.

  • They aren't POSIX, so you become wedded to one particular shell. Want to use dash instead of bash? Sorry, but you can't.
  • You can't pass arrays to functions, if you need to do something more complex than loop over them. Even for that, you are probably going to use a reference. Modify them? Sorry, but you can't.
  • You can't even take a slice of an array. Sorry, but you can't.
  • What stop-gaps exist for dealing with arrays, or even faking them if they aren't available in your shell, tend to use 'eval'. Which adds a whole new layer of potential security issues. Sorry, but you really shouldn't.

Shell arrays don't do anything more complex than map integers to strings. Except in the case of bash associative arrays, which are a newer, shinier, and deeper can of worms. The point is that the most advanced data structures available in shells are not really suitable for building software with any sort of complexity.
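For contrast, everything in the list above is trivial in a general-purpose language. A small, purely illustrative Python sketch:

```python
def double_all(items):
    """Mutate a list in place -- something a POSIX shell function can't do."""
    for i, value in enumerate(items):
        items[i] = value * 2
    return items

temps = [94, 96, 104, 101]

# Slicing: built in, unlike in any shell.
hottest_two = sorted(temps)[-2:]

# Passing the array to a function, and modifying it there: also built in.
double_all(temps)

print(hottest_two)  # [101, 104]
print(temps)        # [188, 192, 208, 202]
```

No eval, no reference tricks, no wedding yourself to one particular shell.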

I pretty much won't start down that slippery path any more, and I hope that you won't either. I tend to use Python. If you prefer Ruby, have fun. It's just too slow for me, but it's also widely used in the security community, including in the Metasploit framework.

There is value in knowing pretty much any language, especially in the security field, if for no other reason than to know how problems with them can be exploited. That is not an argument for falling into the same trap: eval'ing something, and wedding yourself to sanitization problems, because you pushed the language too far.

2015-04-23 Addendum

The power of the shell is seductive. I still use it all the time. Moments ago, on a Linux machine:

# ls -lh /var/log/secure
-rw-------. 1 root root 8.3M Apr 23 13:24 /var/log/secure
# wc -l /var/log/secure
69397 /var/log/secure
# grep hddtemp /var/log/secure | wc -l
69319
#

But this is not a mechanism for monitoring log growth. I can immediately see the log file size, that (non-SELinux) permissions are correct, and that this log is mostly about monitoring a drive temperature.

The problems will surface when I try to reuse these commands: adding -Z to /usr/bin/ls to show the SELinux context, finding lines that aren't about hddtemp, etc. But in scripts, to start with, you should not parse the output of /usr/bin/ls. stat(1) is your friend (and don't forget to supply, and appropriately quote, a format string).
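The same advice holds once the spot-check graduates to a script: ask the filesystem directly instead of scraping ls output. A minimal Python sketch using only the stdlib (the log path is just the example from above; point it anywhere):

```python
import os
import stat

def describe(path):
    """Return (size_bytes, mode_string) without parsing ls output."""
    st = os.stat(path)
    return st.st_size, stat.filemode(st.st_mode)

if __name__ == "__main__":
    size, mode = describe("/var/log/secure")
    print(mode, size, "bytes")  # e.g. -rw------- 8704921 bytes
```

os.stat gives you owner, timestamps, and more from the same result; none of it changes format when someone sets a different ls alias or locale.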

The shell gives you a lot of power to spot-check things. Leave it at that, and save yourself some grief.

Friday, April 17, 2015

If You Leave the ACM, Some of Their e-Mail Becomes Spam

Really, people. If I tell my rep that I will not be renewing, most renew-now messages stop. This is not the case with the list servers. The ACM Bulletin, TechNews, and whatever else you may be subscribed to will continue on their merry way.

At a certain point, this becomes spam. If, after all, I regarded those lists as being extremely valuable, I would likely never have left ACM in the first place.

Just saying.

There's a lot going on right now, which is how it comes to be that my first post of the month is on the 24th. Over a month since my last post. So I have to regard ACM mail as something that should have vanished at the end of last month. Just random stuff to which I have to send unsubscribe messages.