Tuesday, May 26, 2015

Anti-tracking May Lower Temperatures, and It May Not Matter

Commentary::Reliability
Audience::All
UUID: c078319b-b156-404f-a48b-1e639dd734b6

Earlier today, in the midst of an ongoing project, I noticed that

  1. The temperature of a single physical CPU was running at 104° F, about 10° hotter than expected.
  2. There were a large number of Firefox tabs open (40-odd), as is typical when abnormally high temps are seen.
My first reaction was my normal knee-jerk: This is totally FUBAR. The extent of Web tracking creeps me the hell out, and long experience with hardware has led to a lot of exposure to the notion that increased temperatures lead to decreased service life.

Knee-jerk reactions seldom lead to any good outcome.

Step one was to take a quick shot at verifying the problem. Since Firefox 35, we have been able to set privacy.trackingprotection.enabled=true in about:config. I had done that the day before (before the problem was noticed), but had not restarted Firefox. This time I bookmarked all pages, restarted Firefox, and reloaded all tabs. Temps returned to normal. Though based on a single datum, I may be able to assign a provisional cause. Go, me! Possible progress! I did some ancillary things, such as noting before and after memory usage (in case the kernel scheduler was part of the problem), etc.
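
For anyone who wants to make that before/after comparison a bit less anecdotal, here is a minimal sketch, assuming Linux and the psutil Python package (which sensors show up, e.g. 'coretemp', is hardware-dependent), that records temperatures and memory use in a single snapshot:

    #!/usr/bin/env python3
    # Snapshot CPU temperatures and memory use, so a before/after comparison
    # is more than eyeballing a sensor applet. Sketch only: assumes Linux and
    # the psutil package; which sensors appear is hardware-dependent.
    import time
    import psutil

    def snapshot(tag):
        temps = psutil.sensors_temperatures()    # empty dict if unsupported
        mem = psutil.virtual_memory()
        print("---", tag, time.strftime("%Y-%m-%d %H:%M:%S"), "---")
        for chip, readings in temps.items():
            for r in readings:
                print("%-10s %-12s %5.1f C" % (chip, r.label or "n/a", r.current))
        print("memory used: %.0f MiB of %.0f MiB" % (mem.used / 2**20, mem.total / 2**20))

    if __name__ == "__main__":
        snapshot("before")
        # Restart Firefox with privacy.trackingprotection.enabled=true, reload
        # the same tabs, let things settle, then run this again and compare.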

None of that really mattered, though. In the greater scheme of things, it seems likely to be irrelevant. At the very least, a lot more open research is needed.

Temperature

First off, the widely taught inverse correlation between temperature and lifetime may be entirely bogus over large domains, and is almost certainly far more nuanced than is commonly presented.

Perhaps it matters in, say, applications related to RF power systems, such as radars and electronic countermeasures, but I haven't worked in those fields in years. Though messing up fire-control radars was tons-o-fun. I care a lot more about, to use an overly-generic term, IT.

HPC centers, the hyper-scale service providers, and large enterprises all care about power bills: supply costs, conversion efficiency, how much is devoted to heat dissipation, thermal effects on the longevity of vast fleets of servers, etc.

Google does not provide open-source code at anything like the rate that they consume it, but they do provide landmark papers, which is at least partial compensation. Failure Trends in a Large Disk Drive Population (2007) was such a paper, and it implied that increased temperatures enhanced longevity.
Temperature Management in Data Centers: Why Some (Might) Like It Hot (SIGMETRICS 2012, University of Toronto) extended those results to DRAM, set some boundary conditions, etc.

In the same year (2012), No Evidence of Correlation: Field Failures and Traditional Reliability Engineering was published, but I have not digested that yet. It's corporate, and I've only recently discovered it. I'm interested in the intersection of security and traditional reliability engineering (availability is the 'A' in the CIA security triad, after all), so you might want to read it as well.

Obviously, this is nothing like a comprehensive literature search. But I really doubt that simplistic schemes purporting to draw an obvious inverse correlation have any merit.
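
None of which is a substitute for looking at actual data. If you happen to have fleet data of your own, checking for the claimed relationship is trivial; here is a sketch, with the caveat that 'drives.csv' and its column names are hypothetical stand-ins for whatever your monitoring actually records:

    #!/usr/bin/env python3
    # Does average drive temperature correlate with failure in a fleet?
    # Sketch only: 'drives.csv', 'avg_temp_c', and 'failed' are hypothetical
    # names, not taken from any of the papers mentioned above.
    import csv
    import statistics

    temps, failed = [], []
    with open("drives.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            temps.append(float(row["avg_temp_c"]))
            failed.append(int(row["failed"]))      # 1 = failed, 0 = survived

    # Pearson correlation between temperature and the failure indicator.
    mt, mf = statistics.mean(temps), statistics.mean(failed)
    cov = sum((t - mt) * (f - mf) for t, f in zip(temps, failed)) / (len(temps) - 1)
    r = cov / (statistics.stdev(temps) * statistics.stdev(failed))
    print("n = %d, r = %+.3f" % (len(temps), r))
    # A clearly positive r (hotter drives failing more often) would support the
    # folk wisdom; the papers above suggest you should not count on finding one.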

Tracking

Without taking extraordinary measures, anyone using the Web is going to be tracked. Usually very effectively, because tracking was baked into the Web, from protocols to economics, from the start.

Unfortunately, this post has gone on for too long. Not in terms of what should be covered, but in what I have time to cover. It's 1915, there are still Things To Do, and it is already going to be a late night.

Some things are going to be left for a possible future post. I tend to want to leave this sort of thing to more consumer-oriented security sites, where 'Don't Run Adobe Flash' might possibly help someone. An obvious problem is that many of the consumer sites do not cover tracking issues, and some of those that do are either biased or intentionally misleading. That sucks, but it isn't as if I am going to write a definitive post, complete with an economic history, this evening.

Wednesday, May 13, 2015

Open Thread: Is There Any Point in a Security Blog?

Commentary::Internals
Audience::All
UUID: 1af6f74e-015a-4cc6-a668-181a083b1850

Earlier today, I published post #101 since 2013-03-17. A bit of a milestone, I guess, though I don't pay much attention to that sort of thing; I totally missed #100.

It does bring up a bit of a question, though. Some time ago, I mentioned that I wanted the date of publication right up top, where viewers would immediately see it. Because information gets stale rapidly. Arriving on a blog post from a search engine, reading some lengthy post, and then discovering that it is five years out of date (if you can discover it at all) is FUBAR.

It is even more FUBAR if you consider how many servers may have been incorrectly configured due to dated information, etc. This is one of the several reasons that blogs, particularly security blogs, and most definitely this one, suck. They are little, if at all, better than the security trade press.

Here's the thing. At 100 or so posts, I can still maintain a mental image of what I have written in the past. I can go back to previous posts and post an update.

This will not scale. What's more, I have an idea of posting about common themes (things that the security community might do better) that might conceivably have a greater impact. If I were to become successful at that, the specific content of individual posts on a given topic (log analysis comes to mind -- I could go on about that) would blur together. Success at one goal seems likely to lead to failure at another.

But I can't really set aside a block of time each month and delete the old stuff. First off, time is scarce. Second, I would break links from more-recent posts to what has become background information.

A blog seems not to be an appropriate tool. A wiki, or a document series on GitHub, might be more useful. Or perhaps using this blog to announce revisions to either. The thing is, there is a critical mass at which a community forms, feedback is received and acted upon, etc. A rule of thumb seems to be that perhaps one in a thousand blog viewers will comment. This blog gets a few hundred visitors per month, which works out to something on the order of one comment every few months, so it seems unlikely that a critical mass will ever be reached.

Perhaps I am wrong about this, and I just needed to announce an open thread. OK. Done and dusted. I have my doubts, but the idea has to be given a chance, if only to give potential community members a voice in describing something that might better fit their needs.

A SOHO Router Security Update

Commentary::Network
Audience::All
UUID: 6cb54b70-6f80-4959-bb8b-c8d20fc07e93

In April 2014, I published Heartbleed Will Be With Us For a Long Time. One point of that post was the miserable state of SOHO router security. I referenced /dev/ttyS0 Embedded Device Hacking, pointing out that /dev/ttyS0 has been beating up on these devices for years. If you don't feel like reading my original post, the takeaway from that portion of it is as follows.
Until proven otherwise, you should assume that the security of these devices is miserable. I have private keys for what seems to be 6565 hardware/firmware combinations in which SSL or SSH keys were extracted from the firmware. In that data, 'VPN' appears 534 times.
The database was hosted at Google Code, which Google has announced will be shutting down. I am interested in the rate at which embedded system security is becoming worse (as it demonstrably is) and meant to urge /dev/ttyS0 to migrate, if they hadn't already done so. I wanted the resource to remain available to researchers. Google Code doesn't seem to provide (at least in this case) a link to where migrated code might have gone, but searching GitHub turns up four repositories. Apparently I am not the only person interested in the preservation of this work, and the canonical /dev/ttyS0 repository is still available.
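
For anyone wondering what 'extracted from the firmware' looks like in practice: once an image has been unpacked (binwalk is the usual tool), the keys are generally ordinary PEM files sitting in the extracted filesystem. A rough sketch of that kind of scan, where 'extracted_fw' is a hypothetical directory of unpacked firmware:

    #!/usr/bin/env python3
    # Walk an extracted firmware tree and flag files containing PEM-encoded
    # private key material. Sketch only; 'extracted_fw' is a hypothetical path
    # to the output of a tool such as binwalk.
    import os

    MARKERS = (b"-----BEGIN RSA PRIVATE KEY-----",
               b"-----BEGIN DSA PRIVATE KEY-----",
               b"-----BEGIN EC PRIVATE KEY-----",
               b"-----BEGIN PRIVATE KEY-----")

    hits = 0
    for root, _dirs, files in os.walk("extracted_fw"):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as fh:
                    data = fh.read(1 << 20)        # first 1 MiB is plenty
            except OSError:
                continue
            if any(m in data for m in MARKERS):
                hits += 1
                print("private key material:", path)

    print("%d file(s) flagged" % hits)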

/dev/ttyS0 also has a blog. Visiting that today, I find that they have recently been beating up on Belkin and D-Link. That's a bit sad, because in simpler times, I carried products from both of these vendors in my hardware case.

There is no room for sentimentality in this business. But there is room for keeping track of trends, gazing into an always-cloudy crystal ball, trying to extrapolate, and spotting emerging threats. Sometimes that is ridiculously easy; I hereby predict:

a) the Internet of Things will be a source of major security/privacy breaches in 2015 [1]
b) consumers will neither know nor care, in any organized manner
c) businesses will continue to buy 'solutions' that are anything but

In short, things will continue to get worse, at an increasing rate, as they have always done.

[1] I often tell a simplistic story (to non-practitioners) about how I came to be interested in security and privacy, equating the two as a simple scaling matter. Privacy is security on a small scale, and vice versa. That is not actually true; there are technical differences, down to the level of which attacks are possible, let alone which matter. But that is a whole different post.


Thursday, May 7, 2015

Sharing is Complicated

Commentary::Internals
Audience::All
UUID: bd74c00b-02cd-42b4-8d62-514dfab4b217

There are a lot of things I want to share, from images to code. Roadblocks are often unexpected, and can be weird as hell, e.g. file-naming issues with my camera that began at the same time that I modified the copyright information that is stamped into EXIF data. The solution to that probably involves adopting something like the UC Berkeley CalPhotos system (http://calphotos.berkeley.edu/), and writing a bit of code to support a new pipeline. Also known as a workflow, and which term is used is suggestive of many things. But I digress. Most popular articles (and at least some software) related to image storage and retrieval are overly simplistic. Duh. In other exciting news, the Web has been found to be in dire need of an editor.
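
To give a concrete (if trivial) flavor of the pipeline code I have in mind, here is a sketch that renames images to a date-based scheme pulled from EXIF, using Pillow; the 'incoming' directory and the naming format are my own inventions, not part of any existing tool:

    #!/usr/bin/env python3
    # Rename JPEGs to a date-based scheme taken from EXIF, as a first step of
    # a predictable image pipeline. Sketch only: 'incoming' and the name
    # format are assumptions; files without a DateTime tag are skipped.
    import os
    from PIL import Image   # Pillow

    SRC = "incoming"
    DATETIME_TAG = 306      # EXIF/TIFF DateTime, "YYYY:MM:DD HH:MM:SS"

    for name in sorted(os.listdir(SRC)):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        path = os.path.join(SRC, name)
        with Image.open(path) as img:
            stamp = img.getexif().get(DATETIME_TAG)
        if not stamp:
            print("no EXIF DateTime, skipping:", name)
            continue
        new_name = stamp.replace(":", "").replace(" ", "_") + "_" + name
        os.rename(path, os.path.join(SRC, new_name))
        print(name, "->", new_name)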

Sharing documents (specifically including code) is also an issue, and one that is a bit more important to me at the moment.

I don't want to get into the version control Holy Wars. Use git, mercurial, subversion, or even one of the proprietary systems. Whatever. If I had to guess, it would be that how well you can use the tool will in most cases outweigh the power (and idiosyncrasies) of the tool.

That said, this is about GitHub, because this post is about sharing.

GitHub suffers, periodically, from DDoS attacks, which seem to originate from China. I say 'seem to' because attribution is a hard problem, and because US/China cyber-whatever is increasingly politicized; that trend is not going to end any time soon.

Points to Ponder

a) Copying of device contents as border crossings are made. There have been State Department warnings on the US side of the issue, but at least one security actor, justly famous for defeating TLS encryption with the number 3 (that is not a joke, search on Moxie Marlinspike), has been a victim as well. There is some question as to whether my systems could be searched without a warrant, due to my proximity to an international border. Nation-states act in bizarre ways, the concepts of 'truth' and 'transparency' seem to be a mystery to national governments, and I do not regard it as impossible that the US would mount a DDoS on GitHub, if a department of the US government thought it both expedient and deniable.

b) Is China a unitary rational actor? On occasion, acts of the Chinese government seem to indicate a less than firm grasp of what other parts of the government are doing. A culture of corruption is one issue, but there are others, such as seeming amazement at adverse western reactions to an anti-satellite (ASAT) missile test back in 2007. Which was apparently quite the surprise to western governments, and makes me question what all of this NSA domestic surveillance effort really accomplishes. I won't digress into that can of worms, other than to note that there is much evidence suggesting that the US may not be a unitary rational actor, either.

Circling Back to GitHub

The entire point of a distributed version control system, of whatever flavor, is availability. Yet there are trade press stories dating back a couple of years, at least, about widespread outages due to DDoS attacks. The most recent one that I am aware of was in April of this year. In every case, much panic and flapping of hands ensued. Developers couldn't work. Oh noes!

That rather blows the whole point of GitHub out of the water, doesn't it? The attacking distributed system beat up on your distributed system. Welcome to the Internet Age, cyber-whatever, yada yada yada. Somewhat paradoxically, a good defense involves more distribution, and not allowing GitHub to be a single point of failure.

The problem is pipelines. Or, again, workflows. A truly resilient system needs more than something that has demonstrably had availability issues for years, and the problem is twofold:

1) There is no fail-over.
2) The scripts that drive it all tend to be fragile.

It is entirely possible to build a robust system, hosted in the DMZ or in the cloud, as a backup to GitHub. Most of this is just bolting widely available Linux packages together, and doing the behind-the-scenes work. With an added component of writing great doc; the system will only be exercised when things have gone to hell and everyone is stressed. If there were ever a time when great doc were a gift from $DEITY, this would pretty much be it. Murphy is alive and well, so some periodic fail-over test (you do that, right?) probably got skipped for some reason.
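
As one small example of the bolting-together, consider the push step itself. Here is a sketch of a wrapper that mirrors every push to a second remote and fails loudly when either side is unreachable; the remote names 'origin' and 'backup' are assumptions about how the repository is configured:

    #!/usr/bin/env python3
    # Push to the primary remote and to a backup mirror, and say plainly
    # which one failed. Sketch only; assumes two configured git remotes,
    # e.g.  git remote add backup ssh://git@backup.example.org/repo.git
    import subprocess
    import sys

    REMOTES = ("origin", "backup")

    failures = []
    for remote in REMOTES:
        result = subprocess.run(["git", "push", remote, "--all"],
                                capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(remote)
            print("push to %s FAILED:\n%s" % (remote, result.stderr), file=sys.stderr)
        else:
            print("push to %s ok" % remote)

    # Exit non-zero if any remote is unreachable, so the surrounding
    # pipeline notices instead of silently carrying on.
    sys.exit(1 if failures else 0)

Cron or a git alias can drive something like this; the point is simply that the fail-over path gets exercised on every push, not discovered the day GitHub is down.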

At this point I am going to be polite and just mention that the DevOps community might do a bit more work in getting some best-practices information to the public. If GitHub is more important to you than just free hosting (and it may not be, for completely valid reasons), please build an adequate system. It will save you from having to publicly whine about how your distributed system did not turn out to be resilient.