Tuesday, November 17, 2015

If There Were One Feature I Wish Bugzilla Had

Commentary::Performance
Audience::Entry
UUID: ddf3eae9-a84d-4083-987e-a84cf2ec8aec

It would involve track records. Specifically, there is no way to know how many of the bugs you have reported against Open Source software were never assigned and were simply closed due to End of Life (EOL), or what an assignee's track record is: bugs never addressed but simply closed EOL, bugs closed but reopened due to a broken or incomplete fix, no communication whatsoever from the assignee, and so on.

I have biases which you should probably be aware of.

  1. I weight communications issues more heavily than some do, because I am usually thinking in terms of security, and taking a sane approach to disclosure. This is often about how well communications works, and timeliness.
  2. Many bugs can fall into a security context, for reasons that may not be readily apparent. Failure of some Linux update mechanisms in terms of alerting on rpm.new/.save files is a pet peeve, as is maintaining performance.

This is about that second point -- maintaining performance. I have another bias there -- if I didn't do security, I would do perf. It's the next most interesting problem. The high-frequency trading community is running without firewalls. Some turn SELinux off not due to usability concerns, but because characterizing perf impacts is difficult. Some run Remote DMA over converged Ethernet (RoCE) for the performance pop without considering the security implications of bypassing defenses built into the kernel.

There are a lot of interesting behaviors out there, and not all are well-considered. The last thing we should be doing is making it difficult to easily explore the performance impacts of things we might recommend. That's a recipe for losing all credibility, and becoming part of the problem.

I've recently destroyed the lab (again), because 2016 is coming up fast, and I wanted a first cut on what the hardware budget should look like. One of the things I wanted to explore is the overhead of rapidly spawning a lot of processes. Likely the last time I would do such a thing, for a couple of reasons.

  1. Amazon AWS probably makes more financial sense than tearing up the lab, though reproducible research is a concern. But Amazon is only one vendor.
  2. 'a lot of processes' is subjective, and relevance is entirely dependent on your *aaS (Infrastructure, Platform, etc., as a Service), your existing or contemplated local/cloud/hybrid security posture, etc.

Given the large number of possible deployment scenarios in modern infrastructure, it would be really nice if even the basics of performance tools were reliable. Nicer still if one could have some confidence that 'bleeding edge' Linux distributions could give us a preview of coming attractions, as used to be the case. Sadly, this is much less the case than it used to be.

In the case of Fedora and the rest of the Red Hat family, it goes beyond default file systems. I could go on about that, but this is about performance testing. So, well past time that I circled back to https://bugzilla.redhat.com/process_bug.cgi (login required), and spawning processes. That submission, for those who do not have a Red Hat Bugzilla account, reads as follows.

Description of problem: /usr/bin/free -s fails for floating-point and integers.

Version-Release number of selected component (if applicable):
3.3.10-5

How reproducible: Always.

Steps to Reproduce:
1. /usr/bin/free -s 1
free: seconds argument `1' failed

2. /usr/bin/free -c 1
works

3. /usr/bin/free -s 1 -c 1
free: seconds argument `1' failed

4. /usr/bin/free -c 1 -s 1
works

Actual results: As above. Verified that values of c > 1 work, when -c works at all.

Expected results: Functional continuous results from /usr/bin/free, and agreement between man page and program output. From man:
-c, --count count
Display the result count times. Requires the -s option.
-s, --seconds seconds
Continuously display the result delay seconds apart. You may actually specify any floating point number for delay, usleep(3) is used for microsecond resolution delay times.

Currently, it is -s that requires -c. Which is perverse if one wants to use free as a rough and ready means of tracking memory usage as several processes are started, and the time required to do that is unknown. Nor should order of specifying -c and -s matter, which would be a usability bug.

Additional info: Discovered this while attempting to use '-s 0.1', and discovering that even integers did not work.
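For anyone else bitten by this, a rough workaround sketch: sample /proc/meminfo in a shell loop instead of relying on free -s. The interval and count below are arbitrary illustrations, not a substitute for a fixed procps.

```shell
# Workaround for the broken `free -s`: poll /proc/meminfo directly.
# Five samples, 0.1 s apart (both values chosen arbitrarily here).
for i in 1 2 3 4 5; do
    grep MemFree /proc/meminfo
    sleep 0.1
done
```

This loses free's buffers/cache accounting, but it is enough for the rough-and-ready tracking of memory usage while processes spawn that the report describes.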

Tuesday, October 27, 2015

Bad Weather: Must Rant on Scientists

Commentary::Science
Audience::Entry
UUID: 42bea053-2a40-4be6-93d0-dd3e17142907

This is the current NOAA weather forecast. Note the absence of blue sky. People whom I respect have fled to San Diego. Why? I'm guessing that it's because they are much smarter than me.


Somewhere between the last week of October and the first week of November, the weather always goes to hell, from Puget Sound to at least the south end of the Willamette Valley. I don't need NOAA to tell me that, which is A Good Thing, as I have already mentioned that NOAA Can't Predict Weather, Can't Secure Their Systems.

I'm still annoyed with NOAA, not least because when they have clearly blown a forecast, as in whatever they predicted is obviously wrong, their updates just ignore it. High temp of 80° F, but it's already 87° by 1130 and climbing fast? Screw it. Keep predicting 80°. Perhaps people won't notice that they are baking.

That's a really bad example, given the time of year, but the frequency of this sort of thing has led me to log at least some of the more egregious examples into a daily notes file. So, worth a minor rant-by-example.

I tend to follow the weather fairly closely. I'm out in it a fair amount, though I do tend to back off when it's really rotten. So I follow some weather blogs, and read a bit of the research. One of the more popular blogs is http://cliffmass.blogspot.com/. He's a University of Washington scientist, and is often spot-on. When he isn't spot-the-FUBAR-off.

Problems? Yeah, I have some. Appeal to Authority flaws, sometimes in the same post that he denigrates those authorities, inconsistency of message, and more than a little hype about certain topics. Again, sometimes in the same post in which he denigrates hype.

Perhaps my biggest problem is that he is a scientist. I am very far from being some sort of anti-science Luddite; I am a heavy consumer of science. But there's a corner case involved, in his field, which seems to be ignored. Mostly, it's about where the money goes. As taxpayers, how are we to judge whether we are receiving value? Why is NOAA not quantifying errors, so that we might judge when forecasts are most likely to be erroneous?

The other axis in that corner case is this.


http://www.probcast.com/ is about probabilities and ensemble forecasting, from UW. It's an experiment, with a useful 'about' page. But there is still no indication of the circumstances under which it wanders off into left field, and it's run by scientists. Who are often the most security-clueless people imaginable. It's somewhat understandable, because they are all about generating new knowledge, and sharing it widely.

Still, having even half a security clue is useful. There is a reason that Linux directories under /home are private in most modern distros. Despite UNIX being historically rooted in research environments.

Now and then (very occasionally) I report a problem to a Web site owner. Not often, because as often as I find them, it would burn too much time. In this case, I got on the phone. The UW person I spoke with had no idea what I was even talking about. She knew of no such site, who to contact, etc. The issue was fixed in fairly short order (do not remember exact timing) but in terms of appropriate response, it was a miserable failure.

Scientists mostly don't do operations. The concept of domain squatting (excuse me, 'cybersquatting') completely escapes them. I'm fine with that, actually. Different fields of endeavor, and I'd rather scientists were paid to do science.

But I get annoyed as hell when some scientist writes about something that is way outside their area of expertise, expecting to be considered an authority.

Wednesday, October 14, 2015

A Tier 1 Information Source: Ross Anderson

Commentary::Sources
Audience::Intermediate
UUID: 34e6bddc-58a3-47a7-a1e2-7e83981bacc8

On 3/20/14 I published Congratulations to Leslie Lamport, winner of the 2013 Turing Award, as announced to the public a bit later, in CACM volume 57, number 6 (June, 2014). That post is a bit dated now -- I don't host ACM logos now, the post contains a link to an Adobe Flash presentation, etc. But Lamport did so much work as the original developer of LaTeX, and that is very close to the heart of my documentation production pipeline.

Fine. Times change, sometimes for the better. Flash is finally becoming recognized as the security nightmare that it has always been. ACM still seems to have no concept of this (scientists often have absolutely no clue about security), as their webinars still require it. Another reason (there are several) that I no longer belong to the ACM.

In that issue, there is an article by Ross Anderson and Steven Murdoch. In CACM, it's behind the paywall, but it's also public, as EMV: Why Payment Systems Fail. An extract:

Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).

More generally, a summary of Cambridge Computer Laboratory research is available from Anderson, as well as a more general overview from the Security Group. Once upon a time (actually twice upon a time) CLCAM used to emit "Three Paper Thursday". But it was dependent on the availability of grad students, current research priorities, etc. Back in July of 2014, I asked if he might bring it back, and it turned out to be impractical. That sucked, as I always had a block of calendar time devoted to reading it, largely on the strength of having read the first edition of Security Engineering. There is a second edition out now, and it is even better. It's also available as a chapter-by-chapter series of PDFs, for free. My Wiley first edition (2001) cost about what you would expect for a tech book, but that edition is available free as well, so you can compare the two, and get a notion of how Anderson thinks the security landscape has changed. If you are any sort of security worker-bee, you have little excuse for not having read it.

Why post this now? Because UK banks were proven to not be capable of even recognizing the concepts of ethics or morality, as first widely published by Anderson. Does anyone think that the Oct 1 shift in liability to merchants, which will undoubtedly drive EMV adoption in the US, will be any different?

I could go on about this, and the ongoing legal battles between the retail and banking sectors. But the post is becoming very long, as is. I'm going to let it ride, at least for now. Suffice it to say that both sectors are worth trillions of dollars per year. When such enormous sums are involved, neither sector will have your best interests in mind, whether you are subject to PCI-DSS self-regulation, or are simply a consumer shopping on the Internet.

I don't do generic blogroll links, either by request or some weird notion. This is my second. The first was to Brian Krebs, largely because he is the best security blogger I know of, in the consumer space. I hate to go there (the consumer security space, not Krebs on Security), which is why my first (10/2/14) source was titled A Brief Foray Into the Horrible.

How I divide news sources into tiers (currently Vendor, then Tiers 1-3) lies somewhere in the area of proprietary, complex (as opposed to complicated), and just really hard to describe. I may post something about that in future (sorry, 'going forward' is the current corporate bullshit fashion) but no promises.

However, I do promise to talk about their take on mobile security. You can read up on it via the new link to Light Blue Touchpaper, now appearing under "(Some) Blogs I Read".



Tuesday, October 13, 2015

Intellectual Property: A Useless Term

Commentary::Marketing
Audience::Entry
UUID: 3a061855-fa21-4efc-a0cc-494418698118

I mostly hate the term Intellectual Property (IP), because it is mostly useless. Copyright, patent, trademark, etc., law has little in common, anywhere in the world.  Personal and societal impacts of those laws are similarly disparate, as one would expect.

Now and then, something that seems to absolutely fly in the face of common sense (granting that sense does not seem to be common) is particularly grating. Such was the case with Definitive Guide (TM) to Cyber Threat Intelligence.

There are various regulatory regimes (and attempts to self-regulate in order to avoid actual regulation, such as PCI) that require a lot of reading related to possible threats. Fine. Been doing it for years, because I regard it as necessary, and so I applaud that. But it is extremely time-consuming: the doc mentioned above is a 74-page PDF, and is only part of today's reading list. The madness that allows a common phrase to be trademarked is annoying as hell.

Still, one of the contexts in which I'll be reading this (and marking it up for future reference) is DevOps, where there might possibly be some insight to be had from Table 1-1. It's the first table, so I will have to read this entire pile of marketing nonsense, in case there is support for it later in the doc. There might be something relevant to more recent DevOps concepts (and the other marketing nonsense that has grown up around them), as opposed to the older security-related divisions (network, infrastructure, and security operations) within an organization, and their contributions at the 'Tactical, Operational and Strategic' levels. Never mind that the tactical/operational division would seem very artificial.

So, 74 pages that are mostly marketing noise. Some of it is about banks, etc., being clients. I covered that above, writing about regulatory requirements (and avoidance of same).

That DevOps reference? DevOps is also rife with marketing noise. But much of DevOps does promote ideas related to getting beyond some long-standing (and foolish) IT practices, such as throwing code over the wall. Which makes me far more tolerant of that group.

In the unlikely event that you have some twisted urge to read this doc too, it can be had from https://cryptome.org/2015/09/cti-guide.pdf. I'm not supplying a direct link to these people, for two reasons:

  1. Zero desire to provide them any Google-juice.
  2. I expect that some sort of registration would be required, hence you would be bombarded with email marketing for, roughly, forever.

If you never see another post on fubarnorthwest, it may be because reading Chapter 7, "Selecting the Right Cyber Threat Intelligence Partner," which enumerates criteria for evaluating cyber threat intelligence providers, caused immediate brain-death.

Sometimes I Just Have to Rant

That is a failure on my part. I've been working on a set of three posts that would likely have been more helpful, that do not involve Intellectual Property, and would point readers toward things that are more useful, such as why I am about to recommend the Computer Laboratory, University of Cambridge.  Sorry about that. Coming soon.



Wednesday, September 16, 2015

Long Odds: 7,975 to 1 Against

Commentary::Statistics
Audience::Entry
UUID: df96273e-2da9-4d92-9b6d-cbdfdfb9b5c8

What went wrong by noon. So that you can say, "I am doing way better than that guy."

Waking up at 0430

That's 4:30AM to an unfortunate majority of my fellow citizens. Side note: we are one of a very few countries to use this system, and I hate it. It's harder to parse in software, etc. But leaving the side note firmly aside, waking up at 0430 sucks, unless you have some good reason for it. Which I did not. It just happened, so it sucked.

My month-old coffee maker decided to not make coffee

If I am up at 0430, a cup of coffee will usually sound like a truly fine idea. Grrr, and dig out the drip pot I use when I'm camping. It makes very good coffee, though not good enough to wade through warranty hassles. Because nothing is that good.

Discovered that it had not rained

You might think that no rain is a Good Thing, but then you might not be living in western Oregon, in a year that has set records for the number of days above 90° F (I don't mind the Fahrenheit scale, because of its better resolution), and an ongoing drought. There was rain in the forecast, but NOAA, as usual, blew it. I had been hoping to prep the ground for a new herb garden this evening, but that ground remains hard as stone.

The clothes dryer does not dry clothes

Forgot to toss wet laundry in last night, when I was working late (I work at least 90% from home), so I did it this morning. Motor runs, and it heats up. But the drum doesn't spin, unless you spin it by hand, which you should not be able to do. Great. Probably a belt has broken, or somehow come off of a pulley. Deal with it tomorrow, since I'm driving into town anyway, for another coffee maker. Because screw waiting for some random warranty process to complete: I'll donate the replacement to some worthy group.

Retreat to data analysis fails

The work projects were all well ahead of schedule, except for one new project that I didn't know the extent of. Fine. The sample is a series of small (all under 4 MB) flat files, and the usual approach for this sort of thing, under Linux, is to poke them with bash shell tools, before loading them into an IPython notebook, and doing the real analysis with the Pandas data analysis library. I went a bit too far in the bash stage, binning some things out with regular expressions in GNU grep. How many results are in a single-digit bin, versus a 10-19 bin?
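For illustration, here is a minimal sketch of that kind of grep binning. The file contents and leading-number layout are invented for the example, not the actual project data:

```shell
# Bin lines by leading number: single digits vs. 10-19.
printf '3 foo\n12 bar\n7 baz\n15 qux\n' > /tmp/bins.txt
single=$(grep -cE '^[0-9] ' /tmp/bins.txt)   # one digit, then a space
teens=$(grep -cE '^1[0-9] ' /tmp/bins.txt)   # 10-19, then a space
echo "single=$single teens=$teens"           # here the bins happen to be equal
```

In this toy file the two bins come out equal, which is exactly the sort of result that sent me off staring at the regexes.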

It turned out that in the first file I examined, the numbers were equal, which was, to put it mildly, unlikely. Of course, the 'now you have two problems' regex issue came to mind, so I did a bit of staring at code, which revealed no bug. I checked the result three different ways, and found it to be accurate.

Then I did what I should have done to start with: I checked the other files, and found that this was the only file that generated equal numbers. Given the way the day had gone so far, I was biased toward looking for something else that had gone Badly Wrong, however unlikely, and I wasted time.

7,975 to 1 against

Those were the odds (calculated after the fact) against running into that particular scenario. What are the odds of the day, overall, having gone so incalculably wrong? I have no idea. Because incalculable. Duh. We lack crucial numbers on coffee maker (by manufacturer and model number) failure rates...

There's a well-known example of unlikely things happening on a daily basis. If you have a data source, look at how many cars are registered in your state/province/whatever. Then, next time you are doing some random drive, pick a license plate number, and calculate the odds of having seen it.

There are probably very long odds against it. If not, congratulations may be in order: you may have just had an opportunity to discover, in day-to-day life, how difficult randomness (the source of much cryptography) really is. Consider sources of error: picking a plate close to home, at a time when many of your neighbors leave for work, is going to skew your result.
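A back-of-envelope version of that calculation, in Python. Both numbers are assumptions I made up for illustration, and the uniform-randomness assumption is itself one of the error sources just mentioned:

```python
# Odds of having seen one specific plate on a random drive, assuming
# plates are uniformly random (they aren't; see the skew caveats above).
registered = 3_000_000   # assumed plates registered in the state
passed = 400             # assumed plates you actually looked at
p = 1 - (1 - 1 / registered) ** passed
print(f"p = {p:.6f}, roughly 1 in {round(1 / p):,}")
```

With these made-up numbers the odds are thousands to one against, which is the point: any particular plate was wildly unlikely, yet you saw some plate at every glance.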

A double-plus-good would arise from seeing such a failure of randomness, and gaining an insight into the power, for good or ill, of data mining.

Monday, September 14, 2015

Risk Transfer Failure: a Possible Example

Commentary::Risk
Audience::Intermediate
UUID: 2f7d104e-1066-4735-a1a7-c1c0f39882f4

In classic risk analysis, if such a thing can be said to exist, there are four categories of risk response.

1. Mitigate
2. Avoid
3. Transfer
4. Accept

where Transfer means transfer part or all of the risk by e.g. insurance, hedging, outsourcing, or partnering. This category is sometimes split into Transfer and Share, giving us five categories.

There are many things that can go wrong in assigning the probability/consequence numbers that drive which response category is chosen. But let us make the completely unwarranted assumption that those are all solved problems, and apply our imperfect solution to the risk of the loss of medical records. The choice to transfer risk is usually made when that risk is evaluated as low probability, but high consequence.

One assumes that this is what Cottage Health System did when they chose to transfer the risk, via insurance, to Columbia Casualty Company in the form of a "NetProtect360" policy, as they are subject to HIPAA regulation, which amongst other things requires a risk analysis.

Then, in 2013, 32,500 records were stolen, and in a class action, they were sued by their customers for $4.125 million, which Columbia paid. Risk transferred, job done, mission accomplished, right? Well, no.

It turns out that when Columbia paid off on the policy, they reserved all rights, including the right to seek reimbursement. There was an ongoing Department of Justice investigation, which was entirely appropriate, as HIPAA is federal law, and it turns out that Cottage had done some things that were perhaps unwise, to put it mildly, such as placing those records on an Internet-facing anonymous FTP server. Given the current state of network scanning, which is best described as 'constant', this made the actual risk high probability, not low. It also became visible to Google, which made the breach a virtual certainty.

In applying for the policy, Columbia required a "Risk Control Self Assessment", which included such questions as "Do you have a way to detect unauthorized access or attempts to access sensitive information?" and "Do you control and track all changes to your network to ensure it remains secure?", to which Cottage ticked "Yes", and which then became part of the terms of the policy. Access via anonymous FTP is obviously not compatible with either of these.

HIPAA Bites Again

And now we are back to the DoJ investigation. The risk analysis required by HIPAA contains questions such as the following.

  • §164.312(a)(1) Standard Does your practice identify the security settings for each of its information systems and electronic devices that control access?
  • §164.312(a)(2)(i) Required Does your practice have policies and procedures for the assignment of a unique identifier for each authorized user?
  • §164.312(a)(2)(i) Required Does your practice require that each user enter a unique user identifier prior to obtaining access to ePHI?

where ePHI means electronic Protected Health Information. Access via anonymous FTP is obviously a complete non-starter, and Cottage would seem to have violated Code of Federal Regulations Title 45. See 45 CFR 164.312 - Technical safeguards.

We Now Have a Partial Risk Transfer Failure

At the very least, legal fees are mounting. Columbia sought reimbursement in United States District Court, Central District of California. The case was dismissed without prejudice (meaning the issue can reappear) in July, because the policy also allows for resolution by mediation, and both parties have decided to pursue that approach first.

We may never know the degree to which risk transfer failed. Cottage is a not-for-profit, so some information may turn up in financials - a Form 990, or it may fail, and reappear in the courts. But mediation might drag on for quite a while, and I could miss the resolution.

I'm curious because at the moment public information on examples of risk transfer failure is rare. I expect that to change: insurance companies moved into this market some time ago, and they are growing tired of paying out potentially large (Target had $50 million in coverage, which may be insufficient) settlements, particularly where failures on the part of the insured seem provably fundamental and complete.

Takeaways

It seems possible, even likely, that this was a rogue server installation. It happens: I've found an Internet-facing rogue myself, which contained sensitive information on a new product. The cause generally seems to be that there was some sort of perceived business emergency, it was supposed to be temporary, and there was a disconnect between operations and security.

1. Compliance with the terms of an insurance policy must be included in security operations.
2. Whether you have transferred risk or not, pay attention to the fundamentals. Scan your network, and analyze traffic logs.

Also, while it isn't a takeaway from this specific example, be aware of the possible existence of so-called 'Shadow IT' within your organization. The server that bites may exist not in your DMZ, but in the cloud, funded by credit card. Administrative controls fail at least as often as technical controls.

Tuesday, September 8, 2015

Don't Roll Your Own Crypto: Examples

Commentary::Crypto
Audience::Entry
UUID: bb828445-8bfc-4ccf-835b-5fdfa181ffc6

The usual reason given for asking software developers to not roll their own crypto is that anyone can build a cryptosystem that they cannot themselves break. That is perfectly true, but there don't seem to be many concrete examples that might convince the unwary. I'd like to provide a couple.

What most people see are references to Secretary of Defense Donald Rumsfeld's 'there are known knowns' response to a question (much laughter from media pundits), and more scholarly approaches, such as calling it the Dunning–Kruger effect (a cognitive bias) after their widely-quoted Unskilled and Unaware paper.

Lots of people in the security trenches think of it simply as some_random_person who doesn't know that they have no freaking clue. That's a bit unfair, because we all do this, to some degree – that's the nature of a cognitive bias. For example, many people who can actually write would consider my blog posts a prime example. A more appropriate example would be the (several) people I have met who do networking, including quite competent jobs of network architecture, who have never read an RFC, and have only a vague idea of what the Internet Engineering Task Force (IETF) is. In many areas of IT, one can get by perfectly well without deep knowledge. There are fewer such areas in security, and none at all in cryptographic design and implementation.

The IETF are a great example for me to use, because I generally like to make more people aware of their existence, and, in this specific case, because I recommend the IETF Crypto Forum Research Group Discussion Archive - Date Index as the first of those concrete examples. Read a few threads, and you will hopefully realize that this is something best left to specialists – those who have made a career of it.

If that doesn't do it, have a look at Introduction to Modern Cryptography, by Katz and Lindell. You can look inside the second edition on Amazon. The first edition was a game-changer because it widely introduced provable security: the notion that rigorous mathematical proofs of the properties of a cryptosystem were the way forward. Provable security is one of the few means by which cryptography may get past the invent-break-fix stage. Here's a search on Birthday Attack, in the first edition, at Google Books. Katz maintains a page on the book linking to reviews, courses where the book is used as a text, and errata, as well as a page on the second edition.

On my shelves, I have only the first edition of Katz and Lindell, from 2007. It impressed me enormously, and now there is a second edition. So why aren't I talking about that edition? Because I am in no way qualified to roll my own crypto, and can't take years to learn the subject. My opinion would be irrelevant, even if I had one. Again, it's a very specialized thing, and unless you are both extremely clever, and willing to devote the better part of a career to it, you really shouldn't go there.

Also, please be aware that provable security is about algorithms. Implementations of those algorithms are a whole different thing, and we as yet have no complete solution to the implement-break-fix cycle. Crypto code is often complex, not least because it needs to defend against things like timing attacks. That famous line about "all bugs being shallow, given enough eyeballs" certainly does not apply here, where the essential requirement is those very few highly qualified eyeballs. Hopefully auditing well-commented code. Let me remind everyone that Heartbleed was an implementation bug in OpenSSL, one of the most widely used crypto libraries (implementing TLS, and also DTLS), yet it remained undiscovered for two years.
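The timing-attack point can be made concrete with a tiny standard-library example: comparing a secret with == bails out at the first differing byte, so the comparison time leaks information, which is why vetted libraries ship constant-time helpers. (The MAC values here are made up for illustration.)

```python
# Constant-time comparison vs. the naive `==` a roll-your-own
# implementation would likely use.
import hmac

expected = bytes.fromhex("4f2d8a11")  # hypothetical stored MAC
supplied = bytes.fromhex("4f2d8a11")  # hypothetical client-sent MAC

# `==` short-circuits at the first mismatch: a timing side channel.
# compare_digest takes time independent of where the bytes differ.
print(hmac.compare_digest(expected, supplied))  # True
```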

So, for the third time, don't go there. Because there is already enough suffering in the world. Your time will likely be far better spent along the following lines.

  • understanding what you are trying to secure, and from whom
  • the strengths and weaknesses of various crypto suites and algorithms
  • their applicability to the problem you are currently facing
  • their suitability on your intended deployment platform
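None of which means avoiding crypto altogether; it means reaching for vetted building blocks rather than inventing your own. A minimal sketch using only the Python standard library (the iteration count is an arbitrary illustration, not a recommendation for your threat model):

```python
# Salted key derivation from a password, using standard, well-analyzed
# primitives instead of home-grown ones.
import hashlib
import secrets

password = b"correct horse battery staple"
salt = secrets.token_bytes(16)   # CSPRNG, not random.random()
# PBKDF2-HMAC-SHA256: a standard key-derivation function
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(len(key))   # 32-byte derived key
```

Everything here has had those very few highly qualified eyeballs on it, which is exactly what your own construction would lack.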

About this lack of recent posts

Sorry about that. Well, not really. It's been due to a combination of several things. Some professional stuff that I can't talk about, but mostly because Oregon in the summer. I am pretty outdoors-oriented in my off time, and needed to take advantage of the weather, which will be turning famously-Pacific-Northwest bad soon enough.

I expect that now that the long Labor Day weekend is over, the pace will pick up a bit. Standing on the deck sipping coffee at 0530 this morning, with a half moon in the east and Orion and the Pleiades almost at zenith, was great, though I was listening for owls, and didn't hear any. And it wasn't so long ago that sunrise was at 0530, and none of this would have been seen. At the Winter Solstice, when there are less than 9 hours of any light in the sky, and it's likely to be pouring rain during those brief hours, I'll probably be posting a lot more often.

Thursday, August 20, 2015

The Ashley Madison Breach Was Likely a Good Thing

Commentary::Breach
Audience::Entry
UUID: 4cf5d0bf-381b-4bc5-be2f-f31b8fb0d481

Unless you happen to be a victim, anyway. There is a large set of users who will trust their most sensitive secrets to some random Web site. That has been true since the Web was born, and it isn't going to change, save temporarily. This is one of those moments when a lot of people get outed, and the extent to which it damages them is likely proportional to both their perceived importance (celebrities are not intrinsically important people) and their foolishness.

We can expect a stream of celebrity 'exposures' as news agencies comb a large data dump. Meh. Much noise, signifying nothing. Though I do not doubt that security vendor marketing departments of one sort or another are grinding their heads trying to come up with a way to peddle another White Paper.

The threat, minimal though it is, lies in having someone within your organization who is doubly foolish. Enough to trust that random Web site to begin with, and now willing to do silly things, in the hopes that they will not be outed via what is a very public data dump. In other words, victims of extortion, waiting to happen.

The outcome of that could be very bad indeed. But it really isn't very likely, despite what IT press might indicate.

Take The Register, for example. Always a favorite amongst many IT folk, for the headline comedy, if nothing else. They are unabashedly, well, flamboyant. The current offering is 'Ashley Madison hackers leak CEO's emails, source code', with the subhead 'As IBM, Cisco and HP lead the IT pack on adultery website, it seems'. They have a couple of bar charts. The first might be of interest to sociologists, as it's about Attached Male Seeking Female, etc. Though one would hope that even practitioners of the soft sciences would do their own research. The second graph is even better. 'Number of Valid Ashley Madison Accounts Among the Largest Tech Companies'. The winner is IBM, with 311. Oh, noes! Until you realize that this is out of a headcount of 379,592 employees at the end of 2014, according to Bloomberg. Eight hundredths of one percent. I doubt that the IBM risk management team is sweating this, and you likely shouldn't either.

What is more likely, in my opinion, is that unteachable people are the greater threat. Those unfortunate people who fall for every phishing scam, click any link in email from strangers, can't exist without a years-old version of Adobe Flash, etc. Yeah. Those people.

Who, by the way, are not mentally deficient, and if you are one of those people making snide remarks referring to 'lusers' instead of 'users', you are part of the problem. If you are heard making a remark like that as you make your way through the cube-farm, you have just alienated the people you are supposed to be helping. It's unprofessional, and hasn't been funny for twenty-odd years.

A large subset of the workplace population has certain characteristics that may make a security worker's life more difficult.

  1. An inherently trusting nature
  2. Not comprehending the nature of the environment
  3. Exclusive focus on the task at hand

There's nothing you can do about 1, save waiting for life to burn them enough times for them to develop the required amount of cynicism. Which it undoubtedly will. Number 2 is usually teachable, provided you have not blown any chance of building a rapport via 'luser' comments, etc. Number 3 involves a certain amount of irony, in that we often admire people who can get to that level of focus, and refer to it as in the zone, flow state, etc. That's my personal Happy Place, so I'm sympathetic. Their managers will likely value these employees, because duh. They are very productive.

So there you have it. 2 and 3 mean teaching, which is mind-bogglingly more effective if you can establish a rapport. 1? We're back to my title. The Ashley Madison Breach Was Likely a Good Thing. IT touches everything these days, from Wall Street to the local tire company. The chances are probably good that the legions of celebrity gossip fans, whom you probably never hoped to reach, will in some way be influenced by this.

How is that not a win?


Tuesday, July 14, 2015

Some Remarks About the Hacking Team Hack

Commentary::Disclosure
Audience::All
UUID: 90e9fc07-e6ab-454d-8265-48876691db93

I have to say, right up front, that I haven't been tracking this too closely. Things have been too busy (with things that I can't write about) for me to do more than follow a bit of the trade press, and do some very minimal exploration. Plus, it's a bit odd to be doing two posts in a row (7/15/15 update: almost in a row. There is one post between this one and Does the Navy Buy Vulnerabilities Too?) on disclosure. That's a topic that could use a flight of posts, but creating that would require more effort than I am able to supply, given that I regard fubarnorthwest as a sort of twisted alien mutant from the Forbidden Zone hobby, not a business tool. And again, things are busy.

Finding trade press articles is obviously not difficult; it's a huge story. My position is that too much following of trade press is counterproductive. I use various criteria for classifying sources into tiers. A current example would be breathlessly wondering about whether or not a pre-announced TLS bug is "the next Heartbleed." No, it isn't. You can tell, without the bother of reading the story, because it was pre-announced. Too much of that crap gets a source downgraded. More information on how I rate sources is a subject for a future post, but not something with a high priority. If you are curious about it, tell me.

That said, Ars Technica has done solid trade press work on this, with a flight of articles. I'm only going to mention a couple here. But they are all linked in some fashion, so navigation shouldn't be a problem.

Article the First

Hacking Team’s Flash 0-day: Potent enough to infect actual Chrome user
Government-grade attack code, including Windows exploit, now available to anyone.
by Dan Goodin - Jul 10, 2015 2:00pm PDT
http://arstechnica.com/security/2015/07/hacking-teams-flash-0day-potent-enough-to-infect-actual-chrome-user/
I'm going to ignore the "Potent enough to infect actual Chrome user" bit, save to note that browsers are inherently dangerous, and Chrome had unpatched vulnerabilities on the day it was launched, back in August of 2008, because it was built on an older, exploitable version of WebKit. Implicit trust in a Web browser, from any supplier, is a Really Bad Idea.

Have a look at the lead graphic in this article. The one with the caption that says, "A browser-detection script that was part of a Hacking Team Flash zero-day exploit used in an Egyptian campaign."

That is Python, and it is being used to differentiate between Google Chrome and Microsoft Internet Explorer. The thing is, Python is rarely found on Windows systems. The simplest explanation is that Hacking Team shipped a Python runtime for Windows. Bulky and noisy, but perhaps they just loves them some Python. I know I do. But it seems more likely that in a reasoned analysis, they find it advantageous. I tracked down the source code behind the graphic. This site is under heavy load as I write this, but it is available from https://ht.transparencytoolkit.org/gitlab/Windows-Multi-Browser/deliverables/scout_appended/resources/chrome_non_chrome_filter.py.

We can also infer something about their Python development environment -- that it is built around IPython notebooks. Again, no surprise. I use them too. The clue is (again, heavy load warning) 
https://ht.transparencytoolkit.org/gitlab/Windows-Multi-Browser/deliverables/scout_download/Reame.md. This is a Markdown file, and rendering Markdown is one of the basic capabilities of IPython notebooks. Not least because you can dress them up with CSS to create elegant documentation. This tends to confirm (not that this is really necessary) that this was a business that paid a good deal of attention to business processes. Such as producing better doc, faster, and cheaper. As I do, and you should. If even the Bad Guys (and Hacking Team are purely mercenary) are seizing that business advantage, and you aren't, why not?

Article the Second

Firefox blacklists Flash player due to unpatched 0-day vulnerabilities
Also, Facebook calls for Flash end-of-life, so that we can "upgrade the whole ecosystem."
by Sebastian Anthony (UK) - Jul 14, 2015 6:45am PDT
http://arstechnica.com/security/2015/07/firefox-blacklists-flash-player-due-to-unpatched-0-day-vulnerabilities/

I have had my Security Guy hatred on for Adobe products since, well, forever. Was that justified? There are ample reasons for not trusting the past track record of a piece of software as, in any way, an indicator of its future. To a point. The number of vulnerabilities appearing in CVE or other databases, etc., are all very flawed metrics. Papers have been written about it (no, not White Papers, but Real Papers), presentations have been given at security conferences (there are a couple of people I need to contact about this before I say more), etc. And there are some possibly better indicators, such as static code analysis.

Again, we can only make allowances to a certain point. Even if one considers that such software as Flash, running on Windows, is an almost universally installed target, and will receive a disproportionate amount of attention from exploit creators. We may, just possibly, be reaching a point where consumers are just fed up with the constant (FUBAR) state of Adobe Flash, and alternatives to Flash exist. Adobe Flash has had a very human cost in terms of stolen funds, identities, personal information, etc.

In future, I hope to call out those sites that still require Flash, in the hopes that it will just freaking die before more damage is done. I have two browser updates waiting for me on this system. Both are probably about Flash -- Google Chrome is making changes as well. Fine. Of the four Web browsers I use regularly, Chrome is the only one that can run Flash. If I approve it, on a case-by-case basis.

Aaaaaaand Now I Have to Go

Because Oracle (another purveyor of crap software) has just released their quarterly Critical Patch Update. http://www.oracle.com/technetwork/topics/security/alerts-086861.html.

There is at least one more important post in the Ars Technica flight of stories, but I have to defer that. Things just got busier. Thanks, Oracle.

Thursday, July 9, 2015

What and When means better support and software

Recommendation::Documentation
Audience::Entry
UUID: cd1d2cf8-266a-11e5-8834-00224d83fb0a

Finding help for a problem in the Open Source world often involves search engines, filtering out random rants, and all too often finding something that does not work, as it was only appropriate four years ago. Which is why I put publication dates at the top of my infrequent posts.

This is a problem in my older notes files, of which there are many.
$ find $notes -type f | wc -l
1277
It's not uncommon to find something that dates back ten years. It isn't a horrible problem, as modification times are in the files, so it's easy to spot. And no, I am not going to put that whole hierarchy under version control. Currently, it's too much overhead, given the way that I use that hierarchy.

For the past several years, I've dealt with the applicability issue with a shell function in my .bashrc.
daterel() {
    date; cat /etc/redhat-release
}
$ daterel
Thu Jul  9 10:21:16 PDT 2015
Fedora release 21 (Twenty One)
I just paste the output into the doc. Or the bug report, or whatever. I recommend it to clients who are sending me bug reports, and some form of it is pushed to everything in the lab. I say 'some form' because in certain situations it may make more sense to output on a single line, change the date format to seconds since the epoch, or whatever.
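As a sketch of one such variant (the function name, the optional file argument, and the 'unknown' fallback are my own illustration, not the form actually deployed):

```shell
# One-line variant of daterel: epoch seconds plus the release string,
# which is easier to grep out of bug reports and logs. The optional
# argument lets you point it at a different release file.
daterel1() {
    rel_file="${1:-/etc/redhat-release}"
    printf '%s %s\n' "$(date +%s)" "$(cat "$rel_file" 2>/dev/null || echo unknown)"
}
```

On systems without /etc/redhat-release, pass whatever release file the distribution does provide as the first argument.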

Some variant of /etc/*-release is available in most Linux distributions, not just the families related to Red Hat (CentOS, Fedora, etc.). It might take other forms, such as /etc/lsb-release. Or even /etc/issue. And no matter which side of the systemd controversy you might fall on, it has at least provided /etc/os-release. Though that provides 'ANSI_COLOR=' which gets very near to one of my pet hatreds.

It might even take the form of a command, such as 'lsb_release <options>'. The os-release format itself is documented at http://www.freedesktop.org/software/systemd/man/os-release.html. I note that freedesktop.org has done a Bad Thing here, though, in that they have made CPE_NAME= optional. Common Platform Enumeration is a standard worth supporting, if you ever expect to be gluing the output of disparate security information and event management (SIEM) tools together.
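Because os-release is specified as sourceable shell variable assignments, extracting fields (including CPE_NAME, when a distribution bothers to provide it) takes only a subshell. A sketch; the function name and the 'unknown'/'no-cpe' fallbacks are mine:

```shell
# Pull a few fields from an os-release style file. No parser is needed,
# because the format is defined to be sourceable shell assignments; the
# subshell keeps the sourced variables out of your environment.
osinfo() {
    (
        . "${1:-/etc/os-release}"
        printf '%s %s %s\n' "${ID:-unknown}" "${VERSION_ID:-unknown}" "${CPE_NAME:-no-cpe}"
    )
}
```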

It's not really complicated

Really. To the beginner, it might seem that way, but it really isn't. 
  1. Search for files in /etc/, such as /etc/*-release or /etc/issue
  2. Search for commands, via something like 'which lsb_release'
  3. Look at the results
  4. Spend about 10 minutes thinking about it
  5. Drop in a shell alias, or a quick script (in your $PATH, so it Just Works)
  6. Profit! Better docs, better bug reports
  7. There is no 7
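Steps 1 through 3 above can be sketched as a quick discovery function (purely exploratory; adjust to taste):

```shell
# Dump every release file that exists, then any lsb_release on PATH.
# /etc/os-release is caught by the /etc/*-release glob.
relinfo() {
    for f in /etc/*-release /etc/issue; do
        [ -e "$f" ] && printf '== %s ==\n' "$f" && cat "$f"
    done
    if command -v lsb_release >/dev/null 2>&1; then
        lsb_release -a
    fi
    return 0
}
```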

Friday, June 12, 2015

Does the Navy Buy Vulnerabilities Too?

Commentary::Disclosure
Audience::All
UUID: e62e4ab8-ad7e-449d-9a4b-d2f2f2dd459e

This morning, I happened across this link (dead as I write this). It goes to the FedBizOpps.gov site, and was originally for Solicitation Number N0018915T0245, titled 70--Common Vulnerability Exploit Products. I happened to open it in another browser because I was curious about a rendering problem, which can be seen in the text below. I suspected it was due to the common mislabeling of content as charset=iso-8859-1 in HTML files.

By 2015-06-12 0715 PDT it was gone; a reload in that browser landed on a search page.  Back in the original browser, I saved a copy of the original solicitation as usn_exploit_request-1.pdf (229 KiB).

For a very few minutes it could be found by solicitation number from that search page, though the link presented did not do anything when clicked. That result became usn_exploit_request-2.pdf (131 KiB).

Within a very few more minutes it had disappeared from the search, and could no longer be found at all, by solicitation or title, even when the search included both active and archived documents. I included archived documents purely because I thought that even though it was well before the original archive date, perhaps the request had been filled, and the document archived early. That result became usn_exploit_request-3.pdf (183 KiB).

It seems to have been simply deleted. There are many reasons that this might happen. Perhaps too many news sources had discovered it, it was causing an unfavorable reaction, and it was pulled for simple PR reasons. Though one takeaway from this is yet another lesson in not assuming that government archives are complete.

For those who don't want to look at PDFs, here is some of the relevant text, emphasized, with a bit of commentary from me.

This is a requirement to have access to vulnerability intelligence, exploit reports and operational exploit binaries affecting widely used and relied upon commercial software. 

In a bit, they become rather more focused on exploits than on the defensive side of things.

These include but are not limited to Microsoft, Adobe, JAVA, EMC, Novell, IBM, Android, Apple, CISCO IOS, Linksys WRT, and Linux, and all others. 

So, all of the most commonly-used operating systems, including mobile, an interest in storage (and possibly VMWare), and some common networking gear (including a wireless router commonly deployed in homes, small branch offices, etc.). As well as those long-time security horror stories, JAVA [sic] and Adobe.

The vendor shall provide the government with a proposed list of available vulnerabilities, 0-day or N-day (no older than 6 months old). This list should be updated quarterly and include intelligence and exploits affecting widely used software. The government will select from the supplied list and direct development of exploit binaries.

So, either 0-day, or at least not too stale.

Completed products will be delivered to the government via secured electronic means. Over a one year period, a minimum of 10 unique reports with corresponding exploit binaries will be provided periodically (no less than 2 per quarter) and designed to be operationally deployable upon delivery.

This qualifies as high volume.

Based on the Governmentâ€TMs direction, the vendor will develop exploits for future released Common Vulnerabilities and Exposures (CVEâ€TMs). 

An obvious flaw here is that not even remotely all vulnerabilities ever receive a CVE number. Assignment of a CVE number, to the extent that it has any effect at all, would tend to decrease the number of vulnerable systems, shortening the useful life of the vulnerability that the Navy had just purchased. Naval armament apparently includes footguns. Also, here is that rendering flaw.

Binaries must support configurable, custom, and/or government owned/provided payloads and suppress known network signatures from proof of concept code that may be found in the wild. 

Suppress is a poor choice of words. What they are after are exploits that don't present a signature that is already known to suppliers of Network Intrusion Detection Systems (NIDS). I am curious about why host-based antivirus and IDS (HIDS) isn't mentioned.

Innocent? Incompetent? Generic FUBAR?

This could be completely innocent; even an interest in 0-day or low n-day exploits may be an effort to provide their penetration testers with better tools. In the few contests between government employees and the private sector that I am aware of, feds of any stripe were trounced.

So, why was it pulled? Bad PR? Poorly written? Even a mistaken project approval? These are all possibilities, but it seems just as likely that it was a coordination issue. That could take a couple of forms. One is purely financial: duplicate efforts between government departments might well lead to the same exploit being purchased, perhaps from two different vendors. 

The second form involves operations. Suppose that the Navy is unknowingly using a given vulnerability against a target of value x. Meanwhile, some random three-letter agency is using the same vulnerability to collect against a target of value 10x. If the Navy were detected, and a NIDS signature is created, the random three-letter agency could lose access.

Whatever the reason, it is not a sterling example of government competence. Someone needs to go shine their Cyber or something.

Tuesday, May 26, 2015

Anti-tracking May Lower Temperatures, and It May Not Matter

Commentary::Reliability
Audience::All
UUID: c078319b-b156-404f-a48b-1e639dd734b6

Earlier today, in the midst of an ongoing project, I noticed that

  1. The temperature of a single physical CPU was running at 104° F; about 10° hotter than expected.
  2. There were a large number of Firefox tabs open (40-odd), as is typical when abnormally high temps are seen.

My first reaction was my normal knee-jerk: This is totally FUBAR. The extent of Web tracking creeps me the hell out, and long experience with hardware has led to a lot of exposure to the notion that increased temperatures lead to decreased service life.

Knee-jerk reactions seldom lead to any good outcome.

Step one was to take a quick shot at verifying the problem. Since Firefox 35, we have been able to set privacy.trackingprotection.enabled=true in about:config. I had done that the day before (before the problem was noticed), but had not restarted Firefox. This time I bookmarked all pages, restarted Firefox, and reloaded all tabs. Temps returned to normal. Though based on a single datum, I may be able to assign a provisional cause. Go, me! Possible progress! I did some ancillary things, such as noting before and after memory usage (in case the kernel scheduler was part of the problem), etc.

None of that really mattered, though. In the greater scheme of things, it seems likely to be irrelevant. At the very least, a lot more open research is needed.

Temperature

First off, the widely-taught inverse correlation between temperature and lifetime may be entirely bogus over large domains, and seems highly likely to be far more nuanced than is often taught.

Perhaps it matters in, say, applications related to RF power systems, such as radars and electronic countermeasures, but I haven't worked in those fields in years. Though messing up fire-control radars was tons-o-fun. I care a lot more about, to use an overly-generic term, IT.

HPC centers, the hyper-scale service providers, and large enterprises, all care about bills due to power. Supply costs, conversion efficiency, what is devoted to heat dissipation, thermal effects on the longevity of vast fleets of servers, etc.

Google does not provide Open Source code at anything like the rate that they consume it, but they do provide landmark papers, which is at least partial compensation. Failure Trends in a Large Disk Drive Population (2007) was such a paper, and it implied that increased temperatures enhanced longevity.
Temperature Management in Data Centers: Why Some (Might) Like It Hot (SIGMETRICS12, University of Toronto) extended those results to DRAM, set some boundary conditions, etc.

In the same year (2012) No Evidence of Correlation: Field failures and Traditional Reliability Engineering was published, but I have not digested that yet. It's corporate, and I've only recently discovered it. If, like me, you are interested in the intersection of security and traditional reliability engineering (it's the 'A' in the CIA security triad, after all), you might want to read it as well.

Obviously, this is nothing like a comprehensive literature search. But I really doubt that simplistic schemes purporting to draw an obvious inverse correlation have any merit.

Tracking

Without taking extraordinary measures, anyone using the Web is going to be tracked. Usually very effectively, because tracking was baked into the Web, from protocols to economics, from the start.

Unfortunately, this post has gone on for too long. Not in terms of what should be covered, but in what I have time to cover. It's 1915, there are still Things To Do, and it is already going to be a late night.

Some things are going to be left for a possible future post. I tend to want to leave this sort of thing to more consumer-oriented security sites, where 'Don't Run Adobe Flash' might possibly help someone. An obvious problem is that many of the consumer sites do not cover tracking issues, and some of those that do are either biased or intentionally misleading. That sucks, but it isn't as if I am going to write a definitive post, complete with an economic history, this evening.

Wednesday, May 13, 2015

Open Thread: Is There Any Point in a Security Blog?

Commentary::Internals
Audience::All
UUID: 1af6f74e-015a-4cc6-a668-181a083b1850

Earlier today, I published #101, since 2013-03-17. A bit of a milestone, I guess, though I don't pay much attention to that sort of thing; I totally missed #100.

It does bring up a bit of a question, though. Some time ago, I mentioned that I wanted the date of publication right up top, where viewers would immediately see it. Because information gets stale rapidly. Arriving on a blog post from a search engine, reading some lengthy post, and then discovering that it is five years out of date (if you can discover it at all) is FUBAR.

It is even more FUBAR if you consider how many servers may have been incorrectly configured due to dated information, etc. This is one of the several reasons that blogs, particularly security blogs, and most definitely this one, suck. They are little, if at all, better than security trade press.

Here's the thing. At 100 or so posts, I can still maintain a mental image of what I have written in the past. I can go back to previous posts and post an update.

This will not scale. What's more, I have an idea of posting about common themes (things that the security community might do better) that might conceivably have a greater impact. If I were to become successful at that, the specific content of individual posts on a given topic (log analysis comes to mind -- I could go on about that) is going to blur together. Success at one goal seems likely to lead to failure at another.

But I can't really set aside a block of time each month, and delete the old stuff. First off, time is scarce. Second, I will break links from more-recent posts to what has become background information.

A blog seems to not be an appropriate tool. A wiki, or a document series on GitHub, might be more useful. Or perhaps using this blog to announce revisions to either. The thing is, there is a critical mass at which a community forms, feedback is received and acted upon, etc. A rule of thumb seems to be that perhaps one in a thousand blog viewers will comment. This blog gets a few hundred visitors per month, so it seems unlikely that a critical mass will ever be reached.

Perhaps I am wrong about this, and I just needed to announce an open thread. OK. Done and dusted. I have my doubts, but the idea has to be given a chance, if only to give potential community members a voice in describing something that might better fit their needs.


A SOHO Router Security Update

Commentary::Network
Audience::All
UUID: 6cb54b70-6f80-4959-bb8b-c8d20fc07e93

In April, 2014 I published Heartbleed Will Be With Us For a Long Time. One point of that post was the miserable state of SOHO router security. I referenced /dev/ttyS0 Embedded Device Hacking, pointing out that /dev/ttyS0 has been beating up on these devices for years. If you don't feel like reading my original post, the takeaway from that portion of the post is as follows.
Until proven otherwise, you should assume that the security of these devices is miserable. I have private keys for what seems to be 6565 hardware/firmware combinations in which SSL or SSH keys were extracted from the firmware. In that data, 'VPN' appears 534 times.
The database was hosted at Google Code, which Google has announced will be shutting down. I am interested in the rate at which embedded system security is becoming worse (as it demonstrably is) and meant to urge /dev/ttyS0 to migrate, if they hadn't already done so. I wanted the resource to remain available to researchers. Google Code doesn't seem to provide (at least in this case) a link to where migrated code might have gone, but searching GitHub turns up four repositories. Apparently I am not the only person interested in the preservation of this work, and the canonical /dev/ttyS0 repository is still available.

/dev/ttyS0 also has a blog. Visiting that today, I find that they have recently been beating up on Belkin and D-Link. That's a bit sad, because in simpler times, I carried products from both of these vendors in my hardware case.

There is no room for sentimentality in this business. But there is room for keeping track of trends, and gazing into an always-cloudy crystal ball, trying to extrapolate trends, and spot emerging threats. Sometimes that is ridiculously easy; I hereby predict:

a) the Internet of Things will be a source of major security/privacy breaches in 2015 [1]
b) consumers will neither know nor care, in any organized manner
c) businesses will continue to buy 'solutions' that are anything but

In short, things will continue to get worse, at an increasing rate, as they have always done.

[1] I often tell a simplistic story (to non-practitioners) about how I came to be interested in security and privacy, equating the two as a simple scaling matter. Privacy is security on a small scale, and vice versa. That is not actually true; there are technical differences, down to the level of which attacks are possible, let alone which matter. But that is a whole different post.


Thursday, May 7, 2015

Sharing is Complicated

Commentary::Internals
Audience::All
UUID: bd74c00b-02cd-42b4-8d62-514dfab4b217

There are a lot of things I want to share, from images to code. Roadblocks are often unexpected, and can be weird as hell, e.g. file-naming issues with my camera that began at the same time that I modified the copyright information that is stamped into EXIF data. The solution to that probably involves adopting something like the UC Berkeley calphotos system http://calphotos.berkeley.edu/, and writing a bit of code to support a new pipeline. Also known as a workflow, and which term is used is suggestive of many things. But I digress. Most popular articles (and at least some software) related to image storage and retrieval are overly simplistic. Duh. In other exciting news, the Web has been found to be in dire need of an editor.

Sharing documents (specifically including code) is also an issue, and one that is a bit more important to me at the moment.

I don't want to get into the version control Holy Wars. Use git, mercurial, subversion, or even one of the proprietary systems. Whatever. If I had to guess, it would be that how well you can use the tool will in most cases outweigh the power (and idiosyncrasies) of the tool.

That said, this is about GitHub, because this post is about sharing.

Github suffers, periodically, from DDoS attacks, which seem to originate from China. I say 'seem to' because attribution is a hard problem, and because US/China cyber-whatever is increasingly politicized, and this trend is not going to end any time soon.

Points to Ponder

a) Copying of device contents as border crossings are made. There have been State Department warnings on the US side of the issue, but at least one security actor, justly famous for defeating TLS encryption with the number 3 (that is not a joke, search on Moxie Marlinspike), has been a victim as well. There is some question as to whether my systems could be searched without a warrant, due to my proximity to an international border. Nation-states act in bizarre ways, the concepts of 'truth' and 'transparency' seem to be a mystery to national governments, and I do not regard it as impossible that the US would mount a DDoS on GitHub, if a department of the US government thought it both expedient and deniable.

b) Is China a unitary rational actor? On occasion, acts of the Chinese government seem to indicate a less than firm grasp of what other parts of the government are doing. A culture of corruption is one issue, but there are others, such as seeming amazement at adverse western reactions to an anti-satellite (ASAT) missile test back in 2007. Which was apparently quite the surprise to western governments, and makes me question what all of this NSA domestic surveillance effort really accomplishes. I won't digress into that can of worms, other than to note that there is much evidence suggesting that the US may not be a unitary rational actor, either.

Circling Back to GitHub

The entire point of a distributed version control system, of whatever flavor, is availability. Yet there are trade press stories dating back a couple of years, at least, about widespread outages due to DDoS attacks. The most recent one that I am aware of was in April of this year. In every case, much panic and flapping of hands ensued. Developers couldn't work. Oh noes!

That rather blows the whole point of GitHub out of the water, doesn't it? The attacking distributed system beat up on your distributed system. Welcome to the Internet Age, cyber-whatever, yada yada yada. Somewhat paradoxically, a good defense involves more distribution, and not allowing GitHub to be a sole point of failure.  

The problem is pipelines. Or, again, workflows. A truly resilient system needs more than something that has demonstrably had accessibility issues for years, and the problem is two-fold.

1) There is no fail-over.
2) The scripts that drive it all tend to be fragile.

It is entirely possible to build a robust system, hosted in the DMZ or in the cloud, as a backup to GitHub. Most of this is just bolting widely available Linux packages together, and doing the behind-the-scenes work. With an added component of writing great doc; the system will only be exercised when things have gone to hell, and everyone is stressed. If there were ever a time when great doc was a gift from $DEITY, this would pretty much be it. Because Murphy is alive and well, so some periodic fail-over test (you do that, right?) probably got skipped for some reason.

At this point I am going to be polite and just mention that the DevOps community might do a bit more work in getting some best practices information to the public. If GitHub is more important than just free hosting (and it may not be, for completely valid reasons) please build an adequate system. It will save you from having to publicly whine about how your distributed system did not turn out to be resilient.
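One small piece of such a fallback can be sketched with git itself: configure each clone so that a single push updates both GitHub and an internal mirror, so neither host is a sole point of failure. The repository URLs below are placeholders, not real repositories:

```shell
# Sketch: make 'git push origin' update two hosts. The first set-url
# --add --push replaces the implicit push URL, so both must be added
# explicitly. URLs are illustrative placeholders.
cd "$(mktemp -d)"
git init -q demo && cd demo
git remote add origin git@github.com:example/project.git
git remote set-url --add --push origin git@github.com:example/project.git
git remote set-url --add --push origin git@mirror.internal:example/project.git
git remote -v   # one fetch URL, two push URLs
```

This covers only the repository contents, of course; issues, wikis, and the scripts that drive the pipeline still need their own fail-over story.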


Monday, April 20, 2015

Exploring System Data: Use Anything but bash.

Recommendation::Language
Audience::Intermediate
UUID: 4e163e7c-ec63-430e-83e2-605e9df95526

In a gmail conversation related to changes to the Linux kernel, I asked whether anyone still used gnuplot, which was used in the example. Because one of the first things you do when exploring data is to look at the distribution. Duh.

Of course, I am sure that gnuplot is still in constant use. People don't scrap production systems simply because something is more fashionable. Or they shouldn't, anyway. The math is not favorable.

As a side note, I really need to take a decision on how I want to display math on this blog.

I started a project related to data analysis using some old-school techniques, all based around shells. Shells can be a win for answering questions such as, "How is this new application changing my system?" That can be important. I've seen Web application servers deployed before the location and content of log files was known, much less characterized at a level of, "What sensitive information might be written if the log level is DEBUG?"

Shells are fine for that sort of fast initial cut. The problem is people don't want to throw that code away. They keep writing one more grep statement, or whatever. My personal alarms tend to ring at arrays. If the system becomes complex enough that I need arrays, I am going to question the wisdom of doing it in a shell.

  • They aren't POSIX, so you become wedded to one particular shell. Want to use dash instead of bash? Sorry, but you can't.
  • You can't pass arrays to functions, if you need to do something more complex than loop over them. Even for that, you are probably going to use a reference (a nameref, in bash 4.3 and later). Modify them through a plain argument? Sorry, but you can't.
  • You can't take a slice of an array portably. Bash can (${arr[@]:offset:length}), but that is yet another bashism. In POSIX sh? Sorry, but you can't.
  • What stop-gaps exist for dealing with arrays, or even faking them if they aren't available in your shell, tend to use 'eval'. Which adds a whole new layer of potential security issues. Sorry, but you really shouldn't.
Shell arrays don't do anything more complex than map integers to strings. Except in the case of bash associative arrays, which are a newer, shinier, and deeper can of worms. The point is that the most advanced data structures available in shells are not really suitable for building software with any sort of complexity.
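To make the pass-by-reference complaint concrete, here is a bash-only sketch (namerefs need bash 4.3 or later; the explicit heredoc keeps a plain /bin/sh from choking on the syntax):

```shell
# Bash-only demo of the nameref workaround for "passing" an array to a function.
bash <<'EOF'
append_item() {
  local -n ref=$1        # nameref: 'ref' aliases the caller's array by NAME
  ref+=("$2")            # no copy was passed; we mutate the original
}
arr=(one two three)
append_item arr four
echo "${arr[@]}"         # prints: one two three four
echo "${arr[@]:1:2}"     # slicing works in bash, but it is another bashism: two three
EOF
```

None of this survives a move to dash or any other POSIX shell, which is exactly the lock-in problem described above.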

I pretty much won't start down that slippery path any more, and I hope that you won't either. I tend to use Python. If you prefer Ruby, have fun; it is just too slow for me, but it is widely used in the security community, including in the Metasploit framework.

There is value in knowing pretty much any language, especially in the security field, if for no other reason than to know how problems with them can be exploited. That is not an argument for falling into the same trap: eval'ing something, and wedding yourself to sanitization problems, because you pushed the language too far.

2015-04-23 Addendum

The power of the shell is seductive. I still use it all the time. Moments ago, on a Linux machine:

# ls -lh /var/log/secure
-rw-------. 1 root root 8.3M Apr 23 13:24 /var/log/secure
# wc -l /var/log/secure
69397 /var/log/secure
# grep hddtemp /var/log/secure | wc -l
69319
#

But this is not a mechanism for monitoring log growth. I can immediately see log file size, that (non-SELinux) permissions are correct, and that this one is mostly about monitoring a drive temperature. 

The problems will surface when I try to reuse these commands: adding -Z to /usr/bin/ls to show the SELinux context, finding lines that aren't about hddtemp, etc. But in scripts, to start with, you should not parse the output of /usr/bin/ls. stat(1) is your friend (and don't forget to supply, and appropriately quote, a format string).
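For the script case, a minimal sketch (GNU coreutils stat shown; BSD stat spells the option -f, with different format codes, and /etc/passwd is used only because it exists nearly everywhere):

```shell
# Ask stat(1) for exactly the fields you want, via a quoted format string,
# instead of parsing ls output (which breaks on odd filenames and locales).
stat -c 'name=%n bytes=%s mode=%a owner=%U' /etc/passwd
```

The output is a single, predictable line you can split safely, which is the whole point.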

The shell gives you a lot of power to spot-check things. Leave it at that, and save yourself some grief.


Friday, April 17, 2015

If I Leave the ACM, Some of Their e-Mail Becomes SPAM

Really, people. If I tell my rep that I will not be renewing, most renew-now messages stop. This is not the case with the list servers. The ACM Bulletin, TechNews, and whatever else you may be subscribed to will continue on their merry way.

At a certain point, this becomes spam. If, after all, I regarded those lists as being extremely valuable, I would likely never have left ACM in the first place.

Just saying.

There's a lot going on right now, which is how it comes to be that my first post of the month is on the 24th. Over a month since my last post. So I have to regard ACM mail as something that should have vanished at the end of last month. Just random stuff that I have to send unsubscribe messages to.



Friday, March 20, 2015

OpenSSL Is No Reason To Go All Twitter

Recommendation::Crypto
Audience::Intermediate
UUID: b3ae8f36-426c-4b6c-9464-19033c6808e5

Must...resist...the Power of the Force.

I have never been so tempted to post a few very snappish things that really could be effectively done in 140 characters: security drama marketeers who were hoping for another major flaw in OpenSSL yesterday, instead of a DoS attack, etc.

On Twitter, security seems to be all about teh drama, and I am on record that Drama Indicates FAIL.

OTOH, OpenSSL does deserve some comment. It is so widely deployed that it might justifiably be regarded as Critical Infrastructure, though that term is also drama-bait. Cyber-attacks: a) oh noes, run in fear, or b) evaluate them in terms of your threat model, and make rational decisions. I am a big fan of b.

It turns out that there is a very good cheat-sheet for OpenSSL. Ivan Ristić has published a revision of OpenSSL Cookbook. It isn't exactly how I would have done it, but then Ristić has absolutely no need to emulate some random guy who gets a few hundred hits per month. Because Ivan Ristić is a major talent. You have to register to get it in one of several formats, but it is a worthy update. You can also download Apache Security and ModSecurity Handbook after registration.

It does lack a few things, such as an explanation of compiler options, which are pretty much out of scope for a brief overview of the high points. And the openssl speed -evp command-line option will not have any effect on at least some Intel Ivy Bridge CPUs. Though -multi <n>, which tells openssl speed how many parallel processes to fork, very much will. In my tests, it scales in a very linear fashion, as expected. I still have to do plots of cores vs. temperature. Maybe next week.

I note that speed(1), on my system, does not document all command-line options. So, for instance, not knowing about -multi <n> will cost you a verification test.
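A hedged sketch of the comparison (flag availability varies by OpenSSL version; -seconds, -evp, and -multi are documented in openssl-speed(1) on recent releases, and -seconds 1 just keeps the run short):

```shell
# Single process, EVP interface (picks up AES-NI where available):
openssl speed -seconds 1 -evp aes-128-cbc 2>/dev/null | tail -n 1
# Same test across two processes; throughput should scale roughly linearly:
openssl speed -seconds 1 -multi 2 -evp aes-128-cbc 2>/dev/null | tail -n 1
```

Comparing the two summary lines shows the scaling directly, without trusting anyone's plots, including mine.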

TODO: update the OpenSSL Position Paper.


Wednesday, March 18, 2015

(Some) Books That Seem Important To Me

Commentary::Personal
Audience::All
UUID: 796fa48c-2be5-4e09-a181-a3a3c00bc4a0

Picture a stack of books. They are all worth writing about, in contexts that may be surprising. That one on the bottom? SPAM NATION was a recommended purchase in Just Buy Spam Nation. It became a best-seller, not because of my efforts (this site only gets a few hundred hits per month), but because Brian Krebs rocks, in terms of consumer security. Which is why I recommend his site. Pardon me, but I seem unable to force my fingers to type 'Blogroll'. It ranks right up there with 'Blogosphere', in terms of suckage.

I mentioned some of these in Four Books on Order, back on March 9.



Followers (both of you) may note that I am now including an Audience identifier, and a UUID. More on that in a future post.


Tuesday, March 10, 2015

Namespaces Continue to Annoy Me

commentary::namespace

I do not know who this guy is, but I dropped this into a quotes file long ago. Because he obviously had a better handle on the situation long before I did.
There are only two hard things in Computer Science: cache invalidation and naming things.
-- Phil Karlton
There are a couple of other things that I cannot really validate, related to personal names. Such as an ancient note reminding me that a full name can consist of a single ASCII 'a' (doubtless transliterated), which can occur in Indonesia. That note is really old, does not include a source reference, and I am sadly lacking Indonesian friends.

As a personal aside, I have to mention that you might be sad too, if you both knew how wonderful Indonesian cuisine can be, and lacked a source of ethnic Indonesian friends to mooch off of. That is a pretty sad state of affairs, but I digress.

A 2010 post listing 40 potential errors related to just personal names opened my eyes, and not just to the current madness I am contending with. I don't know the guy, but was impressed enough to drop it into the reference system. Falsehoods Programmers Believe About Names is still entirely relevant.

What makes it truly FUBAR is that this doesn't just touch on security fundamentals. It goes to the roots of how authentication and authorization are done. In my experience it is easy to find errors related to this problem, to the extent that it gets a bit boring. So, all you SysAdmins, DBAs, Web developers, etc., please take note.

Also, please do not forget about multi-byte character representations v ASCII. There are a lot of problems with libraries that lead to issues with sanitizing input. The world thanks you in advance.

Knock-on Effects of This Problem, as Related to Policy

  1. It can affect the usefulness of the entire concept of policy. Requiring username standards such as firstname.lastname can become silly, and be easily seen as silly. Breeding contempt for policy is probably not your goal, so please do not do this.
  2.  The effects of item 1 require weird workarounds for the people in the trenches, doing the admin work. Policy flaws have now propagated from users to admins. This is not a win.
  3. The combination of 1 and 2 can build into a situation where it is impossible to audit who has access to what. As different groups will establish different workarounds, recovering from a breach becomes more difficult. That is pretty much the last thing you want.
  4. Even minimal security training for new employees becomes difficult, as you are effectively indoctrinating them in the belief that security policy is something to be circumvented. 


Monday, March 9, 2015

Four Books on Order

Commentary::Personal

Now and then you have to blow a hundred bucks or so on books. A Safari subscription at O'Reilly can save you quite a bit on professional expenses, but at the end of the day, you often have to cough up some additional cash.

Today, the total was four. One does not count: The Hydrogen Sonata, by Iain M. Banks. Pure entertainment.

So what does count? The following three.

  1. Hackers - Steven Levy. I am looking for support for my argument that the crypto wars never ended. The NSA would then be a continuing chapter in that game, as described very well by any Bamford work you would ever care to read. _Hackers_ is on my Safari bookshelf, but that is not the same thing as being able to refer to page numbers in the original edition.
  2. How Learning Works: Seven Research-Based Principles for Smart Teaching - Susan A. Ambrose. Widely acclaimed, and we damned sure need better methods of teaching security. Or any other subject, for that matter.
  3. Capital in the Twenty-First Century - Thomas Piketty. This book has already had enormous press, so I won't write much about it here. I will mention that I regard economics as a highly-politicized proto-science, at best. But without bringing economics, in whatever state, into the mix, neither security practitioners nor researchers can really have much effect. 

Wednesday, March 4, 2015

Timeframes: Immediacy Trumps Traditional Academia

commentary::internals::blog

The time has come to leave the ACM. So those side-bar links will be going away. I am a security practitioner. I don't regard what I do as primarily about software engineering, or computer science. It touches those fields, as well as statistics, visualization, {systems, network, database} administration, compliance, and much else. But this is mostly about bandwidth, and the ACM does not currently represent an optimal use of an always-scarce resource: time. Staying informed, in the security field, is a hard problem. Just as it is in any other technical field; we are not special snowflakes.

The ACM has annoyed me a few times, and I'll mention a bit of that. But I will not use the current "Let me be clear" phrase. I only need some modest amount of skill in written communications to be clear, not the permission of an audience. If you interpret this post as a rant, I will have failed. Failure sucks, but not as much as failing without knowing it. Comments are welcome, not least because I may have totally missed the boat on this, and insight from someone I have never heard of might completely change my view. The Internet is useful for more than cat pictures.

First off, here is one case (there are others) that the ACM makes. These are notable people, and they are all in favor.

Bryan Cantrill, Vice President of Engineering at Joyent, Ben Fried, Chief Information Officer at Google, and Theo Schlossnagle, Chief Executive Officer at OmniTI, discuss motivations and benefits of joining the Association for Computing Machinery (ACM).


A short watch at 2:45.

I am not familiar with OmniTI, but this is an indication that I should probably fix that. Joyent employs Brendan Gregg, whose performance work will likely enable more practical security work than many realize. And of course everyone knows something, pro or con, about Google.

There are other people whom I respect quite a bit, who have written for Communications of the ACM. I will be linking directly to them in future, and I'll write about exactly why in future posts related to commentary::internals::blog.

So why would I not be renewing my ACM membership? Again, it is all about bandwidth. These people are all executives. They have fiduciary responsibilities, hence broader concerns, such as access to well-rounded software developers at going labor rates, media perception, etc. I have only one concern: achieving a security posture commensurate with risk.

Let's take one SIG I belonged to as an example. SIGSAC (Special Interest Group on Security, Audit, and Control). For those of you who might not be familiar with ACM SIGs: perhaps you have heard of SIGGRAPH, the graphics Conference That Got Big. CGI in movies, etc. Huge impact, because Media.

Now, back to security, which has almost no impact, despite all the data loss. Let's look at a couple of papers presented at the fourth edition of the ACM Conference on Data and Application Security and Privacy (CODASPY 2014). These are both interesting papers, in that they might have important near-term implications.

Automated Black-box Detection of Access Control Vulnerabilities in Web Applications
KameleonFuzz: Evolutionary Fuzzing for Black-Box XSS Detection

But unless I missed it, which is always possible, neither paper gives a location where you can simply go get the code, and begin experimenting. That seems a bit out of touch with the times, where fuzzing software is commonly described in other fora, and code is readily available. Much like the IETF does business, running code trumps whatever paper you might care to write, if you care to have an impact on the (rather larger) non-academic world.

That is where the people in the security trenches need to play with the code, and form conclusions as to whether it is immediately useful, or how soon it might be, in terms of stability, performance penalties (nothing is really free-as-in-beer), and budgets.

This is the bit that might be perceived as a ranty bit. Again, it is not intended that way.

I have to mention that ACM ships disks of conference papers. I am sure that they regard that as a benefit of membership, but their disks include autorun files. Given the vast history of Windows system compromise via autorun, this is more than somewhat ironic. Particularly in the case of SIGSAC, where baldly stating why there is no autorun, along with the lengthy list of system compromises powered by autorun, would be educational. No, research and teaching are not the same thing in academia. But this is just silly; the sooner any benefit provided by autorun vanishes, the sooner security practitioners might actually succeed in getting people to never, ever, enable it. Frankly, there are major dumbass points to be awarded on this one, and I do not thank SIGSAC for making my job harder, and charging me for the privilege.

Another item is that some of the benefits might not be all that one would expect.

  • The selection of technical books is much smaller than what is available from the O'Reilly Safari service.
  • The Tech Packs are subject to doubt. I reported extensive basic flaws, such as broken links, in the Security Tech Pack, and those were repaired. However, the content itself was never updated. In particular, there is nothing regarding security economics beyond one very old paper, despite much work done more recently. This is not a membership benefit.