Monday, September 14, 2015

Risk Transfer Failure: a Possible Example

Commentary::Risk
Audience::Intermediate
UUID: 2f7d104e-1066-4735-a1a7-c1c0f39882f4

In classic risk analysis, if such a thing can be said to exist, there are four categories of risk response.

1. Mitigate
2. Avoid
3. Transfer
4. Accept

where Transfer means transferring part or all of the risk via, for example, insurance, hedging, outsourcing, or partnering. This category is sometimes split into Transfer and Share, giving us five categories.

There are many things that can go wrong in assigning the probability/consequence numbers that drive which response category is chosen. But let us make the completely unwarranted assumption that those are all solved problems, and apply our imperfect solution to the risk of the loss of medical records. The choice to transfer risk is usually made when that risk is evaluated as low probability, but high consequence.
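To make that mapping concrete, here is a minimal sketch in Python. The 0-to-1 scales and the 0.5 thresholds are my own illustrative assumptions, not part of any standard; a real analysis would use organization-specific scoring.

# A minimal sketch of mapping probability/consequence scores to a response
# category. Thresholds and scales are illustrative assumptions only.

def risk_response(probability: float, consequence: float) -> str:
    """Map a (probability, consequence) pair, each scored 0.0-1.0,
    to one of the four classic response categories."""
    if probability >= 0.5 and consequence >= 0.5:
        return "Avoid"      # likely and severe: avoid the activity
    if probability >= 0.5:
        return "Mitigate"   # likely but absorbable: reduce the likelihood
    if consequence >= 0.5:
        return "Transfer"   # unlikely but severe: insure, hedge, outsource
    return "Accept"         # unlikely and minor: live with it

# Loss of medical records, as (incorrectly, it turns out) evaluated:
print(risk_response(probability=0.1, consequence=0.9))  # -> "Transfer"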

One assumes that this is what Cottage Healthcare System did when they chose to transfer the risk, via insurance, to Columbia Casualty Company in the form of a "NetProtect360" policy. They are subject to HIPAA regulation, which, amongst other things, requires a risk analysis.

Then, in 2013, 32,500 records were stolen, and Cottage was sued by its customers in a class action for $4.125 million, which Columbia paid. Risk transferred, job done, mission accomplished, right? Well, no.

It turns out that when Columbia paid off on the policy, they reserved all rights, including the right to seek reimbursement. There was an ongoing Department of Justice investigation, which was entirely appropriate, as HIPAA is federal law, and it turns out that Cottage had done some things that were perhaps unwise, to put it mildly, such as placing those records on an Internet-facing anonymous FTP server. Given the current state of network scanning, which is best described as 'constant', this made the actual risk high probability, not low. The records also became visible to Google, which made the breach a virtual certainty.
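For the curious, checking a host for exactly this kind of exposure takes only a few lines. The following Python sketch attempts an anonymous FTP login; the hostname is a placeholder, and of course you should only point it at hosts you are authorized to test.

# A minimal sketch of checking a host for anonymous FTP access.
# "ftp.example.org" is a placeholder, not a real target.
from ftplib import FTP, error_perm

def allows_anonymous_ftp(host: str, timeout: float = 5.0) -> bool:
    """Return True if the host accepts an anonymous FTP login."""
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login()  # no credentials -> anonymous login attempt
            return True
    except (error_perm, OSError):
        return False

if __name__ == "__main__":
    print(allows_anonymous_ftp("ftp.example.org"))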

As part of the application for the policy, Columbia required a "Risk Control Self Assessment", which included such questions as "Do you have a way to detect unauthorized access or attempts to access sensitive information?" and "Do you control and track all changes to your network to ensure it remains secure?". Cottage ticked "Yes" to both, and those answers then became part of the terms of the policy. Access via anonymous FTP is obviously not compatible with either of them.

HIPAA Bites Again

And now we are back to the DoJ investigation. The risk analysis required by HIPAA contains questions such as the following.

  • §164.312(a)(1) (Standard): Does your practice identify the security settings for each of its information systems and electronic devices that control access?
  • §164.312(a)(2)(i) (Required): Does your practice have policies and procedures for the assignment of a unique identifier for each authorized user?
  • §164.312(a)(2)(i) (Required): Does your practice require that each user enter a unique user identifier prior to obtaining access to ePHI?

where ePHI means electronic Protected Health Information. Access via anonymous FTP is obviously a complete non-starter, and Cottage would seem to have violated Code of Federal Regulations Title 45. See 45 CFR 164.312, Technical Safeguards.

We Now Have a Partial Risk Transfer Failure

At the very least, legal fees are mounting. Columbia sought reimbursement in the United States District Court for the Central District of California. The case was dismissed without prejudice (meaning the issue can reappear) in July, because the policy also allows for resolution by mediation, and both parties have decided to pursue that approach first.

We may never know the degree to which risk transfer failed. Cottage is a not-for-profit, so some information may turn up in its financials (a Form 990), or mediation may fail and the dispute may reappear in the courts. But mediation might drag on for quite a while, and I could miss the resolution.

I'm curious because, at the moment, public information on examples of risk transfer failure is rare. I expect that to change: insurance companies moved into this market some time ago, and they are growing tired of paying out potentially large settlements (Target had $50 million in coverage, which may be insufficient), particularly where failures on the part of the insured seem provably fundamental and complete.

Takeaways

It seems possible, even likely, that this was a rogue server installation. It happens: I've found an Internet-facing rogue myself, one which contained sensitive information on a new product. The cause is generally some combination of a perceived business emergency, a setup that was supposed to be temporary, and a disconnect between operations and security.

1. Compliance with the terms of an insurance policy must be included in security operations.
2. Whether you have transferred risk or not, pay attention to the fundamentals: scan your network and analyze traffic logs (see the sketch after this list).
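As a rough illustration of the second point, here is a Python sketch that sweeps a few hosts for high-risk ports and flags anything not on an approved list. The hosts, ports, and approved set are assumptions for illustration; real scanning is better done with dedicated tools, and only on networks you are authorized to scan.

# A minimal sketch of auditing hosts for unexpected high-risk services.
# Hosts, ports, and the APPROVED set are illustrative placeholders.
import socket

HIGH_RISK_PORTS = {21: "ftp", 23: "telnet", 3389: "rdp"}
APPROVED = {("10.0.0.5", 21)}  # known, sanctioned services

def open_ports(host: str, ports, timeout: float = 1.0):
    """Yield the ports on `host` that accept a TCP connection."""
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                yield port
        except OSError:
            continue

def audit(hosts):
    for host in hosts:
        for port in open_ports(host, HIGH_RISK_PORTS):
            if (host, port) not in APPROVED:
                print(f"UNEXPECTED: {host}:{port} ({HIGH_RISK_PORTS[port]})")

if __name__ == "__main__":
    audit(["10.0.0.5", "10.0.0.17"])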

Also, while it isn't a takeaway from this specific example, be aware of the possible existence of so-called 'Shadow IT' within your organization. The server that bites may exist not in your DMZ, but in the cloud, funded by credit card. Administrative controls fail at least as often as technical controls.
