Tuesday, July 25, 2017

Private vs Public Vendor Vulnerability Disclosures

Yesterday I had an interesting Twitversation with Michael Toecker (@mtoecker) about vulnerability disclosures for distributed control systems (DCS), a type of industrial control system frequently used in power generation facilities (and a number of chemical manufacturing facilities). Apparently, one major DCS vendor (Emerson) does not publicly report their DCS vulnerabilities (via ICS-CERT, for example), but relies instead upon private disclosure to system owners.

The conversation started when Michael tweeted that “Ovation has ~51% of the generation DCS market”. I had never heard of Ovation (not terribly unusual), so I looked it up on the ICS-CERT vulnerabilities page and could not find a single listed vulnerability. I asked Michael about that and he replied: “They have their own cert for Ovation Users.” The conversation went on from there and is well worth reading in its own right.

Which brings up the whole issue of vendors fixing, and then communicating about, security vulnerabilities in their software (which is different from the coordinated disclosure debate). I cannot remember discussing this particular issue in any detail before, so now seems like a good time.

Mitigation Decisions


Software vendors have been dealing with fixing program bugs forever. They have developed techniques for identifying problems (including outside ‘help’ from researchers), fixing them and then getting the fixes into the hands of consumers. Some are better at it than others.

For industrial control system owners, fixing software problems (lumping in firmware and even some hardware issues here) is considerably more complicated than fixing a standard desktop software issue. The system needs to be taken off-line for some amount of time, which requires a shutdown of production. The ‘update’ may cause unknown interactions with other control system components that interfere with production. And finally, the owner may not have anyone on staff trained to deal with the above issues. So, the decision to apply a software fix is a cost-benefit analysis that frequently results in an ‘if it ain’t broke, don’t fix it’ response.

For security-related issues, the cost-benefit analysis is even more difficult. The cost side remains the same, but the benefit side is much harder to pin down since it deals with risk analysis: the cost of a potential failure has to be weighted by how likely that failure event is to happen. Where no failure history exists (no attacks here), that probability is really difficult to determine.
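To put some numbers on that, here is a minimal back-of-the-envelope sketch in Python of the expected-loss arithmetic an owner/operator would be doing, formally or informally. Every figure in it is an assumption invented for illustration; in practice the p_exploit line is the hard part, since without any attack history that probability is little more than a guess.

# Back-of-the-envelope patch decision: compare the cost of applying a
# fix against the expected loss from leaving the vulnerability in place.
# All figures below are invented for illustration only.

patch_cost = 50_000        # downtime, testing, contractor time ($)
incident_cost = 2_000_000  # estimated loss from a successful exploit ($)
p_exploit = 0.01           # assumed annual probability of exploitation
years = 5                  # horizon until the next scheduled outage

# Probability of at least one exploit over the planning horizon
p_any = 1 - (1 - p_exploit) ** years
expected_loss = p_any * incident_cost

print(f"Expected loss if unpatched: ${expected_loss:,.0f}")
print("Patch now" if expected_loss > patch_cost else "Defer to next outage")

The arithmetic is trivial; the point is that the entire decision hinges on an input that the owner/operator is usually in no position to estimate.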

That is especially true if there are no in-house cybersecurity experts to help make the decision. This is where the system owner has to rely on the information provided by the vendor (and/or system integrator) describing the vulnerability that is being fixed by the most recent update (patch, new version, etc.). A detailed description of what could go wrong, what an attacker would need to successfully exploit the vulnerability, and other potential mitigation measures that could reduce the risk will greatly assist the asset owner/operator in making a realistic risk analysis.

Vulnerability Reports


In a near-perfect world (there would be no vulnerabilities in a ‘perfect world’), a software engineer from the vendor would call up the control system engineer at the user site and have a detailed discussion of the discovered vulnerability, the fix applied in the latest update, the potential interactions with other systems in use, and the probability that an attacker could/would use that vulnerability against that particular user. That is not going to happen, for a whole host of obvious and not-so-obvious reasons.

In a less perfect world, the conversation would be replaced by a detailed written report from the vendor describing the vulnerability in great detail, how it would affect operations, and how it would interact with all of the other devices and software with which the product could be expected to operate. It would also include a discussion of the threat environment in which the product exists, with a report on the history of known/suspected exploits and the potential for exploits in a variety of customer environments.

Again, not going to happen. Too much time and expertise would be required to develop such reports, and they would end up disclosing too much proprietary information. And, probably more importantly, they would never actually be read by the owner/operator.

In the real world, what happens is that a brief report (one to two pages) is prepared describing the vulnerability, whom it might affect, and the potential consequences of a successful exploit. To make the preparation and subsequent analysis of the report easier, a set of standard descriptions is developed and used in a standardized report format. Not as much information is provided, but what is provided is more accessible and more likely to be used.
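For illustration only, here is a hypothetical sketch in Python of the kind of standardized record such a brief report boils down to. The field names are my own invention, loosely modeled on the sections that appear in a typical ICS-CERT advisory (affected products, a CVSS severity score, attacker prerequisites, consequences, and mitigations); no vendor’s or CERT’s actual schema is being quoted here.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VulnerabilityAdvisory:
    """Hypothetical standardized vulnerability report; the fields are
    illustrative, not any vendor's or CERT's actual format."""
    advisory_id: str              # tracking identifier for the report
    affected_products: List[str]  # product names and version ranges
    summary: str                  # brief description of the flaw
    cvss_v3_score: float          # standardized severity score (0.0-10.0)
    attack_prerequisites: str     # what an attacker needs to exploit it
    consequences: str             # what a successful exploit could do
    mitigations: List[str] = field(default_factory=list)  # patches, workarounds

# A completely made-up example entry
advisory = VulnerabilityAdvisory(
    advisory_id="EXAMPLE-2017-001",
    affected_products=["ExampleDCS v3.1 and prior"],
    summary="Hard-coded credentials in the engineering workstation service.",
    cvss_v3_score=9.8,
    attack_prerequisites="Network access to the control LAN; no authentication.",
    consequences="Remote code execution on the operator workstation.",
    mitigations=["Apply vendor patch 3.1.2", "Restrict control LAN access"],
)

The standard descriptions serve the same purpose as the standard fields here: the reader knows exactly where to look for the information that feeds the risk analysis.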

Vulnerability Communication


Now, not all vendors have the staff necessary for the development, publication, and dissemination of these reports. Instead, they rely on various computer emergency response teams (CERTs) to handle the communications. A vendor engineer will communicate with a CERT engineer to provide the necessary information, and the CERT will write the vulnerability report. Frequently, but certainly not always, the individual who discovered the vulnerability will be involved in providing information to the CERT.

The decision then has to be made as to how the vulnerability report will get into the hands of the owner/operator. Where the vendor/CERT has contact information for all of the owner/operators of the affected equipment, the report can be communicated to them directly. Where the vendor/CERT does not have that contact information, the only way to get the information to the owner/operator is via public communication of the report.

Public disclosure has a couple of problems associated with it. First, it is a public admission by the vendor that a mistake was made in the development of the product; something that the sales department does not generally want to tell potential customers. Second, it substantially increases the number of people who know about the vulnerability, thereby increasing the risk of potential attempts at exploiting it.

Typically, the latter problem is dealt with by the vendor/CERT first distributing the vulnerability report privately to those customers with whom they are in contact (generally larger customers), allowing a reasonable amount of time for those customers to remediate their systems, and then making a public disclosure to the remainder of the customer base.

Oh, and that first problem? Sales is told to suck it up. After all, the other vendors in the marketplace (especially the big ones) are publicly disclosing their vulnerabilities, so it is not really an issue.

Public Disclosure Benefits


So, are there benefits to public disclosure that might suggest it is a good alternative even when customer contact information is available? Actually, there are a number. First, and personally most important, non-customers get a chance to look at the disclosure reports and provide their two cents’ worth in the risk evaluation process. Gadflies, like yours truly, get a chance to provide outside quality control on the disclosure process, helping to ensure that owner/operators have as much information as practical about the vulnerabilities.

Second, outsiders to the communication process have some access to the vulnerability information. This includes folks like investors, corporate management and, yes, regulatory agencies. These are the folks who have a vested interest in ensuring that the proximate decision makers at the owner/operator are making reasonable decisions in their cost-benefit and risk analysis calculations. If they do not know that the vulnerabilities exist, they have no way of asking questions about how those processes are being applied to them.

And, last but not least, researchers in the field get a chance to see what types of vulnerabilities other researchers are finding (and ethically disclosing) and how vendors are dealing with those vulnerabilities. This provides some incentive for ethical (coordinated, or whatever the current term is) disclosure, and it makes for a robust research community with a source of fresh ideas about what types of vulnerabilities to search for.


Needless to say, I am a fan of public disclosure.
