FAIR-CAM is pretty interesting. It cuts to the heart of the matter of IT risk in a lot of ways, and succinctly poses and addresses core questions that brought me to help build the SiRA fount—and drink from it—so long ago.
This first supposition may or may not be true, but it seems true in my experience.
Since the value of any cybersecurity or risk management control boils down to how much it reduces risk, we have to understand which loss event scenarios a control is relevant to, and how significantly the control affects the frequency or magnitude of those scenarios. This is not typically part of the evaluation process when cybersecurity programs are evaluated using common control or maturity frameworks, which means the value of each control isn’t determined.
“This is not typically part…” indeed. I would take this further to say that the common frameworks in use don’t have a way to include this information, even if you wanted to.
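To make the risk-reduction framing concrete, here is a minimal sketch of how a control's value could be expressed as the expected annual loss it removes across the scenarios it touches (FAIR-style frequency × magnitude). Every scenario, control, and number below is invented for illustration; nothing here comes from the FAIR-CAM paper itself.

```python
# Hypothetical loss event scenarios: name -> (events per year, avg loss per event in $)
scenarios = {
    "ransomware": (0.3, 500_000),
    "phishing-led fraud": (2.0, 40_000),
    "insider data theft": (0.1, 250_000),
}

# Hypothetical effect of one control (say, MFA) per scenario:
# fractional reduction in event frequency. 0.0 means "not relevant".
mfa_frequency_reduction = {
    "ransomware": 0.2,
    "phishing-led fraud": 0.6,
    "insider data theft": 0.0,
}

def annual_loss_exposure(freq, magnitude):
    """Expected annual loss for one scenario."""
    return freq * magnitude

def control_value(scenarios, reductions):
    """Expected annual loss avoided by applying the control across all scenarios."""
    value = 0.0
    for name, (freq, mag) in scenarios.items():
        baseline = annual_loss_exposure(freq, mag)
        residual = annual_loss_exposure(freq * (1 - reductions.get(name, 0.0)), mag)
        value += baseline - residual
    return value

print(f"MFA value: ${control_value(scenarios, mfa_frequency_reduction):,.0f}/yr")
```

The point of the sketch is exactly the gap the quote identifies: a maturity score for "MFA deployed" carries none of this information, so two controls with identical scores can have wildly different dollar-denominated value.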
This second supposition wouldn’t receive as much argument in most circles.
Without knowing the risk-reduction value of its controls, an organization may inadvertently invest heavily in one or more controls that aren’t particularly relevant to, or effective against, the risks it faces. When this is the case, the organization would have high scores for those less-relevant, less-valuable controls. For the same reason, the organization may under-invest in more important controls, which would result in lower scores for those controls. Organizations also sometimes invest more-or-less equally in as many controls as possible, which invariably results in under-investment in some controls and over-investment in others.
This leads to a question: is it more cost-effective to take this approach, or are we better off assessing the value of controls (or at least the ones we believe to be most valuable) to prove the ROI?
No arguments on this one either, though I question the example:
All controls have relationships with, and dependencies upon, other controls, which is not accounted for in common control frameworks. As a result, weaknesses in some controls can diminish the efficacy of other controls. For example, the efficacy of an organization’s patching process is highly dependent upon the efficacy of the organization’s vulnerability identification capabilities, as well as its threat intelligence capabilities, and its risk analysis capabilities. If one or more of those capabilities is deficient, then the efficacy of patching will also be affected.
I agree with the general concept, but I can easily envision an IT department that patches everything ASAP. In that case, vulnerability identification and threat intelligence don't matter much beyond getting notified of the patch release. So these reduced capabilities may or may not affect the efficacy of patching, but they certainly impact its efficiency.
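The distinction can be sketched with a crude toy model: suppose a control's realized efficacy degrades toward the weakest capability it depends on, unless the operating model removes the dependency entirely (as "patch everything ASAP" does for vulnerability selection). The degradation rule and all numbers below are my own invention, not FAIR-CAM's math.

```python
def realized_efficacy(own_efficacy, dependency_efficacies):
    """Toy rule: realized efficacy is capped by the weakest dependency.

    With no dependencies, the control's own efficacy stands on its own.
    """
    if not dependency_efficacies:
        return own_efficacy
    return own_efficacy * min(dependency_efficacies)

# Risk-based patching: efficacy depends on vuln identification (0.7),
# threat intelligence (0.5), and risk analysis (0.6) -- all hypothetical.
risk_based = realized_efficacy(0.9, [0.7, 0.5, 0.6])

# "Patch everything ASAP": no selection dependencies, so efficacy holds.
# The weak upstream capabilities cost efficiency (wasted effort), not efficacy.
patch_everything = realized_efficacy(0.9, [])
```

Under this toy rule the risk-based program lands at 0.45 while the patch-everything program keeps its full 0.9, which is the shape of the objection above: the dependency affects efficacy only when the control actually relies on it.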