Cybersecurity Architecture

Building a risk-based cybersecurity architecture

I have recently joined the Cybrary Mentorship Program. I have really enjoyed my time thus far, providing students across the globe with a few insights from my time in the field working as a CISO, architect, designer, and systems engineer.

I’m helping the guys at Cybrary with their CISO-based syllabus and recently participated in a recorded discussion for Ed Amoroso’s ’12 Competencies of the Effective CISO’ series.

Many of you have asked me about my book, Cyber Risk Management: its motivations, its content, and some of my thoughts on the use of contemporary technologies for cyber defence. Well, here goes!

The risk equation hasn’t changed

Let’s start at the top to demystify some of the confusion in our industry. Newsflash: Likelihood × Impact = Risk. For all the zero-day vulnerabilities, nation-state actors, and ‘next-gen’ controls, the mechanisms for assessing a company’s cyber exposure have remained fairly static over the past 20 years. Risk is determined by qualifying how likely it is that business disruption will occur, along with the impact that said disruption would bring.
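To make the arithmetic concrete, here is a minimal Python sketch of a simple ordinal-scale risk register. The 1-to-5 scales and the two scenarios are my own illustrative assumptions, not part of any particular framework:

```python
# A minimal sketch of the classic risk equation: Risk = Likelihood x Impact.
# The 1-5 ordinal scales and the scenarios are illustrative assumptions.

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) ordinal scale."""
    return likelihood * impact

scenarios = {
    "Customer data breach via unpatched web server": (4, 5),
    "Laptop theft with full-disk encryption in place": (3, 1),
}

# Rank scenarios by descending risk score.
for name, (likelihood, impact) in sorted(
    scenarios.items(), key=lambda item: -risk_score(*item[1])
):
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

Crude as it is, ranking scenarios this way is usually enough to start a prioritisation conversation with stakeholders.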

In the book, I cover several cyber risk management frameworks which decompose the aforementioned macro risk equation into something more practically applicable to a domain (namely, cyber) where technology changes as regularly as the English weather.

Frameworks provide risk practitioners with a guide, a set of building blocks to approach risk management and ensure that the salient requirements for qualifying a company’s exposure are considered. Frameworks and methodologies do not absolve a company of responsibility in qualifying (and perhaps latterly accepting) risk, but they provide a blueprint: a map of what to consider and how.

Piecing things together – practical and pragmatic

I’ve spent 18 years getting by with the following premise for information and cybersecurity protection: threat actors will initiate nefarious or accidental events which exploit vulnerabilities. These vulnerabilities may exist inherently within the technology ecosystems in which we operate, although it is also prudent to consider the unintentional actions of trusted actors (employees, suppliers, consultants).

CISOs continually tell me that it’s easier to identify the malicious activity of an untrusted entity – these actors are without privilege in your environment and all their activity should be considered suspicious. With an employee or third party, access is necessary to perform vital business functions – it’s harder to know what ‘bad’ looks like!

I often hear that vendors need to get better at removing vulnerabilities from their software and hardware. The rationale is simple: if software were released devoid of vulnerabilities, attackers would have a fairly torrid time finding a viable route to ‘action on their objectives’ – a colloquialism from the book to describe the denouement of malicious activity.

Generally speaking, cybercriminals are interested in stealing data, affecting the availability of a service or, more recently, causing physical damage through cyber-initiated means. Unfortunately, pressures surrounding first-mover advantage mean that no software is ‘vulnerability proof’. A modern operating system, for example, can run to tens of millions of lines of code. It’s utterly impractical to suggest that every ‘bug’ could be found during a quality assurance exercise.

It’s about people, process and technology

How many times have you attended an industry conference and heard the truism that ‘cybersecurity is about people, process and technology’? Well, it’s an essential consideration in the application of cybersecurity controls, not to mention in qualifying the significance of vulnerabilities. Controls must act as measures that materially reduce a company’s exposure to cyber risk.

Please pause for a moment and consider why the modern enterprise requires hardened workstation builds, endpoint detection and response (EDR), firewalls and proxy gateways. If asked by your c-suite why they need to approve millions of dollars of cybersecurity spend, could you credibly articulate why a Data Loss Prevention system was a high priority? Cybersecurity controls are expensive and must be delivered to avoid, mitigate or transfer cyber risk. I recommend that CISOs work ‘left to right’ on their qualification of tooling.

On the left-hand side we have business objectives; work with your business stakeholders to define what is important to them. Generally speaking, if it’s an initiative which is covered in a company’s annual business strategy or quarterly earnings call, there’s a solid chance it’s a strategic business objective.

From here, think about all of the risks associated with said objectives – anything that could materially impact a company achieving what it set out to do. If, for example, a company considers ‘improving customer retention and market position as a brand of quality and integrity’ a strategic objective, a very real risk would be the widespread loss of customer data through a data breach.

Once we have the traceability of objectives to risks, we can start to think about the controls necessary to reduce or remove the impact or likelihood of such an event occurring.
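As a sketch of what that traceability might look like in a machine-readable form (the objective, risks and controls below are hypothetical examples, not a recommended control set):

```python
# A sketch of 'left to right' traceability: objective -> risks -> controls.
# All entries are hypothetical examples.

traceability = {
    "Improve customer retention and market position": {
        "Widespread loss of customer data through a breach": [
            "Data Loss Prevention (DLP)",
            "Encryption of data at rest",
        ],
        "Prolonged outage of the customer portal": [
            "DDoS mitigation service",
            "Tested disaster-recovery runbooks",
        ],
    },
}

# Every control should trace back to a risk, and every risk to an objective;
# a control with no path back to an objective is a candidate for scrutiny.
for objective, risks in traceability.items():
    for risk, controls in risks.items():
        for control in controls:
            print(f"{control} -> mitigates '{risk}' -> supports '{objective}'")
```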

My day job affords me the privilege of attending industry conferences the world over. I often find myself walking the vendor halls, and at times it feels like we’re listening in stereo: vendors from all corners latch onto the latest technology buzzwords, with Machine Learning, EDR and User Behaviour Analytics all seemingly touted as a security panacea – a mechanism for absolute security.

The value these technologies bring is undeniable but must be contextualised. For example, machine learning can help address the ‘cyber skills shortage’ which we’re frequently told is getting worse. Let us remember the intrinsic association between technology and people – although automated methods of identifying suspicious files or provisioning firewall rules will remove resource burden at the first line, there will (for now) always be a need for a human element (the conductor of the orchestra) to triage and sanity check.

User Behaviour Analytics is a profoundly important tool for pinpointing insider threat activity, but companies must employ robust playbooks to initiate a response when something suspicious occurs. Technology in isolation, automated or otherwise, is not going to solve the cybersecurity problem.

Controls fail

‘Layered cyber defence’ is another industry phrase which garners attention. What does it mean, and why is it important? Designing a layered cybersecurity architecture is prudent for myriad reasons. First and foremost, controls fail.

Your anti-virus solution may unintentionally categorise a malicious file as benign; a web proxy might have a malicious C2 server omitted from its IP/URL blacklist. This, for me, is an issue with control efficacy. We must also factor in the operational failure of systems. Disks and network cards die, and systems sometimes reboot due to system upgrades and software dependency problems.
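One way to see why layering matters is a back-of-the-envelope model: if each control independently misses some fraction of attacks, the chance that every layer misses falls off multiplicatively. The miss rates below are invented for illustration, and real controls are rarely fully independent, so treat this as intuition rather than measurement:

```python
# Back-of-the-envelope model of layered defence. Miss rates are invented,
# and real-world controls are rarely statistically independent.

layer_miss_rates = {
    "anti-virus": 0.10,           # misclassifies 10% of malicious files
    "web proxy blacklist": 0.20,  # misses 20% of C2 destinations
    "EDR behavioural rules": 0.15,
}

p_all_miss = 1.0
for layer, miss_rate in layer_miss_rates.items():
    p_all_miss *= miss_rate

# 0.10 * 0.20 * 0.15 = 0.003, i.e. 0.3% of attacks evade all three layers.
print(f"Chance every layer misses: {p_all_miss:.3%}")
```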

Companies require coverage of controls to prevent, detect and respond to a cyber-attack. I believe that our industry spent approximately 15 years with prevention as the primary recommended control category. Anti-virus and firewall technologies aimed to identify malicious payloads through the concept of reputation: essentially, knowing of the existence of a malicious file before an attack.

Unfortunately, the techniques of cybercriminals evolved, and methods were developed to evade these signature-based defences. Prevention began to be seen as futile. While most subscribe to the adage that an ‘ounce of prevention is worth a pound of cure’, many felt that prevention was impossible. My view is that prevention simply needs to evolve.

Asserting that prevention is futile feels like throwing the baby out with the bathwater. Prevention isn’t impossible; it is simply that prevention covers a lot more than signature-based anti-virus. It should encompass strong user awareness, patching, robust system configuration, and endpoint baselining.

More recently, we have seen detection and response take centre-stage: if you cannot stop something (prevention), you sure as hell want to detect its presence as soon as possible and minimise business disruption through containment and eradication. An eminently sensible assertion, but another which is shrouded in confusion across the industry.

Many seem to obsess over detection (in the detection and response concert); an approach which is understandable, given that security teams want to know what is happening across their environment in an expedient fashion. For me, the real power of a solution comes from being able to respond. EDR (Endpoint Detection & Response) tooling is at its most effective when it doesn’t just highlight that something is untoward, but when it provides a mechanism for an expedient, reliable response – that might be quarantining an endpoint, applying a registry key change, or updating software.
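As a sketch of that detection-to-response wiring (the detection names, actions and hostnames are all hypothetical; real EDR products expose their own APIs for these steps):

```python
# A sketch of mapping detections to automated response actions, in the
# spirit of EDR tooling. Detection names, actions and hostnames are all
# hypothetical; real products expose their own APIs for these steps.

from typing import Callable

def quarantine_endpoint(host: str) -> None:
    print(f"[response] isolating {host} from the network")

def revert_registry_change(host: str) -> None:
    print(f"[response] reverting suspicious registry change on {host}")

def schedule_patch(host: str) -> None:
    print(f"[response] scheduling emergency patch for {host}")

PLAYBOOK: dict[str, Callable[[str], None]] = {
    "ransomware-behaviour": quarantine_endpoint,
    "persistence-via-registry": revert_registry_change,
    "exploit-of-known-cve": schedule_patch,
}

def respond(detection: str, host: str) -> None:
    action = PLAYBOOK.get(detection)
    if action is None:
        # No automated step defined: escalate to a human analyst.
        print(f"[triage] no playbook entry for '{detection}'; escalating")
        return
    action(host)

respond("ransomware-behaviour", "WKS-0042")
respond("unknown-anomaly", "WKS-0042")
```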

Essentially, the controls for prevention are often very similar to those for response, you’re just applying them at a different phase of the cybersecurity lifecycle!

Control Industrialisation

Companies also need to consider how a tool will operate in their environment. The term ‘best of breed’ is espoused by many in our industry, almost suggesting that tools can be qualified for suitability in an abstract fashion. A particular technical control may have the highest number of bells, whistles or widgets, but if you don’t have the people trained to operate it, its efficacy is significantly reduced.

It is important to think about cybersecurity tools outside of the project that deploys them. Too often, project teams squirrel away, basing decision criteria on the number of features a tool may have, without appreciating how those features will materially reduce risk or improve operational efficiency in their environment.

Consider how your various tools will work together to deliver value – most companies I work with strive for ‘platform architecture’: the deployment of a small number of strategic offerings which offer multiple capabilities from a single control point, producing normalised output which can be consumed by various stakeholders inside and outside of the cybersecurity function.
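A small sketch of the normalised-output idea: events from different tools mapped onto one shared shape. The field names and values are my own illustrative assumptions, not a standard schema:

```python
# A sketch of 'normalised output': events from different tools mapped onto
# one shared shape. Field names and values are illustrative, not a standard.

from dataclasses import dataclass

@dataclass
class SecurityEvent:
    source_tool: str  # e.g. "edr", "proxy", "dlp"
    asset: str
    category: str     # e.g. "malware", "c2-communication"
    severity: int     # 1 (informational) to 5 (critical)

def normalise_proxy_log(raw: dict) -> SecurityEvent:
    """Map one hypothetical proxy log entry onto the shared schema."""
    return SecurityEvent(
        source_tool="proxy",
        asset=raw["client_host"],
        category="c2-communication" if raw["blocked"] else "web-access",
        severity=4 if raw["blocked"] else 1,
    )

print(normalise_proxy_log({"client_host": "WKS-0042", "blocked": True}))
```

The payoff is that downstream consumers, human or automated, only ever need to understand one event shape, whichever tool raised the alarm.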

A final (very important) point focuses on environmental visibility. It is absolutely imperative that any cybersecurity tooling is deployed to all systems which store, process and/or are adjacent to sensitive data assets. A best-of-breed tool deployed to 65% of the estate is not materially lowering risk and, in my experience, produces a false sense of (cyber) security. Work on the basics before moving on to esoteric use cases. I start most engagements with CISOs by asking the following:

  • How many assets do you have?
  • Where do your assets reside (on-premise, off-premise)?
  • What software is running on your endpoints?
  • Which users have access to systems?
  • Can you provide a prioritised list of assets?

Simple questions, but rarely are the answers as easily obtained.
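To illustrate the visibility point with numbers, here is a trivial coverage calculation; the inventories are made up, and in practice they would come from a CMDB or discovery tooling:

```python
# A trivial sketch of control-coverage measurement. The inventories are
# hypothetical; in practice they come from a CMDB or discovery tooling.

known_assets = {f"srv-{i:03}" for i in range(200)}    # what the CMDB says exists
agent_deployed = {f"srv-{i:03}" for i in range(130)}  # where the EDR agent runs

covered = known_assets & agent_deployed
uncovered = known_assets - agent_deployed

print(f"Coverage: {len(covered) / len(known_assets):.0%}")  # 65%
print(f"Unprotected assets: {len(uncovered)}")              # 70 blind spots
```

Note that the coverage figure is only as trustworthy as the asset inventory it is measured against, which is exactly why the questions above come first.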

In closing, start to think of the cybersecurity risk equation in all dealings with technology in your company. A threat actor without a vulnerability to exploit is severely hampered in their ability to cause business impact. A plethora of vulnerabilities with no foreseeable method of exploitation (application of compensating controls, lack of exploit code, etc.) is less likely to be seen as a priority remediation activity.

Apply pragmatism and consistency to your cybersecurity program.
