Security Requirements and Architecture
Understanding security considerations in the solution lifecycle
Any changes to an ICT system can introduce risks that leave businesses open to cyber-attack. Luckily there are tried and tested ways that organisations can manage these risks, which are considered in this article, starting with security requirements analysis and moving on to security architecture.
Information security plays a vital role in systems and software development. Whether the business is deploying a new application or upgrading its infrastructure, security risks can be introduced that leave the organisation exposed to cyber threats. Furthermore, as organisations adopt new ways of working, shifting to consumption-based computing and as-a-service models, these models can introduce risks that have not necessarily been properly managed.
Development projects should begin with an analysis stage, where the solution requirements are documented for the development team to build a solution against. This requirement set usually includes user requirements, technical (system) requirements, and non-functional security requirements.
Whoever is accountable for information security should ensure security requirements are incorporated into the development programme. It’s also important that assurance requirements are captured and agreed with all business stakeholders, so that functional and/or user-facing security controls are accepted by the users when they go into production. For example, suppose the customer database solution is being updated and all new systems must include two-factor authentication to help prevent privilege escalation. It’s the system owner’s responsibility to know that users will be required to carry a token or install a one-time password (OTP) authenticator app on their phones, and to ensure they are prepared for any change to their ways of working.
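To make the OTP requirement concrete: the codes produced by phone authenticator apps typically follow RFC 6238 (time-based OTP), which derives a short numeric code from a shared secret and the current time. A minimal sketch using only the Python standard library (the secret and parameters below are illustrative, not from any specific product):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.digest(key, struct.pack(">Q", counter), "sha1")
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify_totp(secret_b32, candidate):
    """Constant-time comparison against the current code."""
    return hmac.compare_digest(totp(secret_b32), candidate)
```

The server and the authenticator app share only the secret; because both derive the code from the current time window, no code ever travels between them in advance.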
Well-considered security requirements significantly reduce the amount of rework needed later in the project. It’s always more difficult to add security at the end of a project than it is to build it in from the start. Yet the process of eliciting security requirements at the beginning of a project is often missed, invariably inflating the cost of remediating security issues later or seeing vulnerable systems pushed into production.
Use Case Modelling
Security teams sometimes use formal methods of requirements elicitation, known as Use Case models, based on the Unified Modeling Language (UML). A Use Case depicts a desired user behaviour or process flow within a system, expressing how a specific element of the system behaves under certain conditions. Individual tasks are usually documented as individual functions – in the real world this might be a task such as opening a door or putting a pie in the oven. Developers and solution designers can build systems that deliver on the requirements of all of these use cases, which in turn work together to deliver the overall system.
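Use cases are normally drawn as UML diagrams, but the same information can be captured in a structured record that developers and testers can work from. A minimal sketch, using the everyday door example from above (the field names are illustrative, not part of the UML standard):

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    actor: str
    preconditions: list = field(default_factory=list)
    main_flow: list = field(default_factory=list)
    security_requirements: list = field(default_factory=list)

# The "opening a door" task expressed as a use case record.
open_door = UseCase(
    name="Open secure door",
    actor="Employee",
    preconditions=["Employee holds a valid access badge"],
    main_flow=["Present badge to reader",
               "System validates badge against directory",
               "Door unlocks for five seconds"],
    security_requirements=["All badge events are written to the audit log",
                           "Three failed attempts trigger an alert"],
)
```

Attaching security requirements to each use case, rather than keeping them in a separate document, makes them visible to the developer building that function.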
The release process for getting solutions into production typically involves rigorous security testing against the requirements. Even the introduction of a well-respected commercial product, such as Microsoft Office, requires some degree of security acceptance testing prior to release, so any operational risks can be managed. Even configuration requirements should be checked, where the security team may recommend disabling a feature that introduces cyber risk. To illustrate this, look at ASD’s Essential Eight; specifically, the control called User Application Hardening. ASD recommends web browsers are configured to block Adobe Flash, ads and Java, since these all pose a threat. Disabling unnecessary features in Microsoft Office, such as OLE (Object Linking and Embedding) and macros, helps shore up your defences and removes some of the ‘attack surface’. There are almost certainly security considerations in even the simplest of end-user applications, so it’s important that you don’t miss this step when considering changes to your systems.
Security plays a major role in the solution development lifecycle. It is important that you maintain a good working relationship with the architects and developers so that input and guidance on security is accepted. The development team needs to value security requirements alongside other kinds of requirements, not see them as a blocker. Furthermore, good security requirements ensure the system can be properly tested for weaknesses. Security testing plans need to be carefully constructed so that all aspects of security assurance are covered in the release process.
Most organisations use a change management process as the gatekeeper for new things entering production. Even the simplest of system changes should start with a requestor proposing the change, followed by authorised approvals prior to roll-out. The impact, along with the benefits, risks, costs, and backout plan, is all considered during the change management process, and the accountable lead for security should remain a stakeholder throughout. Any security risks introduced by the change should be presented to the security team, along with mitigation plans and a testing strategy, to incorporate into the release plan. If some risks are to be accepted, they need to be recorded in a risk register and monitored in case they worsen over time.
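The gate described above can be expressed as a simple record with a release check. A hedged sketch (the field names and the readiness rule are illustrative, not a formal ITIL model):

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    requestor: str
    description: str
    risks: list = field(default_factory=list)        # security risks introduced
    mitigations: list = field(default_factory=list)  # one planned per risk
    backout_plan: str = ""
    approvals: list = field(default_factory=list)

    def ready_for_release(self) -> bool:
        """Require a backout plan, at least one approval, and a mitigation
        (or explicit risk acceptance) recorded for every identified risk."""
        return (bool(self.backout_plan)
                and bool(self.approvals)
                and len(self.mitigations) >= len(self.risks))
```

Codifying the rule this way means an incomplete request cannot silently skip the security stakeholder.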
Once all stakeholders are content that the change is ready to progress, the acceptance criteria are recorded and testing begins. At this stage specific requirements are proven to have been met, and any exceptions can be reviewed by the change approval team. Finally, the change is approved and rolled out.
What is Security Testing?
Security testing is the process of uncovering weaknesses in systems that may be exploitable, and of ensuring the system meets all its non-functional assurance requirements, such as producing audit information, authenticating users properly and encrypting data. It includes activities such as compliance audits, vulnerability analysis and penetration testing.
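Much of this testing is tool-driven, but the first step is often simple service discovery: finding which network services a system actually exposes. A minimal sketch of that reconnaissance step using only the Python standard library (run it only against hosts you are authorised to test):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found
```

Real assessments use dedicated scanners with service fingerprinting and vulnerability databases; this sketch only shows the underlying idea of probing for reachable services.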
Most businesses require assurance that a new system or application doesn’t introduce unnecessary risk into the production environment or the product they are shipping. Testing is a critical step in the solution development lifecycle, and without it there would be many more defects in the systems we use. But the quality and rigour of testing in many organisations isn’t stringent enough to find the software and system bugs that are the primary target of hackers.
The ways we test security vary, depending on whether testing is intrinsic to the development process or only carried out once a system is in production. A holistic approach sees security testing integrated into each stage of the development lifecycle, where unit tests carried out during development also include security reviews of code and of individual system components or functions.
Unit testing helps security teams detect issues early in the development process, while vulnerability assessments and penetration testing typically factor into the end stages of the development process to find issues in the final product or solution.
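As a concrete example of a security-focused unit test, the sketch below validates untrusted usernames against an allow-list pattern and asserts that injection-style input is rejected (the validator and its rules are illustrative, not a library API):

```python
import re
import unittest

# Allow-list pattern: 3-32 characters of letters, digits, underscore, dot or hyphen.
USERNAME_RE = re.compile(r"[A-Za-z0-9_.\-]{3,32}")

def is_valid_username(name):
    """Reject input that could smuggle SQL or shell metacharacters."""
    return isinstance(name, str) and USERNAME_RE.fullmatch(name) is not None

class UsernameValidationTests(unittest.TestCase):
    def test_accepts_normal_names(self):
        self.assertTrue(is_valid_username("alice_01"))

    def test_rejects_injection_attempts(self):
        self.assertFalse(is_valid_username("alice'; DROP TABLE users;--"))
        self.assertFalse(is_valid_username("$(rm -rf /)"))
        self.assertFalse(is_valid_username("a" * 100))
```

Tests like these run on every build, so a later change that weakens the validation fails immediately rather than surfacing in a penetration test months later.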
Testing frequency is sometimes defined in your organisation’s security policy or solution development guidelines, whereas the method and style of testing is the detail that is reserved for technical security architects or secure coding experts to specify depending on the nature of the project.
You might decide that systems should be checked for vulnerabilities once a year, while all projects over a certain size or complexity should have code reviews at various stages of the development process and a penetration test prior to production. Clarity on test schedules means everyone understands the value of security testing and it becomes an inherent step in the project plan, so it’s costed and doesn’t get missed.
The Difference Between Vulnerability Assessments and Penetration Tests
As the name suggests, a vulnerability assessment is a useful security assurance test that uncovers security weaknesses in your systems or applications.
Vulnerability assessments provide important feedback to developers and inform the security team of cyber risks that need to be registered and managed. Findings are reported with statements of how easy they are to exploit, their potential business impact, and mitigation strategies; a missing server patch, for example, is a vulnerability, and applying the patch is the simplest form of mitigation. Once you know about a vulnerability, you can at least manage the risk that it poses, no matter what you decide to do about it in the short term.
A penetration test takes an offensive viewpoint against the target and puts it under simulated attack conditions. Penetration testing is a highly skilled job and testers use offensive hacking methods to test for weaknesses in your defences. This involves some reconnaissance, where vulnerabilities are detected, but it takes the test to the next stage of active exploitation.
Penetration testing differs from a vulnerability assessment in that it determines which vulnerabilities can actually be exploited. The tester can then report more accurate levels of risk to stakeholders, allowing them to choose how to mitigate it (or not).
For example, a vulnerability assessment might detect a server missing a critical patch and report it as a high risk. A penetration test of the same system might find the same vulnerability incredibly hard or impossible to exploit in the context of your business environment, so the risk rating drops from high to low once that context is included. The reality is that you would still fix the problem, but now you can address it in a controlled way, in a timeframe that suits you, rather than immediately.
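The idea of adjusting a raw finding for business context can be sketched as a simple weighting function. The weightings here are purely illustrative, not a standard such as CVSS:

```python
def contextual_risk(base_score, exploitable, internet_facing):
    """Adjust a raw vulnerability score (0-10) for business context.
    The weightings are illustrative only, not an industry standard."""
    score = base_score
    if not exploitable:
        score *= 0.2   # testers could not exploit it in this environment
    if not internet_facing:
        score *= 0.5   # reachable only from the internal network
    return round(score, 1)
```

A patch gap scored 9.0 by a scanner falls to 0.9 when the penetration test shows it cannot be exploited and the host is internal only, which is exactly the re-rating described above.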
What is Security Architecture?
Another term that is often misused in the security industry is security architecture. Architects are sometimes dismissed as people who make decisions without understanding the detail, and security architects are often viewed the same way. However, there are multiple levels of security architecture, similar to TOGAF’s model for Enterprise Architecture, spanning from the business architecture viewpoint all the way down to an operational view. Security architecture typically focuses on three things:
- When are security controls used?
- How are security controls implemented?
- Where are security controls positioned for maximum effect?
The core decision-making framework for architectural decisions is based on risk management, like most other aspects of security, and the when, how and where elements of any architecture drive the derivation of the security requirements we discussed at the beginning of this article. Most organisations come up with design patterns, which are standard ways of delivering specific aspects of an enterprise. For remote access, for example, the security architecture standard may mandate two-factor authentication and encryption for all transport protocols. The architect would not select a product or necessarily configure the operational VPN device; instead, the architect would help derive the test and assurance plan to ensure all new remote access solutions are proven to meet those standards.
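A design pattern like the remote access example above can be written down as a machine-checkable set of requirements, so any proposed solution can be compared against the standard. A minimal sketch (the pattern keys and vendor name are made up for illustration):

```python
# Illustrative remote-access design pattern: every solution must use MFA
# and encrypt all transport protocols.
REMOTE_ACCESS_PATTERN = {"mfa_required": True, "transport_encrypted": True}

def pattern_gaps(config, pattern):
    """Return the pattern requirements a proposed design fails to satisfy."""
    return [key for key, required in pattern.items() if config.get(key) != required]

# A proposed VPN design that meets the MFA requirement but not encryption.
proposed_vpn = {"mfa_required": True, "transport_encrypted": False, "vendor": "ExampleVPN"}
gaps = pattern_gaps(proposed_vpn, REMOTE_ACCESS_PATTERN)
```

The architect’s assurance plan then only has to prove each key in the pattern, regardless of which product the delivery team eventually selects.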
Security architecture is often seen as an unnecessary overhead that organisations can do without, but without standards or governance, there is no way to get consistent outcomes across large or disparate organisations.
How can we help?
CXO Security has the expertise to help your business on its development journey. We can work with you on defining security requirements, threat modelling, security architecture design patterns, and security test plans that are appropriate to your business’s function and industry. We also offer a range of security testing and assurance services to validate what you’re building as it’s being built, just prior to go-live, and for post go-live security checks. Remember that consideration for security early on in your project significantly reduces risks of vulnerabilities that could lead to costly rework and remediation. Engage us sooner and we’ll work with you to formulate the most appropriate statement of work.
We say security requirements are non-functional as they are often specified as qualitative outcomes, such as maintaining the confidentiality of personally identifiable information, assuring the integrity of control data or guaranteeing the availability of an interface. However, technical security requirements can also be expressed as functional requirements, similar to system requirements, so don’t focus all your effort on qualitative requirements, since specific technical requirements are much easier to prove (test) in terms of being production-ready.
A security representative should assess test results and specify whether additional security assurance tests are needed, such as a penetration test.