Penetration Testing vs. Vulnerability Scanning
To release reasonably secure products, vendors must integrate software security processes throughout all stages of the software development lifecycle. These stages include product architecture and design; implementation and verification; deployment and monitoring in the field; and then back to design, to address the changing threat landscape, market needs, and product issues.
In this blog post, we will focus on the verification phase of the security process, which is meant to ensure that security features have been implemented correctly. The process includes finding known weaknesses and vulnerabilities in the product along with relevant exploits; identifying security gaps; and gaining an overall picture of the product’s security profile. This needs to be done for the finished product as a whole, as well as for its individual components, whether they were developed entirely in-house, built with some open source code, or obtained from third party suppliers.
The same responsibilities apply whether you are the vendor that introduced the product to the market, an integrator of another party’s product, or an OEM/service provider using an off-the-shelf product under your brand. All of the parties above can be liable for security issues that are found in the product software. The security verification process can assist by assessing the product’s security standing and possible risks, and identifying issues that need to be fixed. For product purchasers, it can provide tangible proof to substantiate (or refute) vendor-provided security information.
However, there are different ways to achieve these goals. Traditional approaches involve internal quality assurance during the development and verification stages; penetration testing by independent external organizations; and external certification. Meanwhile, newer approaches focus on automated testing and vulnerability scanning. Each of these methods has its pros and cons, and a combination of some or all of them could be necessary to address all relevant issues.
To understand what is appropriate for your specific needs, we will examine each approach in detail.
Dedicating Quality Assurance for Security Functions
Quality Assurance (QA), an established stage of the development process, is typically performed by an internal team. The QA staff may be part of the development team, or it may be a separate team, possibly under separate management, which gives it a degree of independence. The QA team's structure can affect its approach, how much it is influenced by input from developers, and which tests it runs in practice.
A good QA team takes an adversarial approach to testing, trying to find ways to break the product code and make it fail (negative tests), very similar to the approach taken by attackers and pen-testers. More commonly, however, QA teams test whether product code performs the required functionality as expected (positive tests).
For example, when testing a software update mechanism, positive tests check the code's robustness and its ability to apply valid updates. Negative tests check how the code handles invalid update contents, incorrect signatures, or invalid certificate chains. These negative cases are the ones more likely to turn up in an attack scenario.
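To make the distinction concrete, here is a minimal sketch of what positive and negative tests for an update mechanism might look like. The apply_update function, the HMAC-based signing, and the keys are hypothetical stand-ins for a real signature and certificate-chain verification flow, not an actual product's code.

```python
import hmac
import hashlib
import unittest

# Hypothetical shared secret; stands in for a real PKI signing chain.
SIGNING_KEY = b"device-signing-key"

def sign(payload: bytes) -> bytes:
    """Produce an HMAC 'signature' for the update payload (illustrative only)."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def apply_update(payload: bytes, signature: bytes) -> bool:
    """Accept a firmware update only if its signature is valid (illustrative only)."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

class UpdateMechanismTests(unittest.TestCase):
    def test_valid_update_is_applied(self):
        # Positive test: a correctly signed update must be accepted.
        payload = b"firmware v2.0"
        self.assertTrue(apply_update(payload, sign(payload)))

    def test_tampered_contents_are_rejected(self):
        # Negative test: contents modified after signing must be rejected.
        signature = sign(b"firmware v2.0")
        self.assertFalse(apply_update(b"firmware v2.0 + backdoor", signature))

    def test_incorrect_signature_is_rejected(self):
        # Negative test: a signature produced with the wrong key must be rejected.
        payload = b"firmware v2.0"
        bad_signature = hmac.new(b"attacker-key", payload, hashlib.sha256).digest()
        self.assertFalse(apply_update(payload, bad_signature))

if __name__ == "__main__":
    unittest.main()
```

Note how each negative test asserts that the update is rejected, mirroring the way an attacker would probe the mechanism.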
The more QA teams become overloaded with work, the more they focus on positive tests, which are necessary to launch the product. This usually means that they tend to sacrifice negative tests, which are required to verify that the product is secure.
To properly perform the security function, QA teams must have dedicated resources and develop sufficient security specialization. At the very least, other security professionals in the organization would have to get involved regularly to instruct the QA team and collaborate on the testing plan. However, this reduces QA’s usual (and necessary!) focus on functional testing.
For these reasons, few organizations commit the QA resources required to ensure that they are releasing secure connected products. Instead, to establish the security standing of a product, most organizations opt for external penetration-testing.
Performing Penetration Testing for Deep Analysis
Penetration testing is security verification that is performed by an external team of specialists with an offensive approach. Instead of validating product functionality, penetration testing focuses only on finding security vulnerabilities and weaknesses.
Pen-testers follow either a white-box or a black-box methodology, which differ in how much exposure the testers get to internal documentation and even to the product source code. In the white-box approach, testers have access to internal information, much like an internal QA team. In the black-box approach, all they receive is the live product and any publicly available documentation, which is the same information a real-world attacker would have access to.
Pen-testers then set up a testing environment that, at a minimum, includes the product itself, and may extend to an entire system instance with cloud or server accounts. This enables the testers to put the product through various onboarding and update flows, and to test invalid inputs submitted to the cloud, without risking the vendor's production deployment.
Pen-testers’ knowledge and skills lean towards the offensive side, which helps to simulate a real-world attacker’s attitude. To produce a full vulnerability assessment, they examine the product’s external properties, such as the network interface and all communications passing through it, as well as its internal properties, such as a device’s firmware image contents. They try to compromise the product, starting with milder attacks, such as denial of service and data exposure; intensifying their attempts with more intrusive ways of gaining access and hijacking control; and going all the way to permanent modification of the software logic, as well as ways to corrupt data and/or logic in the cloud.
Penetration testing can have major benefits over internal testing because of the testers' specialized skill set and organizational independence. Although a good test report comprehensively covers the product's security issues, in practice it can fall short. Because most penetration testing is performed with a black-box approach, testers often focus on the product's externally exposed components, such as web applications and remote login interfaces, at the expense of vulnerable internal features. In addition, testers are incentivized to find the most impressive vulnerabilities in a limited time, so their findings tend towards "low-hanging fruit": easily achievable attacks. Deeply analyzing the product's security architecture may not be in their interest.
Another downside is that penetration testing is highly subjective and largely depends on pen-testers’ experience. Two teams will produce different reports based on variables such as their strengths and their tools.
Effective pen-testing teams use automated tools, starting with utilities like nmap for port scanning and extending to frameworks such as Metasploit or Detectify. Automated tools simplify the initial OSINT process, map the attack surface, find points of entry for attackers, and so on. A vulnerability scanning tool can help pen-testers identify security vulnerabilities to list in their assessment report, including known vulnerabilities in both third-party libraries and open source code. These can point them to promising areas for more thorough investigation, or help them gain a foothold in the product's code which they can then use for further attacks. More advanced tools will turn up more sophisticated results, such as deeper architectural issues. We'll take a closer look at automated tools below.
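As a simplified illustration of what such a scan does at its core, the sketch below checks a product's declared dependencies against a locally maintained list of known-vulnerable versions. The package names, versions, and CVE identifiers are placeholders, not real advisories; real scanners query continuously updated vulnerability databases.

```python
# Placeholder advisory data: (package, version) -> CVE IDs. Illustrative only.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): ["CVE-0000-0001"],
    ("opensslish", "0.9.8"): ["CVE-0000-0002", "CVE-0000-0003"],
}

def scan_manifest(dependencies: dict[str, str]) -> dict[str, list[str]]:
    """Return a mapping of 'name==version' to the CVE IDs it is flagged for."""
    findings = {}
    for name, version in dependencies.items():
        cves = KNOWN_VULNERABLE.get((name, version))
        if cves:
            findings[f"{name}=={version}"] = cves
    return findings

if __name__ == "__main__":
    # Hypothetical dependency manifest extracted from the product build.
    product_dependencies = {"examplelib": "1.2.0", "loggingtool": "3.4.1"}
    for package, cves in scan_manifest(product_dependencies).items():
        print(f"{package}: {', '.join(cves)}")
```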
Although penetration testing reports can be used as a stamp of independent certification, which may help convince customers of the product's security standing, it is usually better to achieve certification using a dedicated process. We will review that option in the next section.
Receiving Independent Security and Compliance Certification
Some markets require certification by an independent laboratory or compliance with a standard. This is most obvious where safety is a major concern; in the automotive, medical, and industrial sectors, for example, compliance with various standards is required by law. Most markets still don't define clear requirements for cybersecurity compliance when it comes to connected products, such as embedded or IoT devices, but regulators are increasingly introducing legislation and labeling schemes to that effect. In other verticals, certification may not be mandatory, but it can still confer a distinct competitive advantage, especially as customer demand for security features continues to grow. For these reasons, vendors often submit their products for independent certification.
Cybersecurity certification programs are usually defined around a standard document. Relevant standards run the gamut from closed and proprietary ones, typically developed by the certifying organization itself (such as the UL 2900 family of standards), to free and open ones (such as the NIST CMVP documents, or the ENISA and IoTSF standards), and everything in between (for example, the documents for ISA/IEC 62443, which are available following email registration).
For each standard, the vendor typically must go over its content and implement all procedural and technical requirements. The vendor is then expected to submit detailed evidence documenting how the product complies with the standard, or where it deviates from it. The certification process itself varies widely. On one end of the spectrum is self-certification, which only requires the vendor to fill out a questionnaire and publish the answers so that prospective customers can access them. On the other end is certification by an independent organization, with a laboratory performing its own exhaustive product tests and reviewing the documentation submitted by the vendor.
Either way, much of the burden of proof remains with the vendors since they must prepare the product and the accompanying documentation. This typically requires considerable development efforts, as well as payments to the certifying body and to any laboratories or consultants involved. Even after certification has been completed, additional costs may be incurred as the certification may need to be maintained or renewed when the product is updated or when the product line is extended.
The subject matter of cybersecurity standards varies widely. Some standards are dedicated to the vendor's documentation and the security process itself (some even mandate using penetration testing or automated tools to find security vulnerabilities and known exploits). Some focus on secure coding techniques. Others are even more technical, addressing product architecture and configuration, or at least including some technical chapters.
Most standards keep their technical requirements at a relatively high level, with only a few providing the actual technical instructions necessary to meet them (for some positive examples, see CIS Benchmarks, DoD STIGs or the AGL Security Blueprint). This makes it far more difficult for vendors to even estimate their initial level of compliance before they begin the certification process, and significantly increases the costs required to complete it. This is where automated tools can help reduce the efforts and costs involved in the process.
Automating Vulnerability Scanning for Objective Results
There are many types of automated tools, each covering different aspects of product security verification. Some use a dynamic approach, where a running application or device is scanned over the network in order to diagnose its web server and communication security. Others use a static approach, where the product's source code or binary image is scanned.
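A minimal sketch of a dynamic check might look like the following: it connects to a running product over the network and flags an outdated TLS protocol version. The hostname is a placeholder, and real dynamic scanners probe many more services, cipher suites, and web endpoints than this.

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> list[str]:
    """Connect to a live service and flag outdated TLS protocol versions (illustrative)."""
    findings = []
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE  # we only inspect the negotiated protocol here
    # Allow old protocols locally so that a weak server configuration can be detected.
    context.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            protocol = tls.version()
            if protocol in ("SSLv3", "TLSv1", "TLSv1.1"):
                findings.append(f"{host}:{port} negotiated outdated protocol {protocol}")
    return findings

if __name__ == "__main__":
    print(check_tls("device.local"))  # placeholder hostname for the device under test
```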
Dynamic tools require a running product, whereas static tools are more flexible since they only require a software file. Another difference is that dynamic tools are limited to the product's external behavior, whereas static tools also examine the software internals. Static tools can cover secure coding practices, find known security vulnerabilities and exploits, identify potential zero-day vulnerabilities, and even highlight configuration and architectural issues, from the lowest layers of the software stack (the bootloader and operating system internals) to the upper application layers.
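By contrast, a static check works directly on the software image. The sketch below runs a few simplified checks over an extracted firmware filesystem: embedded private keys, telnet enabled at boot, and an empty root password. The paths and patterns are illustrative; real static analyzers apply hundreds of far more precise rules to binaries, configuration, and package metadata.

```python
import re
from pathlib import Path

def scan_firmware_root(root: Path) -> list[str]:
    """Run a few simplified static checks over an extracted firmware root filesystem."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        # Hard-coded private key material shipped inside the image.
        if "BEGIN RSA PRIVATE KEY" in text or "BEGIN PRIVATE KEY" in text:
            findings.append(f"{path}: embedded private key material")
        # Telnet enabled in an init script (weak remote access path).
        if path.name.endswith(".rc") or "init.d" in str(path):
            if re.search(r"^\s*telnetd", text, re.MULTILINE):
                findings.append(f"{path}: telnet service enabled at boot")
    # Empty root password entry in the shadow file.
    shadow = root / "etc" / "shadow"
    if shadow.is_file():
        for line in shadow.read_text(errors="ignore").splitlines():
            user, _, rest = line.partition(":")
            if user == "root" and rest.split(":")[0] == "":
                findings.append(f"{shadow}: root account has an empty password")
    return findings

if __name__ == "__main__":
    for finding in scan_firmware_root(Path("extracted_rootfs")):  # placeholder path
        print(finding)
```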
A well-developed tool can run hundreds or even thousands of individual scans or tests, and static testing has the advantage of producing in minutes the kind of results that would take manual penetration testers days to reach. The entire testing process can run automatically and be seamlessly integrated into a CI/CD flow, potentially scanning every product and every version, which is impossible without automation.
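Integrating such scans into CI/CD usually comes down to a small gate step that fails the build when findings cross a severity threshold. The sketch below assumes a hypothetical JSON report format ({"findings": [{"id": ..., "severity": ...}]}); it is not tied to any specific scanner's output.

```python
import json
import sys

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_path: str, fail_on: str = "high") -> int:
    """Return a non-zero exit code if the scan report contains blocking findings."""
    with open(report_path) as f:
        report = json.load(f)
    threshold = SEVERITY_ORDER[fail_on]
    blocking = [
        finding for finding in report.get("findings", [])
        if SEVERITY_ORDER.get(finding.get("severity", "low"), 1) >= threshold
    ]
    for finding in blocking:
        print(f"BLOCKING: {finding['id']} ({finding['severity']})")
    return 1 if blocking else 0  # a non-zero exit code fails the pipeline stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```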
A good automated vulnerability scanning tool must be based on rich research and development, including significant contributions by cybersecurity experts and penetration testers. This enables accurate detection of vulnerabilities as well as intelligent prioritization based on in-depth understanding of the threats.
Another benefit of automated security testing is that its results are not subjective. The same tool always returns consistent, objective results for a full product or an individual software component, making automated tools very useful for external certification.
Automated tools can also help with certification if they include support for external standards. They can analyze software and output a gap report with respect to a given standard, which provides an understanding of the time and effort required to achieve compliance. This can be done in minutes or hours, whereas the same process, when done manually, can take weeks of documentation and analysis.
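Conceptually, a gap report is a mapping from the standard's requirements to the automated checks that cover them. The sketch below illustrates the idea with placeholder requirement IDs and check names; it does not reflect any specific standard.

```python
# Placeholder mapping from requirements of a target standard to automated checks.
REQUIREMENT_CHECKS = {
    "REQ-4.1 Secure boot enabled": ["secure_boot_check"],
    "REQ-4.2 No default credentials": ["default_password_check", "empty_root_password_check"],
    "REQ-5.3 Encrypted transport only": ["tls_config_check", "plaintext_service_check"],
}

def gap_report(passed_checks: set[str]) -> dict[str, bool]:
    """Return each requirement and whether all of its mapped checks passed."""
    return {
        requirement: all(check in passed_checks for check in checks)
        for requirement, checks in REQUIREMENT_CHECKS.items()
    }

if __name__ == "__main__":
    results = gap_report({"secure_boot_check", "tls_config_check", "plaintext_service_check"})
    for requirement, compliant in results.items():
        print(f"{'PASS' if compliant else 'GAP '}  {requirement}")
```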
Security Verification is a Must-Have
To summarize, security verification is a necessity, but there are different ways to achieve it. Manual methods can yield excellent results but require a dedicated team, as well as time and effort, and sometimes can produce skewed results, especially if incentives are misaligned. Vendors with broad product portfolios and time-to-market pressures will have to leverage automated testing in order to validate their products’ security at scale.
Independent certification has its own downsides, primarily high costs and lengthy work processes, which are made even worse by the need to do it all over again for each version. On the other hand, independent certification is sometimes necessary for entry to market and is probably the most convincing mark of security quality that vendors can use for competitive advantage. Automated tools can aid with certification as well.
As new cyber-attacks involving software vulnerabilities continue to make headlines, and regulators get more and more involved in mandating security norms, security verification is becoming a pressing need. Because automated tools can provide on-demand, detailed security verification for a wide range of products, their share of security verification tasks will continue to rise.
Learn how JFrog Xray can help you automate security as part of your software development lifecycle.