Especially for companies that operate larger infrastructures, a pentest can often provide more insights than is typically assumed. We show you how to interpret pentest results correctly and get the maximum benefit from them.
One of the main reasons why this benefit often goes unrealized is a flawed perspective on the results of a test. Typical misconceptions are:
Misconception 1: A pentest finds all vulnerabilities that are present on the target
A first important realization is that a penetration test can never detect all vulnerabilities on a target system. There are two main reasons for this: first, the test is limited in time; second, in most tests not all configuration parameters of the system are known to the testers.
Conclusion: A pentest alone cannot make a target application more secure. A pentest report without critical findings does not mean that the application is free of vulnerabilities.
Consequence: Use the full range of testing options for application audits: code reviews, peer reviews, and secure software development training. The earlier vulnerabilities are discovered, the greater the benefit. Early code reviews focusing on weaknesses, code complexity and "bad smells" can uncover errors in the design, the data model or the developers' understanding. A pentest usually only takes place at a release stage where major changes to the application are no longer possible.
If vulnerabilities are identified in a pentest, it should always be evaluated whether the same errors are also present in other application components. Particularly in the case of input validation vulnerabilities, a test can often not identify every vulnerable parameter. It should also be analyzed whether the flawed design has been reused in other applications. A simple first step is a codebase-wide search for sibling occurrences of the reported pattern, as in the sketch below.
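The following minimal sketch illustrates this idea; the file glob and the regular expression are hypothetical placeholders that you would adapt to the actual finding. Here it searches for SQL queries assembled via string formatting, the kind of pattern behind a single reported injectable parameter:

```python
import re
from pathlib import Path

# Hypothetical pattern derived from a pentest finding:
# .execute() calls whose query string is built via f-strings or "%s" formatting.
VULNERABLE_PATTERN = re.compile(r"\.execute\(\s*(f[\"']|[\"'][^\"']*%s)")

def find_siblings(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, code line) for every suspicious match."""
    hits = []
    for path in Path(root).rglob("*.py"):  # adapt the glob to your stack
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if VULNERABLE_PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file, lineno, line in find_siblings("./src"):
        print(f"{file}:{lineno}: {line}")
```

Such a quick sweep does not replace a proper review, but it turns one finding for one parameter into a list of candidate siblings to verify.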
Misconception 2: A pentest makes a statement about how secure the system is against (future) attacks
A pentest is always just a snapshot: it covers the currently known vulnerabilities and the target system in the configuration and version it had at the time of the test. Just because a current report shows "low" as the overall risk does not mean that no vulnerability will be published in the future that compromises the entire system.
Pentests should therefore not be seen as a one-off measure, but rather as a method for regularly checking an application or IT system for known vulnerabilities.
Misconception 3: Risk assessment equals priority
We often see pentest results being processed further without any closer discussion of the risk; the risk assessment of the pentesters is treated as set in stone. We would like to point out that discussing the identified vulnerabilities with your IT security team can lead to a more meaningful weighting or prioritization of the results. Depending on the threat model you have developed (e.g. in a risk/impact analysis as part of an ISO 27001 certification), there may be vulnerabilities whose remediation should be prioritized differently than the external assessment in the pentest report suggests.
As Pentest Factory, we are happy to support this discussion (e.g. in a joint meeting) in order to create an overall picture of the target system and its risk within your environment.
Another aspect is the risk assessment system itself. When the standard CVSS scheme is used (without environmental metrics), the overall risk is calculated by a formula that leaves us as testers little room for context-dependent upgrading or downgrading of risks. For example, the "Attack Complexity" metric only offers the values "Low" and "High"; attacks of medium complexity cannot be mapped at all. The same applies to the other metrics of the CVSS system. As a result, we may have to report a finding we consider of medium criticality as "high risk" simply because the CVSS formula produces that value.
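To make this tangible, here is a minimal sketch of the CVSS 3.1 base score calculation, restricted to scope "unchanged"; the numeric weights are the ones published in the CVSS 3.1 specification. Flipping Attack Complexity, which knows only two steps, moves the same finding across an entire risk class:

```python
import math

# CVSS 3.1 metric weights (subset for scope "unchanged"),
# taken from the public CVSS 3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity: only two steps
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The same finding, only Attack Complexity flipped:
print(base_score("N", "L", "N", "N", "H", "H", "N"))  # 9.1 -> "Critical"
print(base_score("N", "H", "N", "N", "H", "H", "N"))  # 7.4 -> "High"
```

There is no intermediate step between the weights 0.44 and 0.77, so an attack of "medium" complexity has to be forced into one of the two values.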
In general, it makes sense to discuss the individual results and assigned risks in the team.
Misconception 4: Fixing vulnerabilities solves the problem
The result of a pentest is a final report. This lists identified weaknesses and provides specific recommendations for remedying the findings.
At first glance, it appears that the main task after the test is completed is to eliminate these weaknesses.
However, as a pentest service provider, we often see that remedying vulnerabilities is the only activity resulting from a test. For this reason, it is all the more important to understand that the real value of a pentest lies in identifying flawed processes. For every weak point it is worth asking: "Why did this vulnerability occur? How can we correct the process behind it?"
This is the only way to ensure that, for example, a missing patch management process or inadequate asset management is corrected, and that software deployments are not running with missing updates again a month later.
Since we very often see that a root cause analysis is omitted after the pentest has been completed, we would like to show a second example in which understanding the process that went wrong can bring significant added value in terms of security:
- In a pentest report, it is noted that a file with sensitive environment variables was stored in the web directory of the application server. The file, named ".env", was already reported to the customer during the pentest, and the customer immediately removed it. If the customer stops his remedial measures at this step, he skips a complete root cause analysis and possibly overlooks further existing vulnerabilities.
- Let's ask ourselves: why did the .env file make it into the web directory? After analyzing the development repository (e.g. Git), we discover that a developer created the file two months before release and stored sensitive environment variables in it, including the AWS secret key and the passwords of the administrator account. The developer forgot to exclude the file from repository indexing, which is achieved by adding it to the ".gitignore" list.
- How can we rectify this error in the future?
  - Finding 1: Possible cause in a developer misunderstanding: "Developers do not understand the risk of hardcoded passwords and keys."
    -> Awareness seminar with developers on the topic of secure software development
    -> Monthly session on "secrets in the source code"
  - Finding 2: The fault went unnoticed for two months and was only discovered in the pentest.
    -> Options for automatic detection of secrets: static source code analysis, automated analysis of commits, automated scans of the source code repository
    -> Customization of the CI/CD pipeline to automatically stop sensitive commits (see the sketch after this list)
  - Finding 3: Poor management of sensitive keys.
    -> Introduce a central tool for secrets management; this also improves the enforcement of password policies and password rotation
- Have we made this mistake several times in the past?
  - Insight 1: The developers have not programmed just one application. We find that the same error has also been made in a neighboring application.
    -> The pentest result can be transferred to similar systems and processes
  - Insight 2: The version management tool contains a history of all changes ever made.
    -> Analysis of the entire repository history for sensitive commits
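To make the CI/CD measure from Finding 2 concrete, here is a minimal sketch of a git pre-commit hook in Python. The regex list is a deliberately simplified example (an AWS access key ID, a generic hardcoded credential assignment, and a private key header); dedicated secret scanners cover far more patterns:

```python
#!/usr/bin/env python3
"""Minimal pre-commit sketch: block commits containing obvious secrets.

Save as .git/hooks/pre-commit (executable) or call it from the CI pipeline.
"""
import re
import subprocess
import sys

# Deliberately simplified example patterns, not a complete ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                  # AWS access key ID
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+"),  # hardcoded credential
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH)? ?PRIVATE KEY-----"),    # private key header
]

def staged_files() -> list[str]:
    """Names of files added/copied/modified in the staged changeset."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.split()

def main() -> int:
    findings = []
    for name in staged_files():
        # Scan the staged blob, not the working-tree version of the file.
        blob = subprocess.run(
            ["git", "show", f":{name}"], capture_output=True, text=True
        ).stdout
        for lineno, line in enumerate(blob.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{name}:{lineno}: possible secret")
    if findings:
        print("Commit blocked, possible secrets found:", *findings, sep="\n")
        return 1  # non-zero exit code aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In addition, a ".gitignore" entry for ".env" files prevents such a file from being staged in the first place; the hook then only acts as a second line of defense.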
Misconception 5: High risks in the final report = "The product is bad"
Just because a "critical" vulnerability is identified does not mean that the development or the product is "bad". Products that provide a particularly large number of functions also expose a particularly large attack surface. The best examples are well-known products such as browsers or operating systems, which release monthly security patches.
As long-standing experts in the field of cybersecurity, we see that the biggest problems arise from a defensive mindset and an inadequate response to risks. Specifically, the following fatal decisions are made:
- Wrong decision 1: "The less we disclose about the vulnerability, the less negative attention we generate."
-> Maximum transparency is the only correct response, especially after a vulnerability becomes known. What exactly is the weak point? Where does it occur? What is the worst-case scenario? Only maximum transparency makes it possible to determine the exact cause and to ensure that all parties involved understand the risk well enough to initiate countermeasures.
The vulnerability should never be seen as the fault of an individual or the company, but as an opportunity to react. The response to a vulnerability (not the vulnerability itself) determines to a large extent what damage can actually be done.
- Wrong decision 2: The search for someone to blame for the weakness begins.
-> This leads to a fatal error culture in the company, in which mistakes are no longer openly communicated and corrected.
-> Errors are interpreted as failure. Learning effects and joint growth fail to materialize.
- Wrong decision 3: To make the identification of a critical vulnerability less likely from the outset, a deliberately narrow scope is selected for the pentest. A few examples:
- Only one specific user front end is considered "in scope"; administrative components must not be tested.
- A user environment is provided for the pentest that contains no or insufficient test data, so that essential application functions cannot be tested.
- "No data may be sent in the productive environment." The pentest therefore cannot effectively test input processing.
- The use of intrusion prevention systems or web application firewalls is not disclosed. The pentest is hindered by these systems, and the result no longer adequately reflects the risk of the application itself.
-> These and other restrictions lead to a distorted risk picture of the target system. Instead of vulnerabilities being recognized as early as possible, the complexity and risk potential of the application grow step by step. The later a vulnerability is detected, the more time-consuming and therefore more costly it becomes to close.
Conclusion
As a pentest service provider, it is important to us that our customers get the maximum benefit from a pentest. For this reason, we regularly hold team discussions to identify trends and derive the best possible recommendations. This article is the result of these discussions over the last few years and aims to open up new perspectives on pentest results.
Do you have questions or need support with pentesting, secure software development or improving internal processes? Please use the contact form; we will be happy to assist you.