General Security Advice for Developers
As penetration testers, we are often asked by development teams or decision makers for advice on making their applications more secure. While penetration test reports generally include specific advice about each reported vulnerability, the questions I’m referring to are more high-level and geared toward avoiding security problems in the first place. In some organizations, decision makers expect penetration testing teams to hand-hold developers through implementation guidance or training. While this low-level, implementation-specific guidance isn’t really the purview of a penetration tester, it’s understandable that development teams and upper management would look to us for general guidance and advice.
Because of this, I thought it would be helpful to write a high-level blog post offering some advice that developers can use when making implementation decisions. Having been a developer myself, I understand the confusion that security recommendations can cause, and the frustration that comes from the conflicting goals of security and project deadlines.
What's the objective?
One major source of confusion between developers and penetration testers or security auditors is a difference in perspective. A developer’s goal, in general, is to take a set of project specifications and flesh them out into a working application. They are given tasks and subtasks to complete on a timeline, and their focus is on making the application work in a specific way. They typically have a “user story” which describes how the application is expected to behave from a user’s point of view. This is the viewpoint a developer takes when building the application. Their focus is exactly that: building.
A penetration tester, on the other hand, looks at an application from an adversarial perspective. They ask themselves: how can I make this application function in a way that was never intended? And once I have, how can I exploit that behavior to gain access to things I shouldn’t, or perform actions I shouldn’t be permitted to? Their mindset is a malicious one; they represent the “chaotic evil” of the world, in contrast to the pure and hopeful perspective of the developer.
To understand security problems, developers need to peer into that abyss of unintended behavior and malicious intent. Instead of verifying that the application works as intended, they need to ask themselves: “but what if it didn’t?” As a penetration tester, I’ve heard countless times from development teams: “but no one would ever do that…”. Speaking as a representative of malicious action, I can tell you: yes, they absolutely would. To protect yourself and your application from malicious actors, you have to assume the worst will happen. If you assume the worst and protect against it, then you are protected regardless of a user’s intent.
The approach is a simple one, and it works at every level, from the macro (planning) phase down to the micro (implementing a specific method). Ask yourself: “if someone bad were to use this application or method, what is the worst possible thing they could do with it?” Once you’ve answered that question, assume it will happen and deal with it appropriately.
I’ll give a few examples. First, let’s look at the high level: the planning stage. Let’s say you are planning to build a new application which serves documents to users. The documents are stored in a database system and contain sensitive data. We ask ourselves: what is the worst possible thing a malicious actor could do with this? Several things come to mind:
1. A malicious user could access or modify documents which they should not have access to. This would result in exposure of sensitive data or potential compromise of other users.
2. A malicious actor could gain access to the database storing the documents and either steal or modify them, or use that access as a stepping stone to get further into the network.
3. A malicious actor may be able to compromise or destroy the application itself.
From these high-level conclusions, it should then be possible to drill down into how a malicious actor might actually accomplish these things and protect the application at a more micro level.
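To make the first of these concrete, here is a minimal sketch of a server-side authorization check. Everything in it is illustrative: `serve_document`, the `store` and `session` objects, and their attributes are assumptions made for this example, not a specific framework’s API. The point is that the ownership check happens on the server, against the authenticated session, and fails closed.

```python
def serve_document(session, document_id, store):
    """Return a document only if the authenticated user may read it."""
    document = store.get(document_id)
    if document is None:
        # Use the same error for "not found" and "not allowed" so a
        # malicious caller can't enumerate valid document IDs.
        raise PermissionError("document unavailable")
    # Assume the worst: document_id may have been tampered with, so we
    # authorize against the server-side session, never client input.
    if session.user_id not in document.allowed_user_ids:
        # Fail closed: deny unless explicitly permitted.
        raise PermissionError("document unavailable")
    return document
```

Returning an identical error in both branches is a deliberate design choice: it denies a malicious user the ability to probe which document IDs exist.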
Now that we’ve examined the issue from a high level, let’s look at another example from a micro, implementation-specific perspective. Let’s assume a developer is tasked with building a method that takes in a string value and appends formatting to it. We can ask ourselves: what is the worst thing a malicious actor could do? In this case, it’s most likely injecting something malicious into the string passed into the method. We can then examine what sorts of injections might produce malicious output in our specific technical stack, such as cross-site scripting, and implement sanitization against them.
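As a sketch of that idea, the method below escapes user input before wrapping it in markup, so hostile strings are rendered inert. The method name `append_formatting` and the HTML output context are assumptions made for this example, not taken from any particular codebase.

```python
import html

def append_formatting(user_value: str) -> str:
    """Wrap a user-supplied string in markup, escaping it first."""
    # html.escape neutralizes <, >, & and (by default) both quote
    # characters, so the input can't break out of the HTML context.
    safe_value = html.escape(user_value)
    return f"<strong>{safe_value}</strong>"

# A hostile input is displayed as text rather than executed as script:
print(append_formatting("<script>alert(1)</script>"))
# <strong>&lt;script&gt;alert(1)&lt;/script&gt;</strong>
```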
This advice may seem obvious, but the perspective shift is essential for developers to understand the mindset of those who seek to exploit applications. Merely asking yourself, “what *could* a bad actor do here, no matter how unlikely?” is enough to cover most potentially vulnerable scenarios and drastically increase the security of an application. It’s not unlike developing unit tests: look at the method or piece of functionality, determine where something might go wrong, then build up assurances that it won’t happen. Incidentally, these sorts of scenarios can be included in actual unit tests as well, as in the example below.
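For instance, adversarial scenarios against the hypothetical `append_formatting` method from the earlier sketch might be captured as ordinary unit tests like these:

```python
import unittest

# Assumes append_formatting from the earlier sketch is in scope
# (e.g., defined in the same module or imported from it).

class MaliciousInputTests(unittest.TestCase):
    """The 'no one would ever do that' cases, written down as tests."""

    def test_script_tag_is_neutralized(self):
        result = append_formatting("<script>alert(1)</script>")
        self.assertNotIn("<script>", result)

    def test_quotes_are_escaped(self):
        # Attribute-breakout attempt; the quote must come back escaped.
        result = append_formatting('" onmouseover="alert(1)')
        self.assertIn("&quot;", result)

if __name__ == "__main__":
    unittest.main()
```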
While deceptively simple, this approach gives developers a straightforward and accessible way to improve the security of their applications.