Figure 1-6: A possible supply chain for Bob’s dollhouse
What are the potential security (safety) issues with this situation? The glue provided in this kit could be poisonous, or the ink used to decorate the pieces could be toxic. The dollhouse could be manufactured in a facility that also processes nuts, cross-contaminating the boxes and causing allergic reactions in some children. Incorrect parts could be included, such as a sharp component that is not appropriate for a young child. All of these situations are likely unintentional on the part of the factory.
When creating software, we also use a supply chain: the frameworks we use to write our code; the libraries we call to write to the screen, do advanced math calculations, or draw a button; the application programming interfaces (APIs) we call to perform actions on behalf of our applications; and so on. Worse still, each one of these pieces usually depends on other pieces of software, and all of them are potentially maintained by different groups, companies, and/or people. Modern applications are typically made up of 20–40 percent original code3 (what you and your teammates wrote), with the rest made up of these third-party components, often referred to as “dependencies.” When you plug dependencies into your applications, you accept the risks of the code they contain that your application uses. For instance, if you add a component to process images rather than writing your own, and it has a serious security flaw in it, your application now has a serious security flaw in it, too.
This is not to suggest that you could write every single line of code yourself; that would not only be extremely inefficient, but you may still make errors that result in security problems. One way to reduce the risk, though, is to use fewer dependencies and to carefully vet the ones that you do decide to include in your software. Many tools on the market (some are even free) can verify if there are any known security issues with your dependencies. These tools should be run not only every time you push new code to production; your code repository should also be scanned regularly.
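To make this concrete, here is a minimal sketch, in Python, of the kind of check such tools perform. The vulnerability list below is made up for illustration only; real tools such as npm audit or pip-audit match your manifest or lock file against curated vulnerability databases and handle version ranges far more carefully.

# A minimal sketch of what a dependency-checking tool does. The advisory
# entries below are illustrative only, not real vulnerability data.
KNOWN_VULNERABLE = {
    ("event-stream", "3.3.6"): "malicious code added by a new maintainer",
    ("imagelib", "1.2.0"): "hypothetical flaw in image parsing",
}

def audit(dependencies):
    """Report any (name, version) pair that matches a known advisory."""
    return [
        f"{name} {version}: {advisory}"
        for (name, version) in dependencies
        if (advisory := KNOWN_VULNERABLE.get((name, version)))
    ]

if __name__ == "__main__":
    # In practice, this list would come from your manifest or lock file.
    for finding in audit([("event-stream", "3.3.6"), ("left-pad", "1.3.0")]):
        print("VULNERABLE:", finding)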
SUPPLY CHAIN ATTACK EXAMPLE
The open source Node.js module called event-stream was passed on to a new maintainer in 2018 who added malicious code into it, waited until millions of people had downloaded it via npm (the package manager for Node.js), and then used the malicious code to steal bitcoins from Copay wallets, whose wallet software used the event-stream library.4
Another defense tactic against an insecure software supply chain is to use frameworks and other third-party components made by known companies or by recognized and well-respected open source groups, just as a chef would use only the finest ingredients in the food they make. You can (and should) take care when choosing which components make it into the final cut of your products.
There have been a handful of publicly exposed supply chain attacks in recent years, where malicious actors injected vulnerabilities into software libraries, firmware (low-level software that is a part of hardware), and even into hardware itself. This threat is real and taking precautions against it will serve any developer well.
Security by Obscurity
The concept of security by obscurity means that if something is hidden, it will be “more secure,” as potential attackers will not notice it. The most common implementation of this is software companies hiding their source code, rather than publishing it openly on the internet (done both to protect their intellectual property and as a security measure). Some go as far as obfuscating their code, changing it such that it is much more difficult, or impossible, to understand if someone attempts to reverse engineer the product.
NOTE Obfuscation means making something hard to understand or read. A common tactic is encoding all of the source code in ASCII, Base64, or hex, but that is quite easy for professional reverse engineers to undo. Some companies will double or triple encode their code. Another tactic is to XOR the code (an assembler command) or to create a custom encoding schema and apply it programmatically. There are also products on the market that can perform more advanced obfuscation.
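To illustrate how little protection simple encoding provides, here is a short Python sketch (an assumption for illustration, not a technique from any specific product) that “obfuscates” a line of source code with Base64 and with a single-byte XOR, then reverses both in one step:

import base64

source = 'print("secret business logic")'

# "Obfuscate" the source by Base64-encoding it...
encoded = base64.b64encode(source.encode())
print(encoded)

# ...which anyone can undo with a single standard-library call.
print(base64.b64decode(encoded).decode())

# XOR with a fixed single-byte key is just as easy to reverse:
key = 0x42
xored = bytes(b ^ key for b in source.encode())
recovered = bytes(b ^ key for b in xored).decode()
print(recovered)  # prints the original source, recovered instantly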
Another example of “security by obscurity” is having a wireless router suppress the SSID/Wi-Fi name (meaning that if you want to connect, you need to know the name), or deploying a web server without a domain name and hoping no one will find it. There are tools to get around both measures, but they do reduce the risk of people attacking your wireless router or web server.
The other side of this is “security by being open,” the argument that if you write open source software, there will be more eyes on it and therefore those eyes will find security vulnerabilities and report them. In practice this is rarely true: security researchers rarely review open source code and report bugs for free. When security flaws are reported to open source projects, the maintainers don’t necessarily fix them, and when vulnerabilities are found, the finder may decide to sell them on the black market (to criminals, to their own government, to foreign governments, etc.) instead of reporting them to the owner of the code repository in a secure manner.
Although security by obscurity is hardly an excellent defense when used on its own, it is certainly helpful as one layer of a “defense in depth” security strategy.
Attack Surface Reduction
Every part of your software can be attacked: each feature, input, page, or button. The smaller your application, the smaller the attack surface; an application with four pages and 10 functions has a much smaller attack surface than one with 20 pages and 100 functions. Every part of your app that could potentially be exposed to an attacker is considered attack surface.
Attack surface reduction means removing anything from your application that is not required. For instance, a feature that is not fully implemented, but for which you have the button grayed out, would be an ideal place for a malicious actor to start, because it is not fully tested or hardened yet. Instead, you should remove this code before publishing to production and wait until the feature is finished before shipping it. Even if it’s hidden, that’s not enough; reduce your attack surface by removing that part of your code, as in the sketch below.
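As a hypothetical sketch (the framework, route, and function names are assumptions, not taken from a real product), consider a small Python web application. Even though its button is grayed out in the UI, a half-finished endpoint is still live and reachable by anyone who guesses the URL, so the safest fix is to delete it until it is ready:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Welcome"

# BAD: this half-finished feature was "hidden" behind a grayed-out button,
# but the endpoint itself was still reachable by anyone who guessed the URL:
#
# @app.route("/beta/export")
# def export_data():
#     ...  # untested, unhardened code
#
# GOOD: the route has been removed entirely (shown commented out here only
# to illustrate what was deleted). Less reachable code means a smaller
# attack surface; reintroduce the route once the feature is finished,
# tested, and hardened.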
TIP Legacy software often has very large amounts of functionality that is not used. Removing features that are not in use is an excellent way to reduce your attack surface.
If you recall from earlier in the chapter, Alice and Bob both have medical implants: an insulin-measuring device for Alice and a pacemaker for Bob. Both of their devices are “smart,” meaning Alice and Bob can connect to them via their smartphones. Alice’s device works over Bluetooth and Bob’s works over Wi-Fi. One way for them to reduce the attack surface of their medical devices would have been to not get smart devices in the first place; however, it’s too late for that in this example. Instead, Alice could disable her insulin-measuring device’s Bluetooth “discoverable” setting, and Bob could hide the SSID of his pacemaker rather than broadcasting it.
Hard Coding
Hard coding means programming values into the code, rather than obtaining the values organically (from the user, a database, an API, etc.). For example, if you have created a calculator application and the user enters 4 + 4, presses Enter, and the screen shows 8, you will likely assume the calculator works. However, if you enter 5 + 5 and press Enter but the screen still shows 8, you may have a case of hard coding.
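A minimal sketch of that calculator bug in Python (hypothetical code, purely for illustration):

# Hypothetical illustration of the hard-coding bug described above.
def add_hardcoded(a, b):
    # BAD: the result is hard coded; the user's input is ignored entirely.
    return 8

def add(a, b):
    # GOOD: the result is computed organically from the input values.
    return a + b

print(add_hardcoded(4, 4))  # 8 -- looks correct, by coincidence
print(add_hardcoded(5, 5))  # 8 -- the hard-coded value reveals itself
print(add(5, 5))            # 10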
Why is hard coding a potential security issue? The reason is twofold: you cannot trust the output of the application, and the values that have been hard coded are often of a sensitive nature (passwords, API keys,