
Open Source Security in the Wake of the Log4j Vulnerability

Tim Mackey, principal security strategist, Synopsys Cybersecurity Research Center

While it might be tempting to view a major vulnerability as an indication of open source somehow being deficient, the reality is far from that. Open source software is not more or less secure than commercial software, and in reality, most commercial software either includes or runs on open source technologies.

Detecting vulnerabilities in open source isn’t the problem; the real issue is detecting software defects that represent a weakness someone could exploit. The distinction is important because a vulnerability might not be a flaw in the code at all, but a flaw in deployment configuration or a change in the underlying hardware.

I’d also like to note that open source and closed source software have equal potential for security issues, but with open source it’s possible for anyone to identify those issues. And since it’s possible for anyone to identify issues, the question really is one of how many people are actually attempting to identify issues in open source and how diligent those efforts are.

Part of the problem is a sentiment among consumers of open source projects that leads them to expect the project to behave like a commercial software vendor. If you look at the issues list of any reasonably popular open source project on GitHub, you’ll see feature requests and comments asking when certain problems might be resolved.

After all, the modern open source movement was founded on the principle that if you didn’t like the way the code was working, then you were free to modify it and address any perceived gaps in functionality that were important to you or your team. Feature requests in GitHub issues and complaints about serviceability have an implicit expectation that a product manager is on the receiving end of those requests and that they will be added to a roadmap and eventually be released – all for free.

And yet, open source functions through the altruism of contributors. In recent years we’ve heard core contributors for popular open source projects express frustration about the profits made by large businesses from the use of their software.

While it’s easy to relate to someone pouring energy into a project only to have a third party profit from the effort, the reality is that if a third party is profiting from the work of an open source development team, then it should be contributing to that team’s future success. If it doesn’t, it runs the risk not only that the code in question might change in ways it didn’t expect, but also that it might be delayed in applying fixes when security issues are identified and resolved.

At the end of the day, if a business isn’t taking the time to engage with teams creating the software that powers their business, then it’s likely they don’t know where all the software powering their business originates and can’t reliably patch it.

Interestingly, at the same time, open source software is so highly trusted that in some organizations it’s freely downloaded and used as is, without significant security review. That is to say, businesses rarely perform the same level of security review on software downloaded from the internet that they perform on software they create.

If you doubt this statement, ask yourself who in your organization reviewed Docker, or libxml, or even the open source database your applications depend on, for security issues. Even if such reviews did occur, I’m willing to bet the review was done once and isn’t repeated with each new update. That, too, is a reflection of the trust placed in those open source efforts.

Which brings us to current events. What we are seeing in the response from the Apache Log4j team is exactly what we’d expect to see: a team that takes the software it produces seriously and is responsive to the needs of its install base.

Considering that they are volunteers, such a response is indicative of the pride of ownership we often see within open source communities. In reality, an incident like Log4j is likely to improve open source development as a whole, much as Heartbleed improved the development practices of both open and closed source teams.

I’ve heard the question asked time and time again: “should there be a commercial replacement to protect from Log4j and the security implications of similar scenarios in the future?” It’s a common thought pattern, and one that misunderstands how software development really works.

For a commercial replacement of any component to exist, there must be an available market for it. In the case of Log4j, the component logs message data to a log file. There is nothing sexy about it, and there are many other ways of logging data than just Log4j. That means there really isn’t much of a commercial market for a replacement.

A more realistic conversation is around the identification and mitigation of vulnerabilities, which requires us to define some roles up front. Most people expect their software suppliers, that is to say the people who produce the software they depend upon, to test that software. The outcome of that testing is a set of findings highlighting weaknesses in the software the supplier produces.

In an ideal world, each of those weaknesses would be resolved before the software ships. In the real world, some will be fixed, some will be marked as “no plan to fix” and some will, optimistically, be fixed in a future release. What the list of weaknesses is, and which ones were fixed, isn’t something a supplier typically divulges. No one tool can find every weakness; some only work if you have the source code, while others require a running application.

Enter software composition analysis (commonly referred to as SCA). These tools look at either the source code for an application, or the executables and libraries that make up the application, and attempt to determine which open source libraries were used to create it. The listing of those libraries is known as a Software Bill of Materials (SBOM).
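As a rough illustration of the simplest thing an SCA tool does, the sketch below derives an SBOM from a flat, pip-style dependency manifest. Real SCA tools also fingerprint binaries and resolve transitive dependencies; the manifest format, function name, and sample data here are assumptions chosen purely for illustration.

```python
# Minimal sketch of the SCA idea: derive a bill of materials from a
# dependency manifest. This only parses a flat "name==version" manifest;
# real tools go much further (binary fingerprinting, transitive deps).

def sbom_from_manifest(manifest_text):
    """Parse 'name==version' lines into (name, version) SBOM entries."""
    entries = []
    for line in manifest_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, version = line.partition("==")
        entries.append((name.strip(), version.strip()))
    return entries

manifest = """\
# application dependencies (sample data)
requests==2.26.0
flask==2.0.2
"""
print(sbom_from_manifest(manifest))
# [('requests', '2.26.0'), ('flask', '2.0.2')]
```

The output list is the SBOM in its most stripped-down form: a record of which components, at which versions, make up the application.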

Assuming the SCA software does its job properly, a governance policy can then map National Vulnerability Database (NVD) data to the SBOM, and you know what to patch. Except that there is still the lag in NVD data to account for. Some of the more advanced SCA tools solve that problem by issuing advisories that proactively alert when an NVD entry is pending, with the details of that entry augmented by the SCA vendor’s own research. Some of the most advanced tools also invest in testing or validating which versions of the software are impacted by a vulnerability disclosure.
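To make the mapping concrete, here is a minimal sketch of checking an SBOM against advisory records. The advisory table is illustrative stand-in data rather than a real NVD feed, and the affected-version range is deliberately simplified to a single interval.

```python
# Illustrative sketch: match an SBOM against a small advisory table to
# decide what needs patching. ADVISORIES is simplified stand-in data,
# not a real NVD feed.

def parse_version(v):
    """Turn a dotted version string like '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Stand-in advisory records: (component, first affected, first fixed, CVE id)
ADVISORIES = [
    ("log4j-core", "2.0", "2.15.0", "CVE-2021-44228"),
]

def vulnerable_components(sbom):
    """Return (component, version, cve) for SBOM entries whose version
    falls inside an advisory's affected range."""
    findings = []
    for name, version in sbom:
        for comp, first_bad, first_fixed, cve in ADVISORIES:
            in_range = (parse_version(first_bad) <= parse_version(version)
                        < parse_version(first_fixed))
            if name == comp and in_range:
                findings.append((name, version, cve))
    return findings

sbom = [("log4j-core", "2.14.1"), ("commons-text", "1.9")]
print(vulnerable_components(sbom))  # flags log4j-core 2.14.1
```

Note the version tuples: comparing version strings lexically would get "2.9.1" versus "2.14.1" wrong, which is exactly the sort of detail SCA vendors invest in getting right across many packaging ecosystems.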

It’s at that point that you can define your run book for how to address the change in risk from a new security disclosure. After all, it’s rather hard to patch something you didn’t know you had, and you never know when a new piece of software was created with a vulnerable component.