
Artwork: Cecilio Ruiz

How InfoSec pros keep open source safe—and how you can help

Security pros talk about what keeps them up at night—and what they’re doing about it.

// February 25, 2021

The ReadME Project amplifies the voices of the open source community: the maintainers, developers, and teams whose contributions move the world forward every day.

Very little modern software is written by a single programmer or even a single company. Sure, there are plenty of solo developers out there. But individuals and teams don’t typically write entire applications from scratch. According to Gartner, over 95% of IT enterprises across the globe use open source software (OSS) for their mission-critical IT workloads, whether they are aware of it or not.

Using open source software saves a lot of time and effort. By building on existing components, developers can avoid reinventing the wheel, whether that wheel is an authentication system, a database connector, or a machine learning algorithm. But importing a third-party component can mean importing all of its bugs and security vulnerabilities—and all the bugs and security vulnerabilities of that component’s dependencies as well.
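To get a feel for how one dependency fans out into many, here is a minimal Python sketch (not from the article) that walks a package’s declared requirements using the standard library’s importlib.metadata. It assumes the packages are already installed locally, and uses "requests" purely as a familiar example.

```python
# A sketch of how a single dependency pulls in a larger surface. It walks
# the declared requirements of an installed package ("requests" is just an
# example) using the standard library's importlib.metadata.
import importlib.metadata
import re


def direct_requirements(package: str) -> set:
    """Return the names of a package's declared dependencies."""
    requires = importlib.metadata.requires(package) or []
    names = set()
    for spec in requires:
        # A requirement string looks like "urllib3 (>=1.21.1); extra == 'socks'".
        # We only want the distribution name at the front.
        name = re.split(r"[\s;(<>=!~\[]", spec, maxsplit=1)[0]
        if name:
            names.add(name)
    return names


def transitive_requirements(package: str, seen=None) -> set:
    """Recursively collect every package the given one pulls in."""
    seen = set() if seen is None else seen
    for dep in direct_requirements(package):
        if dep not in seen:
            seen.add(dep)
            try:
                transitive_requirements(dep, seen)
            except importlib.metadata.PackageNotFoundError:
                pass  # an optional extra that isn't installed locally
    return seen


if __name__ == "__main__":
    deps = transitive_requirements("requests")
    print(f"requests pulls in {len(deps)} other packages: {sorted(deps)}")
```

Every package that prints here is code you ship but didn’t write—and every one of them is a place where a vulnerability could ride along.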

About 59% of active repositories with supported package ecosystems received security warnings from Dependabot, which scans dependencies for known security vulnerabilities, according to The 2020 State of the Octoverse report.

The good news is that explicitly malicious code is relatively rare in open source. Of the 521 security advisories analyzed by GitHub for the Octoverse report, only 17% stemmed from deliberately malicious code, and those accounted for just 0.2% of security alerts. The rest were the result of seemingly honest mistakes. Information security professionals say the benefits of using open source code outweigh the drawbacks. “Open source is a double-edged sword,” says Assaf Dahan, senior director of threat research at the infosec company Cybereason. “But ultimately it’s better to be transparent.” And besides, proprietary supply chains can introduce vulnerabilities too. All code is written by people, and people make mistakes, regardless of whether they’re working on open source or proprietary code.

But even innocent mistakes can be costly if they’re not corrected in time. A single bug in a commonly used open source module could make thousands upon thousands of systems vulnerable to attackers.

In theory, open source should provide a security advantage. Because anyone can inspect open source code, anyone can discover—and fix—security issues. With open source code, you can even fix security issues yourself instead of waiting for a vendor. But, in practice, this process can be infuriatingly slow. According to the Octoverse report, though security issues tend to be patched within about four weeks of discovery, it often takes years to discover vulnerabilities in the first place.


“The number of vulnerabilities ‘in the wild’ outpaces the speed at which the security community can patch or even identify them,” says Jennifer Fernick, global head of research at NCC Group, and a governing board member of The Linux Foundation’s Open Source Security Foundation. “And each day, the world contains more lines of source code than it ever has before.” In other words, security, as it’s practiced today, doesn’t scale.

Fortunately, even though it’s impossible to eliminate security flaws entirely, there are steps that open source developers can take to make their code more secure—and rewarding careers to be had making software safer for everyone.

Challenging the Unknown

Most infosec pros agree that supply chain threats are among the scariest problems in software security today. Jonathan Leitschuh, a security software engineer at Gradle, worries that the problem could be even worse than we realize.

For example, he’s particularly concerned about the Java build and packaging ecosystem. Other language ecosystems have already been attacked by bad actors: in 2018, malware was discovered in a popular npm module called event-stream after a new maintainer took over the project. Java is used heavily both in enterprise computing and in Android applications, so malware in its packaging ecosystem could have far-reaching consequences. But Leitschuh says researchers have only ever found one such case in the Java packaging ecosystem. “That means either it’s not happening, which seems unlikely, or we just haven’t found it yet, which is pretty scary,” he says.

What just about everyone agrees on is that security needs to be a much bigger part of software development, at a much earlier stage of the development process. Fixing a security defect once it’s made it to production can be up to 60 times more expensive than fixing it during the development cycle, according to NIST (the National Institute of Standards and Technology), so it makes business sense to find issues as soon as possible.

But the leaders of too many organizations neglect security. “There's a ‘the less I know, the better’ mentality among too many executives,” says Charlotte Townsley, Director of Security Engineering at Natera. That’s both because security can be seen as a barrier to productivity and because leaders aren’t always aware of the threats they face. “They think they don’t have anything worth stealing,” she says.

“Just because” is often a good enough reason for a hacker to exploit a vulnerability. That means everyone needs to be on guard, even maintainers of fairly small open source projects. You never know how your work might be used in the future.

Risk can’t be eliminated entirely from any software system. But without understanding the security threats they face, organizations can’t make informed decisions about how much risk they’re willing to accept. Instead of treating security as a nuisance or an afterthought, open source developers need to think of it as a feature baked in from the very beginning of a project. Fernick recommends doing threat modeling before a single line of code is even written, and thinking about security when deciding what development stack to use for a new endeavor.

Meanwhile, established and new projects alike can take better advantage of automated tools for static analysis and fuzzing by integrating them into their CI/CD pipeline.
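To make that concrete, here is a minimal fuzz harness sketch in Python. It assumes Google’s Atheris fuzzer (pip install atheris) and uses the standard library’s json parser purely as a stand-in for your own parsing code; in a CI/CD pipeline you would typically run something like this for a fixed time budget on each change.

```python
# A minimal coverage-guided fuzz harness, sketched with the Atheris fuzzer.
# The target here is json.loads, standing in for whatever parsing code your
# project exposes to untrusted input.
import sys

import atheris

with atheris.instrument_imports():
    import json


def test_one_input(data: bytes):
    try:
        json.loads(data.decode("utf-8", errors="replace"))
    except json.JSONDecodeError:
        pass  # malformed input is expected; crashes and hangs are not


atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```

The fuzzer mutates inputs and uses code coverage to steer itself toward new paths, so it keeps surfacing the malformed inputs your tests never thought to include.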

“I think we have yet to see the true potential of techniques for finding vulnerabilities at scale,” Fernick says. “Large-scale fuzzing projects, vulnerability discovery query languages such as GitHub’s CodeQL, innovations in program analysis, applications of machine learning to identifying examples of particular bug classes, and recent research in automated exploit generation (AEG) have yet to, in my opinion, become fully realized, and are likely to shift the security landscape.”
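As a deliberately simplified analog of that idea (not CodeQL itself), the sketch below uses Python’s built-in ast module to flag one bug-prone pattern—calls that pass shell=True, a common command-injection foothold—across an entire source tree. Real query languages also track data flow, which this does not.

```python
# A toy "query" for one bug class: every call that passes shell=True.
# Point it at a directory of Python source and it prints file:line matches.
import ast
import pathlib
import sys


def shell_true_calls(source: str, filename: str):
    """Return (filename, line) for every call that passes shell=True."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    findings.append((filename, node.lineno))
    return findings


if __name__ == "__main__":
    root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for path in root.rglob("*.py"):
        try:
            results = shell_true_calls(path.read_text(encoding="utf-8"), str(path))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that aren't valid Python 3 or UTF-8
        for fname, lineno in results:
            print(f"{fname}:{lineno}: call passes shell=True")
```

The payoff of the query-language approach is exactly this: describe a bug pattern once, then run it over thousands of repositories instead of reviewing them by hand.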

Indeed, automation can make it much faster to find and fix vulnerabilities. According to the State of the Octoverse report, when Dependabot can automatically generate pull requests to patch a particular vulnerability, those issues are resolved an average of 13 days sooner than those without automatically generated fixes.


How and Why to Get Into Infosec

Automation is only one part of the puzzle. The open source community needs more infosec professionals. The good news is that it’s probably easier to get started in security research—the work of finding vulnerabilities in the wild before they can be exploited by bad actors—than you might think. Open source developers already have many of the skills they need to begin finding vulnerabilities. And open source is a great place to start your search once you’ve learned the basics.

“Understanding the basics of computer science helps, but I’ve met bug bounty hunters who are self-taught,” Leitschuh says.

Dahan agrees. “I think the key factor for a good researcher is not how well you code or how much malware you analyze,” he says. “It’s insatiable curiosity.”

Others recommend dipping your toe into the security pool by reading papers or watching talks from infosec conferences like USENIX Security, DEF CON, and Black Hat, listening to podcasts like Risky Business, and, of course, reading books. But there are so many resources out there that it can feel overwhelming. “There are many different paths into security research, and many different paths once you’ve arrived,” Fernick says. “‘Security’ is not one universal skillset—there are many subfields within security, and no one is an expert at them all.”

It’s best to explore the field and focus on the things that interest you the most. “Don’t chase something you’re not interested in,” Leitschuh says.

When you’re ready to move beyond theory, “capture the flag” competitions are a great way to learn. In these digital wargame events, each team defends a set of vulnerable services, patching them before rival teams can exploit them, while simultaneously attacking those teams’ services. You can find many events listed on the CTFtime site.

And don’t be afraid to jump in and start looking for vulnerabilities in open source code. Once you know what to look for, there’s nothing stopping you from finding them and reporting them to maintainers. “Think about how many vulnerabilities are out there and how few people are looking for them in open source code,” Leitschuh says.
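For a sense of what “knowing what to look for” means in practice, here is a toy, hypothetical example of a classic bug class—path traversal—alongside one way to patch it. Neither function comes from a real project.

```python
# A classic bug to hunt for: joining user input onto a base directory lets
# a request like "../../etc/passwd" escape the folder entirely.
import os

BASE_DIR = "/srv/app/uploads"  # hypothetical upload directory


def serve_file_vulnerable(user_path: str) -> bytes:
    # BUG: os.path.join happily follows ".." segments (and absolute paths).
    with open(os.path.join(BASE_DIR, user_path), "rb") as f:
        return f.read()


def serve_file_fixed(user_path: str) -> bytes:
    base = os.path.realpath(BASE_DIR)
    full = os.path.realpath(os.path.join(base, user_path))
    # Refuse anything that resolves outside the upload directory.
    if os.path.commonpath([base, full]) != base:
        raise PermissionError("path escapes the upload directory")
    with open(full, "rb") as f:
        return f.read()
```

Patterns like this one recur across languages and frameworks, which is why learning a handful of bug classes goes a long way when you start reviewing unfamiliar repositories.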

GitHub principal security researcher Bas Alberts likens security research to QA. “I refer to it as ‘creative debugging,’” he says. “You’re methodically finding and documenting bugs.”

That said, combing through open source repos for vulnerabilities isn’t a particularly lucrative pursuit. But it’s a good way to learn more about security research so that you can participate in more lucrative bug bounty programs from companies like Tesla or begin a career in infosec. And open source security can be rewarding in non-monetary ways. “Contributing to open source makes me feel like I’m part of something bigger than myself,” Leitschuh says. “And there’s a high in finding a vulnerability that no one else knows about.”

Indeed, most infosec pros we spoke with say they’re motivated by two things. One is making the world safer. “Users deserve to be able to trust the software they use,” Townsley says. “I feel like an advocate for users and that motivates me and makes this super-fascinating work.”

The other is an obsessive interest in puzzle solving. “The idea of turning undefined behavior into an unintended feature of the software has always fascinated me,” Alberts says. “Turning software A into software B, through the exploitation of a vulnerability, feels like magic. Like casting spells. Poof, now your webserver is a command shell. Abracadabra.”

About The ReadME Project

Coding is usually seen as a solitary activity, but it’s actually the world’s largest community effort led by open source maintainers, contributors, and teams. These unsung heroes put in long hours to build software, fix issues, field questions, and manage communities.

The ReadME Project is part of GitHub’s ongoing effort to amplify the voices of the developer community. It’s an evolving space to engage with the community and explore the stories, challenges, technology, and culture that surround the world of open source.
