
New free software signing service aims to strengthen open-source ecosystem

News Analysis
Mar 09, 2021 | 5 mins
Application Security, DevSecOps, Open Source

The Linux Foundation's sigstore code-signing software, developed with Google, Red Hat and Purdue University, will help prevent attacks on the software supply chain.

Credit: Tampatra / Getty Images

The Linux Foundation has launched a free service that software developers can use to digitally sign their releases and other software artifacts. The project aims to strengthen the security and auditability of the open-source software supply chain, which has faced an unprecedented number of attacks in recent years.

The code behind the new service, called sigstore, was developed in partnership with Google, Red Hat and Purdue University, and will be maintained by the community going forward. All signatures and signing events will be stored in a tamper-resistant public log that can be monitored to discover potential abuse.
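The "tamper-resistant" property of such a log can be illustrated with a minimal hash-chained, append-only log. This is a simplified stand-in for the Merkle-tree structure real transparency logs use, and all names and records here are illustrative:

```python
import hashlib
import json

class AppendOnlyLog:
    """Toy hash-chained log: each entry's head hash commits to the
    previous head, so altering any past entry changes every later head."""

    def __init__(self):
        self.entries = []          # stored records (serialized)
        self.heads = ["0" * 64]    # head hash after each append (genesis first)

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True)
        head = hashlib.sha256((self.heads[-1] + payload).encode()).hexdigest()
        self.entries.append(payload)
        self.heads.append(head)
        return head

    def verify(self) -> bool:
        """Recompute the whole chain and check it matches the recorded heads."""
        head = "0" * 64
        for payload, expected in zip(self.entries, self.heads[1:]):
            head = hashlib.sha256((head + payload).encode()).hexdigest()
            if head != expected:
                return False
        return True

log = AppendOnlyLog()
log.append({"signer": "dev@example.com", "artifact": "a1b2", "time": 1})
log.append({"signer": "dev@example.com", "artifact": "c3d4", "time": 2})
assert log.verify()

# Tampering with a past entry breaks verification of every later head:
log.entries[0] = log.entries[0].replace("a1b2", "evil")
assert not log.verify()
```

A monitor that periodically re-verifies the chain (or checks consistency proofs, in a real Merkle-tree log) can therefore detect if a log operator rewrites history.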

How does sigstore work?

Sigstore uses the OpenID authentication protocol to tie certificates to identities. This means a developer can use their email address or account with an existing OpenID identity provider to sign their software.

This differs from traditional code signing, which requires obtaining a certificate from a certificate authority (CA) trusted by the maintainers of a particular software ecosystem, for example Microsoft or Apple. Obtaining a traditional code-signing certificate involves special procedures such as identity verification or joining a developer program.

The sigstore signing client generates an ephemeral, short-lived key pair and contacts the sigstore PKI (public-key infrastructure) service, which will be run by the Linux Foundation. The PKI service checks for a successful OpenID Connect grant and issues a certificate for the key pair, which is then used to sign the software. The signing event is recorded in the public log, after which the keys can be discarded.
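The flow described above can be sketched as follows. This is a conceptual toy, not sigstore's actual client code: the OIDC flow is mocked, and real asymmetric signing is replaced with hash-based stand-ins so the sketch stays self-contained; every function and field name here is illustrative.

```python
import hashlib
import os
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

transparency_log = []  # stand-in for the public log (Rekor)

def oidc_grant(email: str) -> dict:
    # Stand-in for a real OpenID Connect flow, in which the identity
    # provider attests that the caller controls this identity.
    return {"email": email, "verified": True}

def sign_release(email: str, artifact: bytes) -> dict:
    # 1. Generate an ephemeral key pair (mocked with random bytes;
    #    real sigstore clients generate an asymmetric key pair).
    priv = os.urandom(32)
    pub = sha256(priv)  # toy "public key" derived from the private key

    # 2. Present the OIDC grant to the PKI service, which issues a
    #    short-lived certificate binding the identity to the public key.
    grant = oidc_grant(email)
    assert grant["verified"]
    cert = {"identity": grant["email"], "public_key": pub}

    # 3. Sign the artifact with the ephemeral private key (mocked here;
    #    a real signature would be verifiable with the public key alone).
    signature = sha256(priv + artifact)

    # 4. Record the signing event in the public log.
    entry = {"cert": cert, "artifact_digest": sha256(artifact),
             "signature": signature, "time": time.time()}
    transparency_log.append(entry)

    # 5. Discard the key: `priv` simply goes out of scope and is never stored.
    return entry

entry = sign_release("dev@example.com", b"release-v1.0.tar.gz contents")
```

The point of the sketch is the ordering: identity attestation, then certificate issuance, then signing, then logging, and only then key disposal, so the log entry outlives the key.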

This is another difference from traditional code signing: each signing event generates a new key pair and certificate. Ultimately, the goal is to have public proof that a particular identity signed a particular file at a particular time. It's then up to the community to build tools that use this information to create policies and enforcement mechanisms.

“It’s just based on normal X.509 certificate authorities, so people can add their own root CA, they can get rid of ours if they don’t want to trust it, they can add their own intermediaries, that kind of thing,” Dan Lorenc, a member of Google’s Open Source Security Team and project contributor, tells CSO.

Developers can use the public PKI service and transparency log or they can deploy and run their own internal signing system for their organization. The code for the logging service, dubbed Rekor, and for the root certificate authority, dubbed Fulcio, are open source and available on GitHub.

Why sign software releases?

Software code signing in general is used to provide guarantees about software provenance: evidence that a piece of code originated with a particular developer or organization the user trusts. Application whitelisting solutions, for example, use this information to enforce the user's policies about which software, and from which publishers, is allowed to run on a particular system.

Such policies can be extended to package managers as well. Most modern software is built from third-party open-source components, which often account for the majority of its code base. Because of this, there have been attacks against open-source package repositories such as npm, PyPI and RubyGems. One recently revealed attack technique, called dependency confusion, relies on tricking package managers into installing a rogue variant of a local package by publishing a package with the same name, but a higher version number, in a public repository.
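The core of dependency confusion fits in a few lines: a naive resolver that always prefers the highest version number, wherever it is published, will pick the attacker's public decoy over the legitimate internal package. The package name, versions and resolver below are all made up for illustration:

```python
def parse_version(v: str) -> tuple:
    """Turn '1.2.0' into (1, 2, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def resolve(name: str, internal_repo: dict, public_repo: dict) -> str:
    """Naive resolver: pick the highest version, wherever it lives."""
    candidates = []
    if name in internal_repo:
        candidates.append((parse_version(internal_repo[name]), "internal"))
    if name in public_repo:
        candidates.append((parse_version(public_repo[name]), "public"))
    return max(candidates)[1]

internal = {"acme-billing": "1.2.0"}   # private, legitimate package
public = {"acme-billing": "99.0.0"}    # attacker-published decoy

# The rogue public package wins purely on version number:
assert resolve("acme-billing", internal, public) == "public"
```

A signature-based policy closes this hole by pinning the expected signer identity for a package, so a higher version number from an unexpected signer is rejected regardless of where it was published.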

Security policies built around software digital signatures can help prevent attacks like that, as well as attacks where the download or update server used by a software developer is compromised and their legitimate packages are replaced with malicious ones, or man-in-the-middle attacks against software update mechanisms.

There are other attacks, such as the compromise of a developer's machine or software-building infrastructure and the injection of malicious code during earlier stages of development, as in the recent SolarWinds attack that impacted thousands of organizations. Code signing would not necessarily have prevented that attack, because signing a software release is one of the last steps before distribution and would occur after the code injection. However, a transparency log like the one that's part of sigstore could provide valuable information to incident investigators, or even lead to early detection of a compromise.

According to Luke Hinds, security engineering lead at Red Hat, the log can be used to build monitoring tools similar to data breach notification services, where users input their email address and are notified if it shows up in any indexed public breach. Developers could use such a tool to be notified every time their email address appears in the sigstore log. If that happens when they know they haven't been signing anything, it's an immediate red flag that their account or system may have been compromised and someone is signing software on their behalf.
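The monitoring idea Hinds describes reduces to a simple filter: scan log entries for your identity and flag any signing event you don't recognize. A minimal sketch, with a made-up entry format and hypothetical helper name:

```python
def alerts_for(email: str, log_entries: list, known_activity: set) -> list:
    """Return log entries that claim `email` as the signer but do not
    match any signing event the developer knows they performed."""
    suspicious = []
    for entry in log_entries:
        if entry["signer"] == email and entry["artifact"] not in known_activity:
            suspicious.append(entry)
    return suspicious

log_entries = [
    {"signer": "dev@example.com", "artifact": "pkg-1.0"},
    {"signer": "dev@example.com", "artifact": "pkg-6.6"},   # not theirs
    {"signer": "other@example.com", "artifact": "lib-2.1"},
]
known = {"pkg-1.0"}

hits = alerts_for("dev@example.com", log_entries, known)
assert [e["artifact"] for e in hits] == ["pkg-6.6"]  # the unrecognized signing
```

A real monitor would poll the log for new entries and push a notification instead of returning a list, but the detection logic is the same.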

“The whole thing with the transparency log is not that it has the ability to block the attacks, but it gives you insight into these attacks that you currently just do not have,” Hinds tells CSO. “It provides transparency around the software supply chain.”

Researchers from Purdue University are working on a monitor prototype that will use the log, but over time the project’s maintainers hope the open-source community and private companies from the security space will build tools around the sigstore service. For example, development organizations could deploy the system and create granular security controls.

“We’re not building a policy engine here per se, but we’re building the tools and primitives that you can use to build one of those policy engines,” Lorenc says. “You can have 12 developers and say seven of those 12 need to sign this artifact for it to be good. You can imagine all sorts of scenarios like that.”
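The k-of-n policy Lorenc describes could be enforced with a check along these lines, built on the signer identities recorded in the log. The function and data shapes are illustrative, not part of sigstore:

```python
def threshold_met(required: int, authorized: set, observed_signatures: list) -> bool:
    """Check a k-of-n policy: at least `required` distinct, authorized
    identities must have signed the artifact."""
    valid = {sig["identity"] for sig in observed_signatures
             if sig["identity"] in authorized}
    return len(valid) >= required

team = {f"dev{i}@example.com" for i in range(12)}          # the 12 developers
sigs = [{"identity": f"dev{i}@example.com"} for i in range(7)]

assert threshold_met(7, team, sigs)           # 7 of 12 have signed: policy passes
assert not threshold_met(7, team, sigs[:6])   # only 6 signatures: policy fails
```

Using a set of identities means duplicate signatures from one developer, or signatures from identities outside the team, don't count toward the threshold.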