This is a working draft. This document may be modified, replaced, or discarded at any time.

Version 1.0 is the current version. See the Version 1.0 documentation.

SLSA Specification

SLSA is a specification for describing and incrementally improving supply chain security, established by industry consensus. It is organized into a series of levels that describe increasing security guarantees.

This is the Working Draft of what the next version of the SLSA specification might be. It defines several SLSA levels and tracks, as well as recommended attestation formats, including provenance.

Understanding SLSA

These sections provide an overview of SLSA, how it helps protect against common supply chain attacks, and common use cases. If you’re new to SLSA or supply chain security, start here.

Section Description
What’s new The changes brought by this Working Draft.
About SLSA An introductory guide to SLSA
Supply chain threats An introduction to supply chain threats
Use cases Common use cases for SLSA
Guiding principles The principles behind SLSA’s design decisions
FAQ Questions and more information
Future directions Additions and changes being considered for future SLSA versions

Core specification

These sections describe SLSA’s security levels and requirements for each track. If you want to achieve a particular SLSA level, these are the requirements you’ll need to meet.

Section Description
Terminology Terminology and model used by SLSA
Security levels Overview of SLSA’s tracks and levels, intended for all audiences
Producing artifacts Detailed technical requirements for producing software artifacts, intended for platform implementers
Distributing provenance Detailed technical requirements for distributing provenance, intended for platform implementers and software distributors
Verifying artifacts Guidance for verifying software artifacts and their SLSA provenance, intended for platform implementers and software consumers
Verifying build platforms Guidelines for securing SLSA Build L3+ builders, intended for platform implementers
Integrity levels for attested build environments Overview of SLSA’s Attested Build Environment track, intended for all audiences
Threats & mitigations Detailed information about specific supply chain attacks and how SLSA helps
Securing Source Code Overview of the Source track

Attestation formats

These sections include the concrete schemas for SLSA attestations. The Provenance and VSA formats are recommended, but not required by the specification.

Section Description
General model General attestation model
Provenance Suggested provenance format and explanation
Verification Summary Suggested VSA format and explanation

How to SLSA

These instructions tell you how to apply the core SLSA specification to use SLSA in your specific situation.

Section Description
For developers How to apply SLSA requirements to your build
For organizations How to apply SLSA to an organization
For infrastructure providers How to implement SLSA in source, build, and package platforms

What's new

This document describes the major changes brought by this Working Draft relative to the prior release, v1.0.

Summary of changes

  • Clarify that the attestation format schemas are informative and that the specification texts (SLSA and in-toto attestation) are the canonical source of definitions.
  • Add procedure for verifying VSAs.
  • Add verifier metadata to VSA format.
  • Recommend setting the digest field of ResourceDescriptor in a Verification Summary Attestation’s (VSA) policy object.
  • Further refine the threat model.
  • Add draft of SLSA Source Track.

About SLSA

This section is an introduction to SLSA and its concepts. If you’re new to SLSA, start here!

What is SLSA?

Supply-chain Levels for Software Artifacts, or SLSA (“salsa”), is a set of incrementally adoptable guidelines for supply chain security, established by industry consensus. The specification set by SLSA is useful for both software producers and consumers: producers can follow SLSA’s guidelines to make their software supply chain more secure, and consumers can use SLSA to make decisions about whether to trust a software package.

SLSA offers:

  • A common vocabulary to talk about software supply chain security
  • A way to secure your incoming supply chain by evaluating the trustworthiness of the artifacts you consume
  • An actionable checklist to improve your own software’s security
  • A way to measure your efforts toward compliance with the Secure Software Development Framework (SSDF)

Why SLSA is needed

High-profile attacks like those against SolarWinds or Codecov have exposed the kind of supply chain integrity weaknesses that may go unnoticed, yet, when exploited, quickly become very public, disruptive, and costly in today’s environment. They’ve also shown that there are inherent risks not just in code itself, but at multiple points in the complex process of getting that code into software systems—that is, in the software supply chain. Since these attacks are on the rise and show no sign of decreasing, a universal framework for hardening the software supply chain is needed, as affirmed by the U.S. Executive Order on Improving the Nation’s Cybersecurity.

Security techniques for vulnerability detection and analysis of source code are essential, but are not enough on their own. Even after fuzzing or vulnerability scanning is completed, changes to code can happen, whether unintentionally or from insider threats or compromised accounts. Risk for code modification exists at each link in a typical software supply chain, from source to build through packaging and distribution. Any weaknesses in the supply chain undermine confidence in whether the code that you run is actually the code that you scanned.

SLSA is designed to support automation that tracks code handling from source to binary, protecting against tampering regardless of the complexity of the software supply chain. As a result, SLSA increases trust that the analysis and review performed on source code can be assumed to still apply to the binary consumed after the build and distribution process.

SLSA in layperson’s terms

There has been a lot of discussion about the need for “ingredient labels” for software—a “software bill of materials” (SBOM) that tells users what is in their software. Building off this analogy, SLSA can be thought of as all the food safety handling guidelines that make an ingredient list credible. From standards for clean factory environments so contaminants aren’t introduced in packaging plants, to the requirement for tamper-proof seals on lids that ensure nobody changes the contents of items sitting on grocery store shelves, the entire food safety framework ensures that consumers can trust that the ingredient list matches what’s actually in the package they buy.

Likewise, the SLSA framework provides this trust with guidelines and tamper-resistant evidence for securing each step of the software production process. That means you know not only that nothing unexpected was added to the software product, but also that the ingredient label itself wasn’t tampered with and accurately reflects the software contents. In this way, SLSA helps protect against the risk of:

  • Code modification (by adding a tamper-evident “seal” to code after source control)
  • Uploaded artifacts that were not built by the expected CI/CD platform (by marking artifacts with a factory “stamp” that shows which build platform created it)
  • Threats against the build platform (by providing “manufacturing facility” best practices for build platform services)

For more exploration of this analogy, see the blog post SLSA + SBOM: Accelerating SBOM success with the help of SLSA.

Who is SLSA for?

In short: everyone involved in producing and consuming software, or providing infrastructure for software.

Software producers, such as an open source project, a software vendor, or a team writing first-party code for use within the same company. SLSA gives you protection against tampering along the supply chain to your consumers, both reducing insider risk and increasing confidence that the software you produce reaches your consumers as you intended.

Software consumers, such as a development team using open source packages, a government agency using vendored software, or a CISO judging organizational risk. SLSA gives you a way to judge the security practices of the software you rely on and be sure that what you receive is what you expected.

Infrastructure providers, who provide infrastructure such as an ecosystem package manager, build platform, or CI/CD platform. As the bridge between the producers and consumers, your adoption of SLSA enables a secure software supply chain between them.

How SLSA works

We talk about SLSA in terms of tracks and levels. A SLSA track focuses on a particular aspect of a supply chain, such as the Build Track. SLSA v1.0 consists of only a single track (Build), but future versions of SLSA will add tracks that cover other parts of the software supply chain, such as how source code is managed.

Within each track, ascending levels indicate increasingly hardened security practices. Higher levels provide better guarantees against supply chain threats, but come at higher implementation costs. Lower SLSA levels are designed to be easier to adopt, but with only modest security guarantees. SLSA 0 is sometimes used to refer to software that doesn’t yet meet any SLSA level. Currently, the SLSA Build Track encompasses Levels 1 through 3, but we envision higher levels to be possible in future revisions.

The combination of tracks and levels offers an easy way to discuss whether software meets a specific set of requirements. By referring to an artifact as meeting SLSA Build Level 3, for example, you’re indicating in one phrase that the software artifact was built following a set of security practices that industry leaders agree protect against particular supply chain compromises.

What SLSA doesn’t cover

SLSA is only one part of a thorough approach to supply chain security. There are several areas outside SLSA’s current framework that are nevertheless important to consider together with SLSA, such as:

  • Code quality: SLSA does not tell you whether the developers writing the source code followed secure coding practices.
  • Producer trust: SLSA does not address organizations that intentionally produce malicious software, but it can reduce insider risks within an organization you trust. SLSA’s Build Track protects against tampering during or after the build, and future SLSA tracks intend to protect against unauthorized modifications of source code and dependencies.
  • Transitive trust for dependencies: the SLSA level of an artifact is independent of the level of its dependencies. You can use SLSA recursively to also judge an artifact’s dependencies on their own, but there is currently no single SLSA level that applies to both an artifact and its transitive dependencies together. For a more detailed explanation of why, see the FAQ.

Supply chain threats

Attacks can occur at every link in a typical software supply chain, and these kinds of attacks are increasingly public, disruptive, and costly in today’s environment.

This section is an introduction to possible attacks throughout the supply chain and how SLSA could help. For a more technical discussion, see Threats & mitigations.

Summary

Supply Chain Threats

Note that SLSA does not currently address all of the threats presented here. See Threats & mitigations for what is currently addressed and Terminology for an explanation of the supply chain model.

SLSA’s primary focus is supply chain integrity, with a secondary focus on availability. Integrity means protection against tampering or unauthorized modification at any stage of the software lifecycle. Within SLSA, we divide integrity into source integrity vs build integrity.

Source integrity: Ensure that source revisions contain only changes submitted by authorized contributors according to the process defined by the software producer and that source revisions are not modified as they pass between development stages.

Build integrity: Ensure that the package is built from the correct, unmodified sources and dependencies according to the build recipe defined by the software producer, and that artifacts are not modified as they pass between development stages.

Availability: Ensure that the package can continue to be built and maintained in the future, and that all code and change history is available for investigations and incident response.

Real-world examples

Many recent high-profile attacks were consequences of supply chain integrity vulnerabilities, and could have been prevented by SLSA’s framework. For example:

Threats from Known example How SLSA could help
A Producer SpySheriff: Software producer purports to offer anti-spyware software, but that software is actually malicious. SLSA does not directly address this threat but could make it easier to discover malicious behavior in open source software, by forcing it into the publicly available source code. For closed-source software, SLSA does not provide any solutions for malicious producers.
B Authoring & reviewing SushiSwap: Contractor with repository access pushed a malicious commit redirecting cryptocurrency to themselves. Two-person review could have caught the unauthorized change.
C Source code management PHP: Attacker compromised PHP's self-hosted git server and injected two malicious commits. A better-protected source code system would have been a much harder target for the attackers.
D External build parameters The Great Suspender: Attacker published software that was not built from the purported sources. A SLSA-compliant build server would have produced provenance identifying the actual sources used, allowing consumers to detect such tampering.
E Build process SolarWinds: Attacker compromised the build platform and installed an implant that injected malicious behavior during each build. Higher SLSA levels require stronger security controls for the build platform, making it more difficult to compromise and gain persistence.
F Artifact publication CodeCov: Attacker used leaked credentials to upload a malicious artifact to a GCS bucket, from which users download directly. Provenance of the artifact in the GCS bucket would have shown that the artifact was not built in the expected manner from the expected source repo.
G Distribution channel Attacks on Package Mirrors: Researcher ran mirrors for several popular package registries, which could have been used to serve malicious packages. Similar to above (F), provenance of the malicious artifacts would have shown that they were not built as expected or from the expected source repo.
H Package selection Browserify typosquatting: Attacker uploaded a malicious package with a similar name as the original. SLSA does not directly address this threat, but provenance linking back to source control can enable and enhance other solutions.
I Usage Default credentials: Attacker could leverage default credentials to access sensitive data. SLSA does not address this threat.
N/A Dependency threats (i.e. A-H, recursively) event-stream: Attacker added an innocuous dependency and then later updated the dependency to add malicious behavior. The update did not match the code submitted to GitHub (i.e. attack F). Applying SLSA recursively to all dependencies would prevent this particular vector, because the provenance would indicate that it either wasn't built from a proper builder or that the source did not come from GitHub.
Availability threat Known example How SLSA could help
N/A Dependency becomes unavailable Mimemagic: Producer intentionally removes package or version of package from repository with no warning. Network errors or service outages may also make packages unavailable temporarily. SLSA does not directly address this threat.

A SLSA level helps give consumers confidence that software has not been tampered with and can be securely traced back to source—something that is difficult, if not impossible, to do with most software today.

Use cases

SLSA protects against tampering during the software supply chain, but how? The answer depends on the use case in which SLSA is applied. The three main use cases for SLSA are described below.

Applications of SLSA

First party

In its simplest form, SLSA can be used entirely within an organization to reduce risk from internal sources. This is the easiest case in which to apply SLSA because there is no need to transfer trust across organizational boundaries.

Example ways an organization might use SLSA internally:

  • A small company or team uses SLSA to ensure that the code being deployed to production in binary form is the same one that was tested and reviewed in source form.
  • A large company uses SLSA to require two-person review for every production change, scalably across hundreds or thousands of employees/teams.
  • An open source project uses SLSA to ensure that compromised credentials cannot be abused to release an unofficial package to a package registry.

Case study: Google (Binary Authorization for Borg)

Open source

SLSA can also be used to reduce risk for consumers of open source software. The focus here is to map built packages back to their canonical sources and dependencies. In this way, consumers need only trust a small number of secure build platforms rather than the many thousands of developers with upload permissions across various packages.

Example ways an open source ecosystem might use SLSA to protect users:

  • At upload time, the package registry rejects the package if it was not built from the canonical source repository.
  • At download time, the packaging client rejects the package if it was not built by a trusted builder.

Case study: SUSE

Vendors

Finally, SLSA can be used to reduce risk for consumers of vendor-provided software and services. Unlike open source, there is no canonical source repository to map to, so instead the focus is on trustworthiness of claims made by the vendor.

Example ways a consumer might use SLSA for vendor-provided software:

  • Prefer vendors who make SLSA claims and back them up with credible evidence.
  • Require a vendor to implement SLSA as part of a contract.
  • Require a vendor to be SLSA certified from a trusted third-party auditor.

Motivating example

For a look at how SLSA might be applied to open source in the future, see the hypothetical curl example.

Guiding principles

This section is an introduction to the guiding principles behind SLSA’s design decisions.

Simple levels with clear outcomes

Use levels to communicate security state and to encourage a large population to improve its security stance over time. When necessary, split levels into separate tracks to recognize progress in unrelated security areas.

Reasoning: Levels simplify how to think about security by boiling a complex topic into an easy-to-understand number. It is clear that level N is better than level N-1, even to someone with passing familiarity. This provides a convenient way to describe current security state as well as a natural path to improvement.

Guidelines:

  • Define levels in terms of concrete security outcomes. Each level should have clear and meaningful security value, such as stopping a particular class of threats. Levels should represent security milestones, not just incremental progress. Give each level an easy-to-remember mnemonic, such as “Provenance exists”.

  • Balance level granularity. Too many levels makes SLSA hard to understand and remember; too few makes each level hard to achieve. Collapse levels until each step requires a non-trivial but manageable amount of work to implement. Separate levels if they require significant work from multiple distinct parties, such as infrastructure work plus user behavior changes, so long as the intermediate level still has some security value (prior bullet).

  • Use tracks sparingly. Additional tracks add extra complexity to SLSA, so a new track should be seen as a last resort. Each track should have a clear, distinct purpose with a crisply defined objective, such as trustworthy provenance for the Build track. As a rule of thumb, a new track may be warranted if it addresses threats unrelated to another track. Try to avoid tracks that sound confusingly similar in either name or objective.

Trust platforms, verify artifacts

Establish trust in a small number of platforms and systems—such as change management, build, and packaging platforms—and then automatically verify the many artifacts produced by those platforms.

Reasoning: Trusted computing bases are unavoidable—there’s no choice but to trust some platforms. Hardening and verifying platforms is difficult and expensive manual work, and each trusted platform expands the attack surface of the supply chain. Verifying that an artifact is produced by a trusted platform, though, is easy to automate.

To simultaneously scale and reduce attack surfaces, it is most efficient to trust a limited number of platforms and then automate verification of the artifacts produced by those platforms. The attack surface and the work to establish trust do not scale with the number of artifacts produced, as happens when artifacts each use a different trusted platform.

Benefits: Allows SLSA to scale to entire ecosystems or organizations with a near-constant amount of central work.

Example

A security engineer analyzes the architecture and implementation of a build platform to ensure that it meets the SLSA Build Track requirements. Following the analysis, the public keys used by the build platform to sign provenance are “trusted” up to the given SLSA level. Downstream platforms verify the provenance signed by the public key to automatically determine that an artifact meets the SLSA level.
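
As a rough sketch of how this key-to-level trust might be recorded, the snippet below maps a certified platform’s provenance signing key to the SLSA Build level that artifacts verified against it inherit. The key fingerprints, level names, and helper are hypothetical, and the cryptographic signature check is assumed to have already succeeded elsewhere.

```python
# Illustrative sketch only: key fingerprints and level names are hypothetical,
# and the signature verification itself happens before this lookup.

TRUSTED_BUILDERS = {
    # fingerprint of a platform's provenance signing key -> certified SLSA level
    "sha256:platform-a-key": "SLSA_BUILD_LEVEL_3",
    "sha256:platform-b-key": "SLSA_BUILD_LEVEL_2",
}

def build_level_for(signer_key_fingerprint: str) -> str:
    """Map the verified signer of an artifact's provenance to a trusted level."""
    return TRUSTED_BUILDERS.get(signer_key_fingerprint, "SLSA_BUILD_LEVEL_0")

print(build_level_for("sha256:platform-a-key"))  # SLSA_BUILD_LEVEL_3
print(build_level_for("sha256:unknown-key"))     # SLSA_BUILD_LEVEL_0
```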

Corollary: Minimize the number of trusted platforms

A corollary to this principle is to minimize the size of the trusted computing base. Every platform we trust adds attack surface and increases the need for manual security analysis. Where possible:

  • Concentrate trust in shared infrastructure. For example, instead of each team within an organization maintaining their own build platform, use a shared build platform. Hardening work can be shared across all teams.
  • Remove the need to trust components. For example, use end-to-end signing to avoid the need to trust intermediate distribution platforms.

Trust code, not individuals

Securely trace all software back to source code rather than trust individuals who have write access to package registries.

Reasoning: Code is static and analyzable. People, on the other hand, are prone to mistakes, credential compromise, and sometimes malicious action.

Benefits: Removes the possibility for a trusted individual—or an attacker abusing compromised credentials—to tamper with source code after it has been committed.

Prefer attestations over inferences

Require explicit attestations about an artifact’s provenance; do not infer security properties from a platform’s configurations.

Reasoning: Theoretically, access control can be configured so that the only path from source to release is through the official channels: the CI/CD platform pulls only from the proper source, package registry allows access only to the CI/CD platform, and so on. We might infer that we can trust artifacts produced by these platforms based on the platform’s configuration.

In practice, though, these configurations are almost impossible to get right and keep right. Over-provisioning, confused deputy problems, and plain mistakes are common. Even if a platform is configured properly at one moment, it might not stay that way, and humans almost always end up in the access control lists.

Access control is still important, but SLSA goes further to provide defense in depth: it requires proof in the form of attestations that the package was built correctly.

Benefits: The attestation removes intermediate platforms from the trust base and ensures that individuals who are accidentally granted access do not have sufficient permission to tamper with the package.

Frequently asked questions

Q: Why is SLSA not transitive?

SLSA Build levels only cover the trustworthiness of a single build, with no requirements about the build levels of transitive dependencies. The reason for this is to make the problem tractable. If a SLSA Build level required dependencies to be the same level, then reaching a level would require starting at the very beginning of the supply chain and working forward. This is backwards, forcing us to work on the least risky component first and blocking any progress further downstream. Making each artifact’s SLSA rating independent of the others allows parallel progress and prioritization based on risk. (This is a lesson we learned when deploying other security controls at scale throughout Google.) We expect SLSA ratings to be composed to describe a supply chain’s overall security stance, as described in the case study vision.

Q: What about reproducible builds?

When talking about reproducible builds, there are two related but distinct concepts: “reproducible” and “verified reproducible.”

“Reproducible” means that repeating the build with the same inputs results in bit-for-bit identical output. This property provides many benefits, including easier debugging, more confident cherry-pick releases, better build caching and storage efficiency, and accurate dependency tracking.

“Verified reproducible” means using two or more independent build platforms to corroborate the provenance of a build. In this way, one can create an overall platform that is more trustworthy than any of the individual components. This is often suggested as a solution to supply chain integrity. Indeed, this is one option to secure build steps of a supply chain. When designed correctly, such a platform can satisfy all of the SLSA Build level requirements.
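
A minimal sketch of the corroboration idea, assuming hypothetical rebuilder names and in-memory artifact bytes; a real deployment would compare attested digests from independently operated platforms.

```python
# Minimal sketch, not a real rebuilder service: a build is corroborated only if
# two or more independent rebuilders produce bit-for-bit identical output.
import hashlib

def corroborated(rebuild_outputs: dict) -> bool:
    """rebuild_outputs maps a rebuilder name to the artifact bytes it produced."""
    digests = {hashlib.sha256(blob).hexdigest() for blob in rebuild_outputs.values()}
    return len(rebuild_outputs) >= 2 and len(digests) == 1

print(corroborated({"rebuilder-a": b"artifact", "rebuilder-b": b"artifact"}))  # True
print(corroborated({"rebuilder-a": b"artifact", "rebuilder-b": b"tampered"}))  # False
```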

That said, verified reproducible builds are not a complete solution to supply chain integrity, nor are they practical in all cases:

  • Reproducible builds do not address source, dependency, or distribution threats.
  • Reproducers must truly be independent, lest they all be susceptible to the same attack. For example, if all rebuilders run the same pipeline software, and that software has a vulnerability that can be triggered by sending a build request, then an attacker can compromise all rebuilders, violating the assumption above.
  • Some builds cannot easily be made reproducible, as noted above.
  • Closed-source reproducible builds require the code owner to either grant source access to multiple independent rebuilders, which is unacceptable in many cases, or develop multiple, independent in-house rebuilders, which is likely prohibitively expensive.

Therefore, SLSA does not require verified reproducible builds directly. Instead, verified reproducible builds are one option for implementing the requirements.

For more on reproducibility, see Hermetic, Reproducible, or Verifiable?

Q: How does SLSA relate to in-toto?

in-toto is a framework to secure software supply chains hosted at the Cloud Native Computing Foundation. The in-toto specification provides a generalized workflow to secure different steps in a software supply chain. The SLSA specification recommends in-toto attestations as the vehicle to express Provenance and other attributes of software supply chains. Thus, in-toto can be thought of as the unopinionated layer to express information pertaining to a software supply chain, and SLSA as the opinionated layer specifying exactly what information must be captured in in-toto metadata to achieve the guarantees of a particular level.
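
As a rough illustration of that layering, the abbreviated structure below shows an in-toto Statement (the unopinionated envelope) carrying a SLSA Provenance predicate (the opinionated content). All names, digests, and URIs are placeholders; see the Provenance page for the authoritative schema.

```python
# Abbreviated and illustrative only: an in-toto Statement wrapping a SLSA
# Provenance predicate. All values are placeholders.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {"name": "pkg:example/app@1.2.3", "digest": {"sha256": "aaaa..."}},
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            "buildType": "https://example.com/build-types/ci@v1",
            "externalParameters": {
                "repository": "https://github.com/example/app",
                "ref": "refs/tags/v1.2.3",
            },
            "resolvedDependencies": [
                {
                    "uri": "git+https://github.com/example/app@refs/tags/v1.2.3",
                    "digest": {"gitCommit": "bbbb..."},
                },
            ],
        },
        "runDetails": {"builder": {"id": "https://example.com/hosted-builder"}},
    },
}
```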

in-toto’s official implementations written in Go, Java, and Rust include support for generating SLSA Provenance metadata. These APIs are used in other tools generating SLSA Provenance such as Sigstore’s cosign, the SLSA GitHub Generator, and the in-toto Jenkins plugin.

Q: What is the difference between a build platform, system, and service?

Build platform and build system have been used interchangeably in the past. With the v1.0 specification, however, the terminology has been unified around the term platform, as indicated in the Terminology. The word system is still used for software and services within the build platform, and for systems outside of a build platform, like change management systems.

A build service is a hosted build platform that is often run on shared infrastructure rather than on individuals’ machines and workstations. Outside of the requirements, this term has likewise been replaced by build platform.

Q: Is SLSA the same as TACOS?

No. Trusted Attestation and Compliance for Open Source (TACOS) is a framework authored by Tidelift. Per their website, TACOS is a framework “for assessing the development practices of open source projects against a set of secure development standards specified by the (US) NIST Secure Software Development Framework (SSDF) V1.1” which “vendors can use to provide self-attestation for the open source components they rely on.”

In contrast, SLSA is a community-developed framework—including adoptable guidelines for securing a software supply chain and mechanisms to evaluate the trustworthiness of the artifacts you consume—that is part of the Open Source Security Foundation (OpenSSF).

Q: How do SLSA and SLSA Provenance relate to SBOMs?

A Software Bill of Materials (SBOM) is a frequently recommended tool for increased software supply chain rigor. An SBOM is typically focused on understanding software in order to evaluate risk through known vulnerabilities and license compliance. These use-cases require fine-grained and timely data which can be refined to improve signal-to-noise ratio.

SLSA Provenance and the Build track are focused on trustworthiness of the build process. To improve trustworthiness, Provenance is generated in the build platform’s trusted control plane, which in practice results in it being coarse-grained. For example, in Provenance metadata, the completeness of resolvedDependencies information is on a best-effort basis. Further, the ResourceDescriptor type does not require version and license information or even a URI to the dependency’s original location.
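
To illustrate the difference in granularity, compare a typical SBOM-style component record with a provenance resolvedDependencies entry, which may carry only a URI and digest on a best-effort basis. All values below are invented.

```python
# Invented values, for illustration only.
sbom_component = {
    "name": "libexample",
    "version": "2.4.1",
    "licenses": ["Apache-2.0"],
    "purl": "pkg:generic/libexample@2.4.1",
}
provenance_resolved_dependency = {
    "uri": "https://example.com/downloads/libexample-2.4.1.tar.gz",
    "digest": {"sha256": "dddd..."},
    # no version or license fields are required here
}
```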

While they likely include similar data, SBOMs and SLSA Provenance operate at different levels of abstraction. The fine-grained data in an SBOM typically describes the components present in a produced artifact, whereas SLSA Provenance more coarsely describes parameters of a build which are external to the build platform.

The granularity and expressiveness of the two use-cases differs enough that current SBOM formats were deemed not a good fit for the requirements of the Build track. Yet SBOMs are a good practice and may form part of a future SLSA Vulnerabilities track. Further, SLSA Provenance can increase the trustworthiness of an SBOM by describing how the SBOM was created.

SLSA Provenance, the wider in-toto Attestation Framework in which the recommended format sits, and the various SBOM standards, are all rapidly evolving spaces. There is ongoing investigation into linking between the different formats and exploration of alignment on common models. This FAQ entry describes our understanding of the intersection efforts today. We do not know how things will evolve over the coming months and years, but we look forward to the collaboration and improved software supply chain security.

Q: How to SLSA with a self-hosted runner

Some CI systems allow producers to provide their own self-hosted runners as a build environment (e.g. GitHub Actions). While there are many valid reasons to leverage these, classifying the SLSA build level for the resulting artifact can be confusing.

Since the SLSA Build track describes increasing levels of trustworthiness and completeness in a package artifact’s provenance, interpretation of the specification hinges on the platform entities involved in the provenance generation. The SLSA build level requirements (secure key storage, isolation, etc.) should be imposed on the transitive closure of the systems responsible for informing the generated provenance.

Some common situations may include:

  • The platform generates the provenance and just calls a runner for individual items. In this situation, the provenance is only affected by the platform so there would be no requirements imposed on the runner.
  • The runner generates the provenance. In this situation, the orchestrating platform is irrelevant and all requirements are imposed on the runner.
  • The platform provides the runner with some credentials for generating the provenance or both the platform and the runner provide information for the provenance. Trust is shared between the platform and the runner so the requirements are imposed on both.

Additional requirements on the self-hosted runners may be added to Build levels greater than L3 when such levels get defined.

Future directions

The initial draft version (v0.1) of SLSA had a larger scope, including protections against tampering with source code and a higher level of build integrity (Build L4). This section collects some early thoughts on how SLSA might evolve in future versions to re-introduce those notions and add other aspects of automatable supply chain security.

Build track

Build L4

A Build L4 could include further hardening of the build platform and enable corroboration of the provenance, for example by providing complete knowledge of the build inputs.

The initial draft version (v0.1) of SLSA defined a “SLSA 4” that included the following requirements, which may or may not be part of a future Build L4:

  • Pinned dependencies, which guarantee that each build runs on exactly the same set of inputs.
  • Hermetic builds, which guarantee that no extraneous dependencies are used.
  • All dependencies listed in the provenance, which enables downstream verifiers to recursively apply SLSA to dependencies.
  • Reproducible builds, which enable other build platforms to corroborate the provenance.

Source track

A Source track could provide protection against tampering with the source code prior to the build.

The initial draft version (v0.1) of SLSA included the following source requirements, which may or may not form the basis for a future Source track:

  • Strong authentication of author and reviewer identities, such as 2-factor authentication using a hardware security key, to resist account and credential compromise.
  • Retention of the source code to allow for after-the-fact inspection and future rebuilds.
  • Mandatory two-person review of all changes to the source to prevent a single compromised actor or account from introducing malicious changes.

Build Platform Operations track

A Build Platform Operations track could provide assurances around the hardening of build platforms as they are operated.

The initial draft version (v0.1) of SLSA included a subsection on common requirements that formed the foundation of the guidance for verifying build systems, which may or may not form the basis for a future Build Platform Operations track:

  • Controls for approval, logging, and auditing of all physical and remote access to platform infrastructure, cryptographic secrets, and privileged debugging interfaces.
  • Conformance to security best practices to minimize the risk of compromise.
  • Protection of cryptographic secrets used by the build platform.

Terminology

Before diving into the SLSA Levels, we need to establish a core set of terminology and models to describe what we’re protecting.

Software supply chain

TODO: Update the text to match the new diagram.

SLSA’s framework addresses every step of the software supply chain - the sequence of steps resulting in the creation of an artifact. We represent a supply chain as a directed acyclic graph of sources, builds, dependencies, and packages. One artifact’s supply chain is a combination of its dependencies’ supply chains plus its own sources and builds.

Software Supply Chain Model

Term Description Example
Artifact An immutable blob of data; primarily refers to software, but SLSA can be used for any artifact. A file, a git commit, a directory of files (serialized in some way), a container image, a firmware image.
Attestation An authenticated statement (metadata) about a software artifact or collection of software artifacts. A signed SLSA Provenance file.
Source Artifact that was directly authored or reviewed by persons, without modification. It is the beginning of the supply chain; we do not trace the provenance back any further. Git commit (source) hosted on GitHub (platform).
Build Process that transforms a set of input artifacts into a set of output artifacts. The inputs may be sources, dependencies, or ephemeral build outputs. .travis.yml (process) run by Travis CI (platform).
Package Artifact that is “published” for use by others. In the model, it is always the output of a build process, though that build process can be a no-op. Docker image (package) distributed on DockerHub (platform). A ZIP file containing source code is a package, not a source, because it is built from some other source, such as a git commit.
Dependency Artifact that is an input to a build process but that is not a source. In the model, it is always a package. Alpine package (package) distributed on Alpine Linux (platform).

Roles

Throughout the specification, you will see reference to the following roles that take part in the software supply chain. Note that in practice a role may be filled by more than one person or an organization. Similarly, a person or organization may act as more than one role in a particular software supply chain.

Role Description Examples
Producer A party who creates software and provides it to others. Producers are often also consumers. An open source project’s maintainers. A software vendor.
Verifier A party who inspects an artifact’s provenance to determine the artifact’s authenticity. A business’s software ingestion system. A programming language ecosystem’s package registry.
Consumer A party who uses software provided by a producer. The consumer may verify provenance for software they consume or delegate that responsibility to a separate verifier. A developer who uses open source software distributions. A business that uses a point of sale system.
Infrastructure provider A party who provides software or services to other roles. A package registry’s maintainers. A build platform’s maintainers.

Build model

Build Model

We model a build as running on a multi-tenant build platform, where each execution is independent.

  1. A tenant invokes the build by specifying external parameters through an interface, either directly or via some trigger. Usually, at least one of these external parameters is a reference to a dependency. (External parameters are literal values while dependencies are artifacts.)
  2. The build platform’s control plane interprets these external parameters, fetches an initial set of dependencies, initializes a build environment, and then starts the execution within that environment.
  3. The build then performs arbitrary steps, which might include fetching additional dependencies, and then produces one or more output artifacts. The steps within the build environment are under the tenant’s control. The build platform isolates build environments from one another to some degree (which is measured by the SLSA Build Level).
  4. Finally, for SLSA Build L2+, the control plane outputs provenance describing this whole process.

Notably, there is no formal notion of “source” in the build model, just external parameters and dependencies. Most build platforms have an explicit “source” artifact to build from, which is often a git repository; in the build model, the reference to this artifact is an external parameter while the artifact itself is a dependency.
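
A small sketch of that distinction, using placeholder values: the reference the tenant supplies appears among the external parameters, while the artifact the control plane actually fetched appears as a dependency.

```python
# Placeholder values, illustrative only.
external_parameters = {
    # literal values supplied by the tenant when invoking the build
    "repository": "https://github.com/example/app",
    "ref": "refs/heads/main",
}
dependencies = [
    {
        # the artifact the control plane fetched for that reference
        "uri": "git+https://github.com/example/app@refs/heads/main",
        "digest": {"gitCommit": "cccc..."},
    },
]
```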

For examples of how this model applies to real-world build platforms, see index of build types.

Primary Term Description
Platform System that allows tenants to run builds. Technically, it is the transitive closure of software and services that must be trusted to faithfully execute the build. It includes software, hardware, people, and organizations.
Admin A privileged user with administrative access to the platform, potentially allowing them to tamper with builds or the control plane.
Tenant An untrusted user that builds an artifact on the platform. The tenant defines the build steps and external parameters.
Control plane Build platform component that orchestrates each independent build execution and produces provenance. The control plane is managed by an admin and trusted to be outside the tenant’s control.
Build Process that converts input sources and dependencies into output artifacts, defined by the tenant and executed within a single build environment on a platform.
Steps The set of actions that comprise a build, defined by the tenant.
Build environment The independent execution context in which the build runs, initialized by the control plane. In the case of a distributed build, this is the collection of all such machines/containers/VMs that run steps.
Build caches An intermediate artifact storage managed by the platform that maps intermediate artifacts to their explicit inputs. A build may share build caches with any subsequent build running on the platform.
External parameters The set of top-level, independent inputs to the build, specified by a tenant and used by the control plane to initialize the build.
Dependencies Artifacts fetched during initialization or execution of the build process, such as configuration files, source artifacts, or build tools.
Outputs Collection of artifacts produced by the build.
Provenance Attestation (metadata) describing how the outputs were produced, including identification of the platform and external parameters.
Ambiguous terms to avoid
  • Build recipe: Could mean external parameters, but may include concrete steps of how to perform a build. To avoid implementation details, we don’t define this term, but always use “external parameters” which is the interface to a build platform. Similar terms are build configuration source and build definition.
  • Builder: Usually means build platform, but might be used for build environment, the user who invoked the build, or a build tool from dependencies. To avoid confusion, we always use “build platform”. The only exception is in the provenance, where builder is used as a more concise field name.

Build environment model

TODO: Add figure

The Build Environment (BuildEnv) track expands upon the build model by explicitly separating the build image and compute platform from the abstract build environment and build platform.

A typical build environment will go through the following lifecycle:

  1. Build image creation: A build image producer creates different build images through a dedicated build process. For the SLSA BuildEnv track, the build image producer outputs provenance describing this process.
  2. Build environment instantiation: The hosted build platform calls into the host interface to create a new instance of a build environment from a given build image. The build agent begins to wait for an incoming build dispatch. For the SLSA BuildEnv track, the host interface in the compute platform attests to the integrity of the environment’s initial state during its boot process.
  3. Build dispatch: When the tenant dispatches a new build, the hosted build platform assigns the build to a created build environment. For the SLSA BuildEnv track, the build platform attests to the binding between a build environment and build ID.
  4. Build execution: Finally, the build executor running within the environment executes the tenant’s build definition.

The BuildEnv track uses the following roles, components, and concepts:

Primary Term Description
Build ID An immutable identifier assigned uniquely to a specific execution of a tenant’s build. In practice, the build ID may be an identifier, such as a UUID, associated with the build execution.
Build image The template for a build environment, such as a VM or container image. Individual components of a build image include the root filesystem, pre-installed guest OS and packages, the build executor, and the build agent.
Build image producer The party that creates and distributes build images. In practice, the build image producer may be the hosted build platform or a third party in a bring-your-own (BYO) build image setting.
Build executor A platform-provided program dedicated to executing the tenant’s build definition, i.e., running the build, within the build environment. The build executor must be included in the build image’s measurement.
Build agent A program that interacts with the hosted build platform’s control plane from within a running build environment. The build agent must be included in the build image’s measurement.
Build dispatch The process of assigning a tenant’s build to a pre-deployed build environment on a hosted build platform.
Compute platform The compute system and infrastructure underlying a build platform, i.e., the host system (hypervisor and/or OS) and hardware. In practice, the compute platform and the build platform may be managed by the same or distinct organizations.
Host interface The component in the compute platform that the hosted build platform uses to request resources for deploying new build environments, i.e., the VMM/hypervisor or container orchestrator.
Boot process In the context of builds, the process of loading and executing the layers of firmware and/or software needed to start up a build environment on the host compute platform.
Measurement The cryptographic hash of some component or system state in the build environment, including software binaries, configuration, or initialized run-time data.
Quote (Virtual) hardware-signed data that contains one or more (virtual) hardware-generated measurements. Quotes may additionally include nonces for replay protection, firmware information, or other platform metadata.
Reference value A specific measurement used as the good known value for a given build environment component or state.

TODO: Disambiguate similar terms (e.g., image, build job, build runner)

Package model

Software is distributed in identifiable units called packages according to the rules and conventions of a package ecosystem. Examples of formal ecosystems include Python/PyPA, Debian/Apt, and OCI, while examples of informal ecosystems include links to files on a website or distribution of first-party software within a company.

Abstractly, a consumer locates software within an ecosystem by asking a package registry to resolve a mutable package name into an immutable package artifact. To publish a package artifact, the software producer asks the registry to update this mapping to resolve to the new artifact. The registry represents the entity or entities with the power to alter what artifacts are accepted by consumers for a given package name. For example, if consumers only accept packages signed by a particular public key, then it is access to that public key that serves as the registry.
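
As a toy model of that mapping (not any real registry’s API), resolving is a lookup from a mutable name to an immutable digest, and publishing updates the mapping:

```python
# Toy model only; names and digests are invented.
registry = {"example-app": "sha256:aaaa..."}  # package name -> artifact digest

def resolve(package_name: str) -> str:
    """Consumer side: resolve a mutable name to an immutable artifact digest."""
    return registry[package_name]

def publish(package_name: str, artifact_digest: str) -> None:
    """Producer side: point the package name at a new immutable artifact."""
    registry[package_name] = artifact_digest

publish("example-app", "sha256:bbbb...")
print(resolve("example-app"))  # sha256:bbbb...
```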

The package name is the primary security boundary within a package ecosystem. Different package names represent materially different pieces of software—different owners, behaviors, security properties, and so on. Therefore, the package name is the primary unit being protected in SLSA. It is the primary identifier to which consumers attach expectations.

Term Description
Package An identifiable unit of software intended for distribution, ambiguously meaning either an “artifact” or a “package name”. Only use this term when the ambiguity is acceptable or desirable.
Package artifact A file or other immutable object that is intended for distribution.
Package ecosystem A set of rules and conventions governing how packages are distributed, including how clients resolve a package name into one or more specific artifacts.
Package manager client Client-side tooling to interact with a package ecosystem.
Package name

The primary identifier for a mutable collection of artifacts that all represent different versions of the same software. This is the primary identifier that consumers use to obtain the software.

A package name is specific to an ecosystem + registry, has a maintainer, is more general than a specific hash or version, and has a “correct” source location. A package ecosystem may group package names into some sort of hierarchy, such as the Group ID in Maven, though SLSA does not have a special term for this.

Package registry An entity responsible for mapping package names to artifacts within a packaging ecosystem. Most ecosystems support multiple registries, usually a single global registry and multiple private registries.
Publish [a package] Make an artifact available for use by registering it with the package registry. In technical terms, this means associating an artifact to a package name. This does not necessarily mean making the artifact fully public; an artifact may be published for only a subset of users, such as internal testing or a closed beta.
Ambiguous terms to avoid
  • Package repository: Could mean either package registry or package name, depending on the ecosystem. To avoid confusion, we always use “repository” exclusively to mean “source repository”, where there is no ambiguity.
  • Package manager (without “client”): Could mean either package ecosystem, package registry, or client-side tooling.

Mapping to real-world ecosystems

Most real-world ecosystems fit the package model above but use different terms. The table below attempts to document how various ecosystems map to the SLSA Package model. There are likely mistakes and omissions; corrections and additions are welcome!

Package ecosystem Package registry Package name Package artifact
Languages
Cargo (Rust) Registry Crate name Artifact
CPAN (Perl) Upload server Distribution Release (or Distribution)
Go Module proxy Module path Module
Maven (Java) Repository Group ID + Artifact ID Artifact
npm (JavaScript) Registry Package Name Package
NuGet (C#) Host Project Package
PyPA (Python) Index Project Name Distribution
Operating systems
Dpkg (e.g. Debian) ? Package name Package
Flatpak Repository Application Bundle
Homebrew (e.g. Mac) Repository (Tap) Package name (Formula) Binary package (Bottle)
Pacman (e.g. Arch) Repository Package name Package
RPM (e.g. Red Hat) Repository Package name Package
Nix (e.g. NixOS) Repository (e.g. Nixpkgs) or binary cache Derivation name Derivation or store object
Storage systems
GCS n/a Object name Object
OCI/Docker Registry Repository Object
Meta
deps.dev: System Packaging authority Package n/a
purl: type Namespace Name n/a

Notes:

  • Go uses a significantly different distribution model than other ecosystems. In Go, the package name is a source repository URL. While clients can fetch directly from that URL—in which case there is no “package” or “registry”—they usually fetch a zip file from a module proxy. The module proxy acts as both a builder (by constructing the package artifact from source) and a registry (by mapping package name to package artifact). People trust the module proxy because builds are independently reproducible and a checksum database guarantees that all clients receive the same artifact for a given URL.

Verification model

Verification in SLSA is performed in two ways. Firstly, the build platform is certified to ensure conformance with the requirements at the level claimed by the build platform. This certification should happen on a recurring cadence with the outcomes published by the platform operator for their users to review and make informed decisions about which builders to trust.

Secondly, artifacts are verified to ensure they meet the producer-defined expectations of where the package source code was retrieved from and on what build platform the package was built.
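
A hedged sketch of this second step is shown below; the provenance field names mirror the recommended format, but the expectations store, package name, and helper are hypothetical.

```python
# Hypothetical expectations store and helper; provenance field names follow the
# recommended SLSA Provenance format.
EXPECTATIONS = {
    "example-app": {
        "source_repository": "https://github.com/example/app",
        "builder_id": "https://example.com/trusted-builder",
    },
}

def meets_expectations(package_name: str, provenance: dict) -> bool:
    expected = EXPECTATIONS[package_name]
    external = provenance["buildDefinition"]["externalParameters"]
    builder = provenance["runDetails"]["builder"]["id"]
    return (external.get("repository") == expected["source_repository"]
            and builder == expected["builder_id"])

provenance = {
    "buildDefinition": {
        "externalParameters": {"repository": "https://github.com/example/app"},
    },
    "runDetails": {"builder": {"id": "https://example.com/trusted-builder"}},
}
print(meets_expectations("example-app", provenance))  # True
```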

Verification Model

Term Description
Expectations A set of constraints on the package’s provenance metadata. The package producer sets expectations for a package, whether explicitly or implicitly.
Provenance verification Artifacts are verified by the package ecosystem to ensure that the package’s expectations are met before the package is used.
Build platform certification Build platforms are certified for their conformance to the SLSA requirements at the stated level.

The examples below suggest some ways that expectations and verification may be implemented for different, broadly defined, package ecosystems.

Example: Small software team
Term Example
Expectations Defined by the producer’s security personnel and stored in a database.
Provenance verification Performed automatically on cluster nodes before execution by querying the expectations database.
Build platform certification The build platform implementer follows secure design and development best practices, does annual penetration testing exercises, and self-certifies their conformance to SLSA requirements.
Example: Open source language distribution
Term Example
Expectations Defined separately for each package and stored in the package registry.
Provenance verification The language distribution registry verifies newly uploaded packages meet expectations before publishing them. Further, the package manager client also verifies expectations prior to installing packages.
Build platform certification Performed by the language ecosystem packaging authority.

Security levels

SLSA is organized into a series of levels that provide increasing supply chain security guarantees. This gives you confidence that software hasn’t been tampered with and can be securely traced back to its source.

This section is a descriptive overview of the SLSA levels and tracks, describing their intent. For the prescriptive requirements for each level, see Requirements. For a general overview of SLSA, see About SLSA.

Levels and tracks

SLSA levels are split into tracks. Each track has its own set of levels that measure a particular aspect of supply chain security. The purpose of tracks is to recognize progress made in one aspect of security without blocking on an unrelated aspect. Tracks also allow the SLSA spec to evolve: we can add more tracks without invalidating previous levels.

Track/Level Requirements Focus
Build L0 (none) (n/a)
Build L1 Provenance showing how the package was built Mistakes, documentation
Build L2 Signed provenance, generated by a hosted build platform Tampering after the build
Build L3 Hardened build platform Tampering during the build

Note: The previous version of the specification used a single unnamed track, SLSA 1–4. For version 1.0 the Source aspects were removed to focus on the Build track. A Source track may be added in future versions.

Build track

The SLSA build track describes increasing levels of trustworthiness and completeness in a package artifact’s provenance. Provenance describes what entity built the artifact, what process they used, and what the inputs were. The lowest level only requires the provenance to exist, while higher levels provide increasing protection against tampering of the build, the provenance, or the artifact.

The primary purpose of the build track is to enable verification that the artifact was built as expected. Consumers have some way of knowing what the expected provenance should look like for a given package and then compare each package artifact’s actual provenance to those expectations. Doing so prevents several classes of supply chain threats.

Each ecosystem (for open source) or organization (for closed source) defines exactly how this is implemented, including: means of defining expectations, what provenance format is accepted, whether reproducible builds are used, how provenance is distributed, when verification happens, and what happens on failure. Guidelines for implementers can be found in the requirements.

Build L0: No guarantees

Summary

No requirements—L0 represents the lack of SLSA.

Intended for

Development or test builds of software that are built and run on the same machine, such as unit tests.

Requirements

n/a

Benefits

n/a

Build L1: Provenance exists

Summary

Package has provenance showing how it was built. Can be used to prevent mistakes but is trivial to bypass or forge.

Intended for

Projects and organizations wanting to easily and quickly gain some benefits of SLSA—other than tamper protection—without changing their build workflows.

Requirements
  • Software Producer:
    • Follow a consistent build process so that others can form expectations about what a “correct” build looks like.
    • Run builds on a build platform that meets Build L1 requirements.
    • Distribute provenance to consumers, preferably using a convention determined by the package ecosystem.
  • Build platform:
    • Automatically generate provenance describing how the artifact was built, including: what entity built the package, what build process they used, and what the top-level inputs to the build were.
Benefits
  • Makes it easier for both producers and consumers to debug, patch, rebuild, and/or analyze the software by knowing its precise source version and build process.

  • With verification, prevents mistakes during the release process, such as building from a commit that is not present in the upstream repo.

  • Aids organizations in creating an inventory of software and build platforms used across a variety of teams.

Notes
  • Provenance may be incomplete and/or unsigned at L1. Higher levels require more complete and trustworthy provenance.

Build L2: Hosted build platform

Summary

Forging the provenance or evading verification requires an explicit “attack”, though this may be easy to perform. Deters unsophisticated adversaries or those who face legal or financial risk.

In practice, this means that builds run on a hosted platform that generates and signs1 the provenance.

Intended for

Projects and organizations wanting to gain moderate security benefits of SLSA by switching to a hosted build platform, while waiting for changes to the build platform itself required by Build L3.

Requirements

All of Build L1, plus:

  • Software producer:
    • Run builds on a hosted build platform that meets Build L2 requirements.
  • Build platform:
    • Generate and sign1 the provenance itself. This may be done during the original build, an after-the-fact reproducible build, or some equivalent system that ensures the trustworthiness of the provenance.
  • Consumer:
    • Validate the authenticity of the provenance.
Benefits

All of Build L1, plus:

  • Prevents tampering after the build through digital signatures1.

  • Deters adversaries who face legal or financial risk by evading security controls, such as employees who face risk of getting fired.

  • Reduces attack surface by limiting builds to specific build platforms that can be audited and hardened.

  • Allows large-scale migration of teams to supported build platforms early while further hardening work (Build L3) is done in parallel.

Build L3: Hardened builds

Summary

Forging the provenance or evading verification requires exploiting a vulnerability that is beyond the capabilities of most adversaries.

In practice, this means that builds run on a hardened build platform that offers strong tamper protection.

Intended for

Most software releases. Build L3 usually requires significant changes to existing build platforms.

Requirements

All of Build L2, plus:

  • Software producer:
    • Run builds on a hosted build platform that meets Build L3 requirements.
  • Build platform:
    • Implement strong controls to:
      • prevent runs from influencing one another, even within the same project.
      • prevent secret material used to sign the provenance from being accessible to the user-defined build steps.
Benefits

All of Build L2, plus:

  • Prevents tampering during the build—by insider threats, compromised credentials, or other tenants.

  • Greatly reduces the impact of compromised package upload credentials by requiring the attacker to perform a difficult exploit of the build process.

  • Provides strong confidence that the package was built from the official source and build process.

  1. Alternate means of verifying the authenticity of the provenance are also acceptable.

Producing artifacts

This section covers the detailed technical requirements for producing artifacts at each SLSA level. The intended audience is platform implementers and security engineers.

For an informative description of the levels intended for all audiences, see Levels. For background, see Terminology. To better understand the reasoning behind the requirements, see Threats and mitigations.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Overview

Build levels

In order to produce artifacts with a specific build level, responsibility is split between the Producer and the Build platform. The build platform MUST strengthen its security controls in order to achieve a specific level, while the producer MUST choose and adopt a build platform capable of achieving the desired build level and implement any controls specified by the chosen platform.

  • Producer:
    • Choose an appropriate build platform (required at L1 and above)
    • Follow a consistent build process (required at L1 and above)
    • Distribute provenance (required at L1 and above)
  • Build platform, provenance generation:
    • Provenance Exists (required at L1 and above)
    • Provenance is Authentic (required at L2 and above)
    • Provenance is Unforgeable (required at L3)
  • Build platform, isolation strength:
    • Hosted (required at L2 and above)
    • Isolated (required at L3)

Security Best Practices

While the exact definition of what constitutes a secure platform is beyond the scope of this specification, all implementations MUST use industry security best practices to be conformant to this specification. This includes, but is not limited to, using proper access controls, securing communications, implementing proper management of cryptographic secrets, doing frequent updates, and promptly fixing known vulnerabilities.

Various relevant standards and guides can be consulted for that matter such as the CIS Critical Security Controls.

Producer

A package’s producer is the organization that owns and releases the software. It might be an open-source project, a company, a team within a company, or even an individual.

NOTE: There were more requirements for producers in the initial draft version (v0.1) which impacted how a package can be built. These were removed in the v1.0 specification and will be reassessed and re-added as indicated in the future directions.

Choose an appropriate build platform

The producer MUST select a build platform that is capable of reaching their desired SLSA Build Level.

For example, if a producer wishes to produce a Build Level 3 artifact, they MUST choose a builder capable of producing Build Level 3 provenance.

Follow a consistent build process

The producer MUST build their artifact in a consistent manner such that verifiers can form expectations about the build process. In some implementations, the producer MAY provide explicit metadata to a verifier about their build process. In others, the verifier will form their expectations implicitly (e.g. trust on first use).

If a producer wishes to distribute their artifact through a package ecosystem that requires explicit metadata about the build process in the form of a configuration file, the producer MUST complete the configuration file and keep it up to date. This metadata might include information related to the artifact’s source repository and build parameters.
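
The following snippet shows conceptually what such a configuration file might capture, expressed here as a Python dictionary for illustration; the field names are made up, and each package ecosystem defines its own schema.

    # Hypothetical build-process metadata a producer might declare to their
    # package ecosystem. Field names are illustrative only.
    declared_build_metadata = {
        "package": "example-package",
        "source_repository": "https://github.com/example/example-package",
        "build_type": "https://example.com/build-types/container@v1",
        "external_parameters": {
            "config_path": ".build/release.yaml",   # build definition kept in source
            "ref_pattern": "refs/tags/v*",          # releases built only from tags
        },
    }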

Distribute provenance

The producer MUST distribute provenance to artifact consumers. The producer MAY delegate this responsibility to the package ecosystem, provided that the package ecosystem is capable of distributing provenance.

Build Platform

A package’s build platform is the infrastructure used to transform the software from source to package. This includes the transitive closure of all hardware, software, persons, and organizations that can influence the build. A build platform is often a hosted, multi-tenant build service, but it could be a system of multiple independent rebuilders, a special-purpose build platform used by a single software project, or even an individual’s workstation. Ideally, one build platform is used by many different software packages so that consumers can minimize the number of trusted platforms. For more background, see Build Model.

The build platform is responsible for providing two things: provenance generation and isolation between builds. The Build level describes the degree to which each of these properties is met.

Provenance generation

The build platform is responsible for generating provenance describing how the package was produced.

The SLSA Build level describes the overall provenance integrity according to minimum requirements on its:

  • Completeness: What information is contained in the provenance?
  • Authenticity: How strongly can the provenance be tied back to the builder?
  • Accuracy: How resistant is the provenance generation to tampering within the build process?
Each of the following requirements applies starting at the indicated Build level: Provenance Exists (L1 and above), Provenance is Authentic (L2 and above), Provenance is Unforgeable (L3).
Provenance Exists

The build process MUST generate provenance that unambiguously identifies the output package by cryptographic digest and describes how that package was produced. The format MUST be acceptable to the package ecosystem and/or consumer.

It is RECOMMENDED to use the SLSA Provenance format and associated suite because it is designed to be interoperable, universal, and unambiguous when used for SLSA. See that format’s documentation for requirements and implementation guidelines.

If using an alternate format, it MUST contain the equivalent information as SLSA Provenance at each level and SHOULD be bi-directionally translatable to SLSA Provenance.

  • Completeness: Best effort. The provenance at L1 SHOULD contain sufficient information to catch mistakes and simulate the user experience at higher levels in the absence of tampering. In other words, the contents of the provenance SHOULD be the same at all Build levels, but a few fields MAY be absent at L1 if they are prohibitively expensive to implement.
  • Authenticity: No requirements.
  • Accuracy: No requirements.
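
The following snippet shows conceptually what a minimal L1 provenance statement might contain when using the recommended SLSA Provenance format, expressed as a Python dictionary with placeholder values; the format’s own documentation is authoritative.

    # Minimal sketch of a provenance statement in the recommended in-toto /
    # SLSA Provenance v1 layout. All values are placeholders.
    provenance_statement = {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [
            {"name": "example-package-1.2.3.tar.gz",
             "digest": {"sha256": "<artifact digest>"}},
        ],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                "buildType": "https://example.com/build-types/container@v1",
                "externalParameters": {
                    "repository": "https://github.com/example/example-package",
                },
                "resolvedDependencies": [],  # best effort at L1
            },
            "runDetails": {
                "builder": {"id": "https://builder.example.com/slsa/l1"},
            },
        },
    }
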
Provenance is Authentic

Authenticity: Consumers MUST be able to validate the authenticity of the provenance attestation in order to:

  • Ensure integrity: Verify that the digital signature of the provenance attestation is valid and the provenance was not tampered with after the build.
  • Define trust: Identify the build platform and other entities that are necessary to trust in order to trust the artifact they produced.

This SHOULD be through a digital signature from a private key accessible only to the build platform component that generated the provenance attestation.

While many constraints affect choice of signing methodologies, it is RECOMMENDED that build platforms use signing methodologies which improve the ability to detect and remediate key compromise, such as methods which rely on transparency logs or, when transparency isn’t appropriate, time stamping services.

Authenticity allows the consumer to trust the contents of the provenance attestation, such as the identity of the build platform.

Accuracy: The provenance MUST be generated by the control plane (i.e. within the trust boundary identified in the provenance) and not by a tenant of the build platform (i.e. outside the trust boundary), except as noted below.

  • The data in the provenance MUST be obtained from the build platform, either because the generator is the build platform or because the provenance generator reads the data directly from the build platform.
  • The build platform MUST have some security control to prevent tenants from tampering with the provenance. However, there is no minimum bound on the strength. The purpose is to deter adversaries who might face legal or financial risk from evading controls.
  • Exceptions for fields that MAY be generated by a tenant of the build platform:
    • The names and cryptographic digests of the output artifacts, i.e. subject in SLSA Provenance. See the “forge output digest of the provenance” threat for an explanation of why this is acceptable.
    • Any field that is not marked as REQUIRED for Build L2. For example, resolvedDependencies in SLSA Provenance MAY be tenant-generated at Build L2. Builders SHOULD document any such cases of tenant-generated fields.

Completeness: SHOULD be complete.

  • There MAY be external parameters that are not sufficiently captured in the provenance.
  • Completeness of resolved dependencies is best effort.
Provenance is Unforgeable

Accuracy: Provenance MUST be strongly resistant to forgery by tenants.

  • Any secret material used for authenticating the provenance, for example the signing key used to generate a digital signature, MUST be stored in a secure management system appropriate for such material and accessible only to the build service account.
  • Such secret material MUST NOT be accessible to the environment running the user-defined build steps.
  • Every field in the provenance MUST be generated or verified by the build platform in a trusted control plane. The user-controlled build steps MUST NOT be able to inject or alter the contents, except as noted in Provenance is Authentic. (Build L3 does not require additional fields beyond those of L2.)

Completeness: SHOULD be complete.

  • External parameters MUST be fully enumerated.
  • Completeness of resolved dependencies is best effort.

Note: This requirement was called “non-falsifiable” in the initial draft version (v0.1).

Isolation strength

The build platform is responsible for isolating builds from one another, even within the same tenant project. In other words, how strong of a guarantee do we have that the build really executed correctly, without external influence?

The SLSA Build level describes the minimum bar for isolation strength. For more information on assessing a build platform’s isolation strength, see Verifying build platforms.

Each of the following requirements applies starting at the indicated Build level: Hosted (L2 and above), Isolated (L3).
Hosted

All build steps ran using a hosted build platform on shared or dedicated infrastructure, not on an individual’s workstation.

Examples: GitHub Actions, Google Cloud Build, Travis CI.

Isolated

The build platform ensured that the build steps ran in an isolated environment, free of unintended external influence. In other words, any external influence on the build was specifically requested by the build itself. This MUST hold true even between builds within the same tenant project.

The build platform MUST guarantee the following:

  • It MUST NOT be possible for a build to access any secrets of the build platform, such as the provenance signing key, because doing so would compromise the authenticity of the provenance.
  • It MUST NOT be possible for two builds that overlap in time to influence one another, such as by altering the memory of a different build process running on the same machine.
  • It MUST NOT be possible for one build to persist or influence the build environment of a subsequent build. In other words, an ephemeral build environment MUST be provisioned for each build.
  • It MUST NOT be possible for one build to inject false entries into a build cache used by another build, also known as “cache poisoning”. In other words, the output of the build MUST be identical whether or not the cache is used.
  • The build platform MUST NOT open services that allow for remote influence unless all such interactions are captured as externalParameters in the provenance.

There are no sub-requirements on the build itself. Build L3 is limited to ensuring that a well-intentioned build runs securely. It does not require that a build platform prevents a producer from performing a risky or insecure build. In particular, the “Isolated” requirement does not prohibit a build from calling out to a remote execution service or a “self-hosted runner” that is outside the trust boundary of the build platform.

NOTE: This requirement was split into “Isolated” and “Ephemeral Environment” in the initial draft version (v0.1).

NOTE: This requirement is not to be confused with “Hermetic”, which roughly means that the build ran with no network access. Meeting such a requirement involves substantial changes to both the build platform and each individual build, and it is considered in the future directions.

Distributing provenance

In order to make provenance for artifacts available after generation for verification, SLSA requires the distribution and verification of provenance metadata in the form of SLSA attestations.

This document provides specifications for distributing provenance, and the relationship between build artifacts and provenance (build attestations). It is primarily concerned with ecosystems that distribute build artifacts, but some attention is also paid to ecosystems that distribute container images or only distribute source artifacts, as many of the same principles generally apply to any artifact or group of artifacts.

In addition, this document is intended primarily for artifact distributors, to help them understand how they can adopt the distribution of SLSA provenance. It focuses on the means of distributing attestations and the relationship of attestations to build artifacts, not on the specific format of the attestation itself.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Background

The package ecosystem’s maintainers are responsible for reliably redistributing artifacts and provenance, making the producers’ expectations available to consumers, and providing tools to enable safe artifact consumption (e.g. whether an artifact meets its producer’s expectations).

Relationship between releases and attestations

Attestations SHOULD be bound to artifacts, not releases.

A single “release” of a project, package, or library might include multiple artifacts. These artifacts result from builds on different platforms, architectures or environments. The builds need not happen at roughly the same point in time and might even span multiple days.

It is often difficult or impossible to determine when a release is ‘finished’ because many ecosystems allow adding new artifacts to old releases when adding support for new platforms or architectures. Therefore, the set of attestations for a given release MAY grow over time as additional builds and attestations are created.

Thus, package ecosystems SHOULD support multiple individual attestations per release. At the time of a given build, the relevant provenance for that build can be added to the release, depending on the relationship to the given artifacts.

Relationship between artifacts and attestations

Package ecosystems SHOULD support a one-to-many relationship from build artifacts to attestations to ensure that anyone is free to produce and publish any attestation they might need. However, while many kinds of attestations can relate to a given artifact, SLSA is primarily concerned with build attestations, i.e. provenance. As such, this specification only considers build attestations produced by the same maintainers as the artifacts themselves.

By providing provenance alongside an artifact in the manner specified by a given ecosystem, maintainers are considered to be ‘elevating’ these build attestations above all other possible attestations that could be provided by third parties for a given artifact. The ultimate goal is for maintainers to provide the provenance necessary for a repository to verify a potential policy that requires a certain SLSA level for publication, not to support the publication of arbitrary attestations by third parties.

As a result, this provenance SHOULD accompany the artifact at publish time, and package ecosystems SHOULD provide a way to map a given artifact to its corresponding attestations. The mappings can be either implicit (e.g. require a custom filename schema that uniquely identifies the provenance over other attestation types) or explicit (e.g. it could happen as a de-facto standard based on where the attestation is published).

The provenance SHOULD have a filename that is directly related to the build artifact filename. For example, for an artifact <filename>.<extension>, the attestation is <filename>.attestation or some similar extension (for example, in-toto recommends <filename>.intoto.jsonl).
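
The following snippet shows conceptually how such a naming convention might be applied, using Python for illustration; whether the original extension is kept or replaced is an ecosystem choice, and neither variant is mandated by SLSA.

    from pathlib import Path

    # Two illustrative naming conventions for the attestation that accompanies
    # an artifact. Ecosystems choose their own convention.
    def attestation_name_replacing_extension(artifact: str) -> str:
        return str(Path(artifact).with_suffix(".attestation"))

    def attestation_name_appending_suffix(artifact: str) -> str:
        return artifact + ".intoto.jsonl"   # suffix suggested by in-toto

    print(attestation_name_replacing_extension("example-1.2.3.tar.gz"))  # example-1.2.3.tar.attestation
    print(attestation_name_appending_suffix("example-1.2.3.tar.gz"))     # example-1.2.3.tar.gz.intoto.jsonl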

Where attestations are published

There are a number of opportunities and venues to publish attestations during and after the build process. Producers MUST publish attestations in at least one place, and SHOULD publish attestations in more than one place:

  • Publish attestations alongside the source repository releases: If the source repository hosting provider offers an artifact “release” feature, such as GitHub releases or GitLab releases, producers SHOULD include provenance as part of such releases. This option has the benefit of requiring no changes to the package registry to support provenance formats, but has the disadvantage of putting the source repository hosting provider in the critical path for installers that want to verify policy at build-time.
  • Publish attestations alongside the artifact in the package registry: Many software repositories already support some variety of publishing 1:1 related files alongside an artifact, sometimes known as “sidecar files”. For example, PyPI supports publishing .asc files representing the PGP signature for an artifact with the same filename (but different extension). This option requires the mapping between artifact and attestation (or attestation vessel) to be 1:1.
  • Publish attestations elsewhere, record their existence in a transparency log: Once an attestation has been generated and published for a build, a hash of the attestation and a pointer to where it is indexed SHOULD be published to a third-party transparency log that exists outside the source repository and package registry. Not only are transparency logs such as Rekor from Sigstore guaranteed to be immutable, but they typically also make monitoring easier. Requiring the presence of the attestation in a monitored transparency log during verification helps ensure the attestation is trustworthy.

Combining these options gives us a process for bootstrapping SLSA adoption within an ecosystem, even if the package registry doesn’t support publishing attestations. First, interested projects modify their release process to produce SLSA provenance. Then, they publish that provenance to their source repository. Finally, they publish the provenance to the package registry, if and when the registry supports it.

Long-term, package registries SHOULD support uploading and distributing provenance alongside the artifact. This model is preferred for two reasons:

  • trust: clients already trust the package registry as the source of their artifacts, and don’t need to trust an additional service;
  • reliability: clients already depend on the package registry as part of their critical path, so distributing provenance via the registry avoids adding an additional point of failure.

Short term, consumers of build artifacts can bootstrap a manual policy by using the source repository only for projects that publish all artifacts and attestations to the source repository, and later extend this to all artifacts published to the package registry via the canonical installation tools once a given ecosystem supports them.

Immutability of attestations

Attestations SHOULD be immutable. Once an attestation for a given artifact has been published, that attestation is immutable and cannot be overwritten later with a different attestation that refers to the same artifact. Instead, a new release (and new artifacts) SHOULD be created.

Format of the attestation

The provenance is available to the consumer in a format that the consumer accepts. The format SHOULD be in-toto SLSA Provenance, but another format MAY be used if both producer and consumer agree and it meets all the other requirements.

Considerations for source-based ecosystems

Some ecosystems have support for installing directly from source repositories (an option for Python/pip, Go, etc). In these cases, there is no need to publish or verify provenance because there is no “build” step that translates between a source repository and an artifact that is being installed.

However, for ecosystems that install from source repositories via some intermediary (e.g. Homebrew installing from GitHub release artifacts generated from the repository or GitHub Packages, Go installing through the Go module proxy), these ecosystems distribute “source archives” that are not bit-for-bit identical to the contents of version control. These intermediaries transform the original source repository in some way that constitutes a “build”, and as a result they SHOULD provide build provenance for this “package”; the recommendations outlined here apply.

Verifying build platforms

One of SLSA’s guiding principles is to “trust platforms, verify artifacts”. However, consumers cannot trust platforms to produce Build L3 artifacts and provenance unless they have some proof that the provenance is unforgeable and the builds are isolated.

This section describes the parts of a build platform that consumers SHOULD assess and provides sample questions consumers can ask when assessing a build platform. See also Threats & mitigations and the build model.

Threats

Adversary goal

The SLSA Build track defends against an adversary whose primary goal is to inject unofficial behavior into a package artifact while avoiding detection. Remember that verifiers only accept artifacts whose provenance matches expectations. To bypass this, the adversary tries to either (a) tamper with a legitimate build whose provenance already matches expectations, or (b) tamper with an illegitimate build’s provenance to make it match expectations.

More formally, if a build with external parameters P would produce an artifact with binary hash X and a build with external parameters P’ would produce an artifact with binary hash Y, the adversary wishes to produce provenance indicating that a build with external parameters P produced an artifact with binary hash Y.

See threats D, E, F, and G for examples of specific threats.

Note: Platform abuse (e.g. running non-build workloads) and attacks against builder availability are out of scope of this document.

Adversary profiles

Consumers SHOULD also evaluate the build platform’s ability to defend against the following types of adversaries.

  1. Project contributors, who can:
    • Create builds on the build platform. These are the adversary’s controlled builds.
    • Modify one or more controlled builds’ external parameters.
    • Modify one or more controlled builds’ environments and run arbitrary code inside those environments.
    • Read the target build’s source repo.
    • Fork the target build’s source repo.
    • Modify a fork of the target build’s source repo and build from it.
  2. Project maintainer, who can:
    • Do everything listed under “project contributors”.
    • Create new builds under the target build’s project or identity.
    • Modify the target build’s source repo and build from it.
    • Modify the target build’s configuration.
  3. Build platform administrators, who can:
    • Do everything listed under “project contributors” and “project maintainers”.
    • Run arbitrary code on the build platform.
    • Read and modify network traffic.
    • Access the control plane’s cryptographic secrets.
    • Remotely access build environments (e.g. via SSH).

Build platform components

Consumers SHOULD consider at least these five elements of the build model when assessing build platforms for SLSA conformance: external parameters, control plane, build environments, caches, and outputs.

[Figure: the build model and its five elements: external parameters, control plane, build environments, caches, and outputs.]

The following subsections detail these elements of the build model and give prompts for assessing a build platform’s ability to produce SLSA Build L3 provenance. The assessment SHOULD take into account the security model used to identify the transitive closure of the builder.id in the provenance model, specifically the platform’s boundaries, actors, and interfaces.

External parameters

External parameters are the external interface to the builder and include all inputs to the build process. Examples include the source to be built, the build definition/script to be executed, user-provided instructions to the control plane for how to create the build environment (e.g. which operating system to use), and any additional user-provided strings.

Prompts for assessing external parameters
  • How does the control plane process user-provided external parameters? Examples: sanitizing, parsing, not at all
  • Which external parameters are processed by the control plane and which are processed by the build environment?
  • What sort of external parameters does the control plane accept for build environment configuration?
  • How do you ensure that all external parameters are represented in the provenance?
  • How will you ensure that future design changes will not add additional external parameters without representing them in the provenance?

Control plane

The control plane is the build platform component that orchestrates each independent build execution. It is responsible for setting up each build and cleaning up afterwards. At SLSA Build L2+ the control plane generates and signs provenance for each build performed on the build platform. The control plane is operated by one or more administrators, who have privileges to modify the control plane.

Prompts for assessing the control plane
  • Administration

    • What are the ways an employee can use privileged access to influence a build or provenance generation? Examples: physical access, terminal access, access to cryptographic secrets
    • What controls are in place to detect or prevent the employee from abusing such access? Examples: two-person approvals, audit logging, workload identities
    • Roughly how many employees have such access?
    • How are privileged accounts protected? Examples: two-factor authentication, client device security policies
    • What plans do you have for recovering from security incidents and platform outages? Are they tested? How frequently?
  • Provenance generation

    • How does the control plane observe the build to ensure the provenance’s accuracy?
    • Are there situations in which the control plane will not generate provenance for a completed build? What are they?
  • Development practices

    • How do you track the control plane’s software and configuration? Example: version control
    • How do you build confidence in the control plane’s software supply chain? Example: SLSA L3+ provenance, build from source
    • How do you secure communications between builder components? Example: TLS with certificate transparency.
    • Are you able to perform forensic analysis on compromised build environments? How? Example: retain base images indefinitely
  • Creating build environments

    • How does the control plane share data with build environments? Example: mounting a shared file system partition
    • How does the control plane protect its integrity from build environments? Example: not mounting its own file system partitions on build environments
    • How does the control plane prevent build environments from accessing its cryptographic secrets? Examples: dedicated secret storage, not mounting its own file system partitions to build environments, hardware security modules
  • Managing cryptographic secrets

    • How do you store the control plane’s cryptographic secrets?
    • Which parts of the organization have access to the control plane’s cryptographic secrets?
    • What controls are in place to detect or prevent employees abusing such access? Examples: two-person approvals, audit logging
    • How are secrets protected in memory? Examples: secrets are stored in hardware security modules and backed up in secure cold storage
    • How frequently are cryptographic secrets rotated? Describe the rotation process.
    • What is your plan for remediating cryptographic secret compromise? How frequently is this plan tested?

Build environment

The build environment is the independent execution context where the build takes place. In the case of a distributed build, the build environment is the collection of all execution contexts that run build steps. Each build environment must be isolated from the control plane and from all other build environments, including those running builds from the same tenant or project. Tenants are free to modify the build environment arbitrarily. Build environments must have a means to fetch input artifacts (source, dependencies, etc).

Prompts for assessing build environments
  • Isolation technologies

    • How are build environments isolated from the control plane and each other? Examples: VMs, containers, sandboxed processes
    • How is separation achieved between trusted and untrusted processes?
    • How have you hardened your build environments against malicious tenants? Examples: configuration hardening, limiting attack surface
    • How frequently do you update your isolation software?
    • What is your process for responding to vulnerability disclosures? What about vulnerabilities in your dependencies?
    • What prevents a malicious build from gaining persistence and influencing subsequent builds?
  • Creation and destruction

    • What operating system and utilities are available in build environments on creation? How were these elements chosen? Examples: A minimal Linux distribution with its package manager, macOS with Homebrew
    • How long could a compromised build environment remain active in the build platform?
  • Network access

    • Are build environments able to call out to remote execution services? If so, how do you prevent them from tampering with the control plane or other build environments over the network?
    • Are build environments able to open services on the network? If so, how do you prevent remote interference through these services?

Cache

Builders may have zero or more caches to store frequently used dependencies. Build environments may have either read-only or read-write access to caches.

Prompts for assessing caches
  • What sorts of caches are available to build environments?
  • How are those caches populated?
  • How are cache contents validated before use?

Output storage

Output Storage holds built artifacts and their provenance. Storage may either be shared between build projects or allocated separately per-project.

Prompts for assessing output storage
  • How do you prevent builds from reading or overwriting files that belong to another build? Example: authorization on storage
  • What processing, if any, does the control plane do on output artifacts?

Builder evaluation

Organizations can either self-attest to their answers or seek certification from a third-party auditor. Evidence for self-attestation should be published on the internet and can include information such as the security model defined as part of the provenance. Evidence submitted for third-party certification need not be published.

Verifying artifacts

SLSA uses provenance to indicate whether an artifact is authentic or not, but provenance doesn’t do anything unless somebody inspects it. SLSA calls that inspection verification, and this section describes recommendations for how to verify artifacts and their SLSA provenance.

This section is divided into several subsections. The first describes the process for verifying an artifact and its provenance against a set of expectations. The second describes how to form the expectations used to verify provenance. The third discusses architecture choices for where provenance verification can happen.

How to verify

Verification SHOULD include the following steps:

  • Ensuring that the builder identity is one of those in the map of trusted builder IDs to SLSA levels.
  • Verifying the signature on the provenance envelope.
  • Ensuring that the values for buildType and externalParameters in the provenance match the expected values. The package ecosystem MAY allow an approved list of externalParameters to be ignored during verification. Any unrecognized externalParameters SHOULD cause verification to fail.

Threats covered by each step

See Terminology for an explanation of supply chain model and Threats & mitigations for a detailed explanation of each threat.

Note: This subsection assumes that the provenance is in the recommended provenance format. If it is not, then the verifier SHOULD perform equivalent checks on provenance fields that correspond to the ones referenced here.

Step 1: Check SLSA Build level

First, check the SLSA Build level by comparing the artifact to its provenance and the provenance to a preconfigured root of trust. The goal is to ensure that the provenance actually applies to the artifact in question and to assess the trustworthiness of the provenance. This mitigates some or all of threats “E”, “F”, “G”, and “H”, depending on SLSA Build level and where verification happens.

Once, when bootstrapping the verifier:

  • Configure the verifier’s roots of trust, meaning the recognized builder identities and the maximum SLSA Build level each builder is trusted up to. Different verifiers might use different roots of trust, but usually a verifier uses the same roots of trust for all packages. This configuration is likely in the form of a map from (builder public key identity, builder.id) to (SLSA Build level) drawn from the SLSA Conformance Program (coming soon).

    Example root of trust configuration

    The following snippet shows conceptually how a verifier’s roots of trust might be configured using made-up syntax.

    "slsaRootsOfTrust": [
        // A builder trusted at SLSA Build L3, using a fixed public key.
        {
            "publicKey": "HKJEwI...",
            "builderId": "https://somebuilder.example.com/slsa/l3",
            "slsaBuildLevel": 3
        },
        // A different builder that claims to be SLSA Build L3,
        // but this verifier only trusts it to L2.
        {
            "publicKey": "tLykq9...",
            "builderId": "https://differentbuilder.example.com/slsa/l3",
            "slsaBuildLevel": 2
        },
        // A builder that uses Sigstore for authentication.
        {
            "sigstore": {
                "root": "global",  // identifies fulcio/rekor roots
                "subjectAlternativeNamePattern": "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v*.*.*"
            },
            "builderId": "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v*.*.*",
            "slsaBuildLevel": 3
        }
        ...
    ],
    

Given an artifact and its provenance:

  1. Verify the envelope’s signature using the roots of trust, resulting in a list of recognized public keys (or equivalent).
  2. Verify that the statement’s subject matches the digest of the artifact in question.
  3. Verify that the predicateType is https://slsa.dev/provenance/v1.
  4. Look up the SLSA Build Level in the roots of trust, using the recognized public keys and the builder.id, defaulting to SLSA Build L1.
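
The following snippet shows conceptually how these four steps might be implemented, using Python and made-up helper names; verify_signature is assumed to check the envelope against the roots of trust and return the recognized keys together with the decoded statement.

    from typing import Callable

    def check_build_level(
        artifact_digest: str,
        envelope: bytes,
        roots_of_trust: list,
        verify_signature: Callable,   # assumed helper: returns (keys, statement)
    ) -> int:
        # 1. Verify the envelope's signature against the roots of trust.
        recognized_keys, statement = verify_signature(envelope, roots_of_trust)
        # 2. The statement's subject must include the artifact being verified.
        subjects = {s["digest"]["sha256"] for s in statement["subject"]}
        if artifact_digest not in subjects:
            raise ValueError("provenance does not describe this artifact")
        # 3. The predicate type must be SLSA Provenance v1.
        if statement["predicateType"] != "https://slsa.dev/provenance/v1":
            raise ValueError("unexpected predicate type")
        # 4. Look up the Build level for (public key, builder.id); default to L1.
        builder_id = statement["predicate"]["runDetails"]["builder"]["id"]
        for entry in roots_of_trust:
            if entry.get("builderId") == builder_id and entry.get("publicKey") in recognized_keys:
                return entry["slsaBuildLevel"]
        return 1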

Resulting threat mitigation:

  • Threat “E”: SLSA Build L3 requires protection against compromise of the build process and provenance generation by an external adversary, such as persistence between builds or theft of the provenance signing key. In other words, SLSA Build L3 establishes that the provenance is accurate and trustworthy, assuming you trust the build platform.
    • IMPORTANT: SLSA Build L3 does not cover compromise of the build platform itself, such as by a malicious insider. Instead, verifiers SHOULD carefully consider which build platforms are added to the roots of trust. For advice on establishing trust in build platforms, see Verifying build platforms.
  • Threat “F”: SLSA Build L2 covers tampering of the artifact or provenance after the build. This is accomplished by verifying the subject and signature in the steps above.
  • Threat “G”: Verification by the consumer or otherwise outside of the package registry covers compromise of the registry itself. (Verifying within the registry at publication time is also valuable, but does not cover Threat “G” or “I”.)
  • Threat “I”: Verification by the consumer covers compromise of the package in transit. (Many ecosystems also address this threat using package signatures or checksums.)
    • NOTE: SLSA does not yet cover adversaries tricking a consumer to use an unintended package, such as through typosquatting. Those threats are discussed in more detail under Threat “H”.

Step 2: Check expectations

Next, check that the package’s provenance meets your expectations for that package in order to mitigate threat “D”.

In our threat model, the adversary has the ability to invoke a build and to publish to the registry. The adversary is not able to write to the source repository, nor do they have insider access to any trusted systems. Your expectations SHOULD be sufficient to detect or prevent this adversary from injecting unofficial behavior into the package.

You SHOULD compare the provenance against expected values for at least the following fields:

What Why
Builder identity from Step 1 To prevent an adversary from building the correct code on an unintended platform
Canonical source repository To prevent an adversary from building from an unofficial fork (or other disallowed source)
buildType To ensure that externalParameters are interpreted as intended
externalParameters To prevent an adversary from injecting unofficial behavior

Verification tools SHOULD reject unrecognized fields in externalParameters to err on the side of caution. It is acceptable to allow a parameter to have a range of values (possibly any value) if it is known that any value in the range is safe. JSON comparison is sufficient for verifying parameters.
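
The following snippet shows conceptually how such a comparison might be performed on the recommended provenance format, using Python; the expectation structure and the allowance for ignorable parameters are modeled on the text above rather than on any defined SLSA data format. In many buildTypes the canonical source repository appears as one of the externalParameters, and the builder identity is checked in Step 1.

    def meets_expectations(statement: dict, expected: dict, ignorable: frozenset = frozenset()) -> bool:
        build_def = statement["predicate"]["buildDefinition"]
        # buildType must match so that externalParameters are interpreted as intended.
        if build_def["buildType"] != expected["buildType"]:
            return False
        # Compare externalParameters by simple JSON-style equality, dropping only
        # parameters the ecosystem has explicitly approved for exclusion.
        actual = {k: v for k, v in build_def["externalParameters"].items() if k not in ignorable}
        wanted = {k: v for k, v in expected["externalParameters"].items() if k not in ignorable}
        # Unrecognized parameters cause a mismatch and therefore a failure.
        return actual == wanted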

TIP: Difficulty in forming meaningful expectations about externalParameters can be a sign that the buildType’s level of abstraction is too low. For example, externalParameters that record a list of commands to run is likely impractical to verify because the commands change on every build. Instead, consider a buildType that defines the list of commands in a configuration file in a source repository, then put only the source repository in externalParameters. Such a design is easier to verify because the source repository is constant across builds.

Step 3: (Optional) Check dependencies recursively

Finally, recursively check the resolvedDependencies as available and to the extent desired. Note that SLSA v1.0 does not have any requirements on the completeness or verification of resolvedDependencies. However, one might wish to verify dependencies in order to mitigate dependency threats and protect against threats further up the supply chain. If resolvedDependencies is incomplete, these checks can be done on a best-effort basis.

A Verification Summary Attestation (VSA) can make dependency verification more efficient by recording the result of prior verifications. A trimming heuristic or exception mechanism is almost always necessary when verifying dependencies because there will be transitive dependencies that are SLSA Build L0. (For example, consider the compiler’s compiler’s compiler’s … compiler.)
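
The following snippet shows conceptually how a best-effort recursive check might be structured, using Python and made-up helpers (fetch_provenance, verify_artifact); a real implementation would also consult VSAs and apply its own trimming policy.

    def verify_dependencies(statement: dict, verify_artifact, fetch_provenance, max_depth: int = 3) -> None:
        if max_depth == 0:
            return  # trimming heuristic: stop descending at a fixed depth
        deps = statement["predicate"]["buildDefinition"].get("resolvedDependencies", [])
        for dep in deps:
            digest = dep.get("digest", {}).get("sha256")
            if digest is None:
                continue  # resolvedDependencies may be incomplete; best effort
            dep_statement = fetch_provenance(digest)
            if dep_statement is None:
                continue  # no provenance available; treat as SLSA Build L0
            verify_artifact(digest, dep_statement)
            verify_dependencies(dep_statement, verify_artifact, fetch_provenance, max_depth - 1)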

Forming Expectations

Expectations are known provenance values that indicate the corresponding artifact is authentic. For example, a package ecosystem may maintain a mapping between package names and their canonical source repositories. That mapping constitutes a set of expectations.

Possible models for forming expectations include:

  • Trust on first use: Accept the first version of the package as-is. On each version update, compare the old provenance to the new provenance and alert on any differences. This can be augmented by having rules about what changes are benign, such as a parameter known to be safe or a heuristic about safe git branches or tags. A minimal sketch of this model appears after this list.

  • Defined by producer: The package producer tells the verifier what their expectations ought to be. In this model, the verifier SHOULD provide an authenticated communication mechanism for the producer to set the package’s expectations, and there SHOULD be some protection against an adversary unilaterally modifying them. For example, modifications might require two-party control, or consumers might have to accept each policy change (another form of trust on first use).

  • Defined in source: The source repository tells the verifier what their expectations ought to be. In this model, the package name is immutably bound to a source repository and all other external parameters are defined in the source repository. This is how the Go ecosystem works, for example, since the package name is the source repository location.
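
The following snippet shows conceptually how the trust-on-first-use model might be implemented, using Python; the expectations_store and the choice of summarized fields are assumptions for illustration.

    def tofu_check(package: str, statement: dict, expectations_store: dict) -> list:
        """Return the provenance fields that changed since the last accepted version."""
        build_def = statement["predicate"]["buildDefinition"]
        summary = {
            "builderId": statement["predicate"]["runDetails"]["builder"]["id"],
            "buildType": build_def["buildType"],
            "externalParameters": build_def["externalParameters"],
        }
        previous = expectations_store.get(package)
        if previous is None:
            expectations_store[package] = summary   # first use: accept as-is
            return []
        # Subsequent versions: alert on any difference from stored expectations.
        return [field for field in summary if summary[field] != previous.get(field)]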

It is important to note that expectations are tied to a package name, whereas provenance is tied to an artifact. Different versions of the same package name will likely have different artifacts and therefore different provenance. Similarly, an artifact might have different names in different package ecosystems but use the same provenance file.

Architecture options

There are several options (non-mutually exclusive) for where provenance verification can happen: the package ecosystem at upload time, the consumers at download time, or via a continuous monitoring system. Each option comes with its own set of considerations, but all are valid and at least one SHOULD be used.

More than one component can verify provenance. For example, even if a package ecosystem verifies provenance, consumers who get artifacts from that package ecosystem might wish to verify provenance themselves for defense in depth. They can do so using either client-side verification tooling or by polling a monitor.

Package ecosystem

A package ecosystem is a set of rules and conventions governing how packages are distributed. Every package artifact has an ecosystem, whether it is formal or ad-hoc. Some ecosystems are formal, such as language distribution (e.g. Python/PyPA), operating system distribution (e.g. Debian/Apt), or artifact distribution (e.g. OCI). Other ecosystems are informal, such as a convention used within a company. Even ad-hoc distribution of software, such as through a link on a website, is considered an “ecosystem”. For more background, see Package Model.

During package upload, a package ecosystem can ensure that the artifact’s provenance matches the expected values for that package name’s provenance before accepting it into the package registry. This option is RECOMMENDED whenever possible because doing so benefits all of the package ecosystem’s clients.

The package ecosystem is responsible for making its expectations available to consumers, reliably redistributing artifacts and provenance, and providing tools to enable safe artifact consumption (e.g. whether an artifact meets expectations).

Consumer

A package artifact’s consumer is the organization or individual that uses the package artifact.

Consumers can form their own expectations for artifacts or use the default expectations provided by the package producer and/or package ecosystem. When forming their own expectations, the consumer uses client-side verification tooling to ensure that the artifact’s provenance matches their expectations for that package before use (e.g. during installation or deployment). Client-side verification tooling can be either standalone, such as slsa-verifier, or built into the package ecosystem client.

Monitor

A monitor is a service that verifies provenance for a set of packages and publishes the result of that verification. The set of packages verified by a monitor is arbitrary, though it MAY mimic the set of packages published through one or more package ecosystems. The monitor SHOULD publish its expectations for all the packages it verifies.

Consumers can continuously poll a monitor to detect artifacts that do not meet the monitor’s expectations. Detecting artifacts that fail verification is of limited benefit unless a human or automated system takes action in response to the failed verification.

Threats & mitigations

What follows is a comprehensive technical analysis of supply chain threats and their corresponding mitigations in SLSA. For an introduction to the supply chain threats that SLSA is aiming to protect against, see Supply chain threats.

The examples in this section are meant to:

  • Explain the reasons for each of the SLSA requirements.
  • Increase confidence that the SLSA requirements are sufficient to achieve the desired level of integrity protection.
  • Help implementers better understand what they are protecting against so that they can better design and implement controls.

Overview

Supply Chain Threats

This threat model covers the software supply chain, meaning the process by which software is produced and consumed. We describe and cluster threats based on where in the software development pipeline those threats occur, labeled (A) through (I). This is useful because priorities and mitigations mostly cluster along those same lines. Keep in mind that dependencies are highly recursive, so each dependency has its own threats (A) through (I), and the same for their dependencies, and so on. For a more detailed explanation of the supply chain model, see Terminology.

Importantly, producers and consumers face aggregate risk across all of the software they produce and consume, respectively. Many organizations produce and/or consume thousands of software packages, both first- and third-party, and it is not practical to rely on every individual team in the organization to do the right thing. For this reason, SLSA prioritizes mitigations that can be broadly adopted in an automated fashion, minimizing the chance of mistakes.

Source threats

A source integrity threat is a potential for an adversary to introduce a change to the source code that does not reflect the intent of the software producer. This includes modification of the source data at rest as well as insider threats, when an authorized individual introduces an unauthorized change.

(A) Producer

The producer of the software intentionally produces code that harms the consumer, or the producer otherwise uses practices that are not deserving of the consumer’s trust.

Software producer intentionally creates a malicious revision of the source

Threat: A producer intentionally creates a malicious revision with the intent of harming their consumers.

Mitigation: This kind of attack cannot be directly mitigated through SLSA controls. Tools like the OSSF Scorecard can help to quantify the risk of consuming artifacts from specific organizations, but do not fully remove it. Trustworthiness scales with transparency, and consumers SHOULD push their vendors to follow transparency best-practices. When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third-party.

Example: The XZ Utils backdoor and the intentional sabotage of colors.js and faker.js are clear examples of an organization with an otherwise good reputation intentionally producing malicious code.

(B) Modifying the source

An adversary without any special administrator privileges attempts to introduce a change contrary to the declared intent of the source by following the producer’s official source control process.

Threats in this category can be mitigated by following source control management best practices.

(B1) Submit change without review
Directly submit without review

Threat: Submit code to the source repository without another person reviewing.

Mitigation: The producer can require pre-approval for all changes.

Example: Adversary directly pushes a change to a git repo’s main branch. Solution: The producer can configure branch protection rules on the main branch. A best practice would be to require pre-approval of all changes via a change management tool (such as a GitHub pull request).

Single actor controls multiple accounts

Threat: An actor is able to control multiple accounts and effectively approve their own code changes.

Mitigation: The producer must ensure that no actor is able to control or influence multiple accounts with review privileges.

Example: Adversary creates a pull request using a secondary account and approves it using their primary account.

Solution: The producer must require strongly authenticated user accounts and ensure that all accounts map to unique persons. A common vector for this attack is to take over a robot account with permission to contribute code. Control of both the robot account and an actor’s own legitimate account is enough to exploit this vulnerability.

Use a robot account to submit change

Threat: Exploit a robot account that has the ability to submit changes without two-person review.

Mitigation: All changes require two-person review, even changes authored by robots.

Example: A file within the source repository is automatically generated by a robot, which is allowed to submit without review. Adversary compromises the robot and submits a malicious change. Solution: Require review for such changes.

Abuse of rule exceptions

Threat: Rule exceptions provide a vector for abuse.

Mitigation: Remove rule exceptions.

Example: The intent of a producer is to require two-person review on “all changes except for documentation changes,” defined as those only modifying .md files. Adversary submits a malicious executable named evil.md and a code review is not required due to the exception. Technically the defined policy was followed, and the resulting malicious revision meets all defined rules, even though it violates the producer’s intent. Solution: Do not allow such exceptions.

Highly-permissioned actor bypasses or disables controls

Threat: Trusted actor with “admin” privileges in a repository submits code by disabling existing controls.

Mitigation: All actors must be subject to the same controls, whether or not they have administrator privileges. Changes to the controls themselves should require their own review process.

Example 1: A GitHub repository-level admin pushes a change without review, even though GitHub branch protection is enabled. Solution: The producer can modify the rule to disallow bypass by administrators, or move the rule to an organization-level ruleset.

Example 2: GitHub repository-level admin removes a branch requirement, pushes their change, then re-enables the requirement to cover their tracks. Solution: The producer can use higher-permission-level rulesets (such as organization-level) to prevent repository-level tampering.

(B2) Evade change management process
Modify code after review

Threat: Modify the code after it has been reviewed but before submission.

Mitigation: Source control platform invalidates approvals whenever the proposed change is modified.

Example: Source repository requires two-person review on all changes. Adversary sends an initial “good” pull request to a peer, who approves it. Adversary then modifies their proposal to contain “bad” code.

Solution: Configure the code review rules to require review of the most recent revision before submission. Resetting or “dismissing” votes on a PR introduces substantial friction to the process. Depending on the security posture of the source, the producer has a few choices to deal with this situation. They may:

  • Accept this risk. Code review is already expensive and the pros outweigh the cons here.
  • Dismiss reviews when new changes are added. This is a common outcome when expert code review is required.
  • Leave previous reviews intact, but require that “at least the last revision must be reviewed by someone.”
Submit a change that is unreviewable

Threat: Adversary crafts a change that a human cannot meaningfully review and that looks benign but is actually malicious.

Mitigation: Code review system ensures that all reviews are informed and meaningful.

Example: A proposed change updates a file, but the reviewer is only presented with a diff of the cryptographic hash, not of the file contents. Thus, the reviewer does not have enough context to provide a meaningful review. Solution: the code review system should present the reviewer with a content diff or some other information to make an informed decision.

Copy a reviewed change to another context

Threat: Get a change reviewed in one context and then transfer it to a different context.

Mitigation: Approvals are context-specific.

Example: MyPackage’s source repository requires two-person review. Adversary forks the repo, submits a change in the fork with review from a colluding colleague (who is not trusted by MyPackage), then merges the change back into the upstream repo. Solution: The merge should still require review, even though the fork was reviewed.

Commit graph attacks

Threat: Request review for a series of two commits, X and Y, where X is bad and Y is good. Reviewer thinks they are approving only the final Y state but they are also implicitly approving X.

Mitigation: The producer declares that only the final delta is considered approved. Intermediate revisions don’t count as being reviewed and are not added to the protected context (such as the main branch).

Example: Adversary sends a pull request containing malicious commit X and benign commit Y that undoes X. The combined diff of X + Y contains zero lines of changed code, so the reviewer may not notice that X is malicious unless they review each commit in the request. If X is allowed to become reachable from the protected branch, its malicious content may become available in otherwise secured contexts, such as developer machines, leaving them vulnerable to exploit.

Solution: The code review tool does not merge contributor-created commits, and instead merges a single new commit representing only the reviewed “changes from all commits.”

(B3) Render code review ineffective
Collude with another trusted person

Threat: Two trusted persons collude to author and approve a bad change.

Mitigation: The producer can increase the friction of their policies to reduce risk, such as by requiring additional or more senior reviewers. The goal of the policy is to ensure that approved changes match the producer’s intent for the source. Increasing friction makes the policy harder to circumvent, but has diminishing returns. Ultimately the producer will need to land on a balanced risk profile that makes sense for their security posture.

Trick reviewer into approving bad code

Threat: Construct a change that looks benign but is actually malicious, a.k.a. “bugdoor.”

Mitigation: This threat is not currently addressed by SLSA.

Reviewer blindly approves changes

Threat: Reviewer approves changes without actually reviewing, a.k.a. “rubber stamping.”

Mitigation: This threat is not currently addressed by SLSA.

(C) Source code management

An adversary introduces a change to the source control repository through an administrative interface, or through a compromise of the underlying infrastructure.

Platform admin abuses privileges

Threat: Platform administrator abuses their privileges to bypass controls or to push a malicious version of the software.

Mitigation: The source platform must have controls in place to prevent and detect abusive behavior from administrators (e.g. two-person approvals for changes to the infrastructure, audit logging). A future Platform Operations Track may provide more specific guidance on how to secure the underlying platform.

Example 1: GitHostingService employee uses an internal tool to push changes to the MyPackage source repo.

Example 2: GitHostingService employee uses an internal tool to push a malicious version of the server to serve malicious versions of MyPackage sources to a specific CI/CD client but the regular version to everyone else, in order to hide tracks.

Example 3: GitHostingService employee uses an internal tool to push a malicious version of the server that includes a backdoor allowing specific users to bypass branch protections. Adversary then uses this backdoor to submit a change to MyPackage without review.

Exploit vulnerability in SCM

Threat: Exploit a vulnerability in the implementation of the source code management system to bypass controls.

Mitigation: This threat is not currently addressed by SLSA.

Build threats

A build integrity threat is a potential for an adversary to introduce behavior to an artifact without changing its source code, or to build from a source, dependency, and/or process that is not intended by the software producer.

The SLSA Build track mitigates these threats when the consumer verifies artifacts against expectations, confirming that the artifact they received was built in the expected manner.

(D) External build parameters

An adversary builds from a version of the source code that does not match the official source control repository, or changes the build parameters to inject behavior that was not intended by the official source.

The mitigation here is to compare the provenance against expectations for the package, which depends on SLSA Build L1 for provenance. (Threats against the provenance itself are covered by (E) and (F).)

Build from unofficial fork of code (expectations)

Threat: Build using the expected CI/CD process but from an unofficial fork of the code that may contain unauthorized changes.

Mitigation: Verifier requires the provenance’s source location to match an expected value.

Example: MyPackage is supposed to be built from GitHub repo good/my-package. Instead, it is built from evilfork/my-package. Solution: Verifier rejects because the source location does not match.

Build from unofficial branch or tag (expectations)

Threat: Build using the expected CI/CD process and source location, but checking out an “experimental” branch or similar that may contain code not intended for release.

Mitigation: Verifier requires that the provenance’s source branch/tag matches an expected value, or that the source revision is reachable from an expected branch.

Example: MyPackage’s releases are tagged from the main branch, which has branch protections. Adversary builds from the unprotected experimental branch containing unofficial changes. Solution: Verifier rejects because the source revision is not reachable from main.

Build from unofficial build steps (expectations)

Threat: Build the package using the proper CI/CD platform but with unofficial build steps.

Mitigation: Verifier requires that the provenance’s build configuration source matches an expected value.

Example: MyPackage is expected to be built by Google Cloud Build using the build steps defined in the source’s cloudbuild.yaml file. Adversary builds with Google Cloud Build, but using custom build steps provided over RPC. Solution: Verifier rejects because the build steps did not come from the expected source.

Build from unofficial parameters (expectations)

Threat: Build using the expected CI/CD process, source location, and branch/tag, but using a parameter that injects unofficial behavior.

Mitigation: Verifier requires that the provenance’s external parameters all match expected values.

Example 1: MyPackage is supposed to be built from the release.yml workflow. Adversary builds from the debug.yml workflow. Solution: Verifier rejects because the workflow parameter does not match the expected value.

Example 2: MyPackage’s GitHub Actions Workflow uses github.event.inputs to allow users to specify custom compiler flags per invocation. Adversary sets a compiler flag that overrides a macro to inject malicious behavior into the output binary. Solution: Verifier rejects because the inputs parameter was not expected.

Build from a version of code modified after checkout (expectations)

Threat: Build from a version of the code that includes modifications after checkout.

Mitigation: Build platform pulls directly from the source repository and accurately records the source location in provenance.

Example: Adversary fetches from MyPackage’s source repo, makes a local commit, then requests a build from that local commit. Builder records the fact that it did not pull from the official source repo. Solution: Verifier rejects because the source repo does not match the expected value.
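
To make the comparison concrete, the following Python sketch illustrates how a verifier might check a provenance statement against recorded expectations for this category. It is illustrative only: the field paths follow the SLSA v1.0 provenance predicate described later in this document, while the expectation values and function name are hypothetical.

# Hypothetical expectations for MyPackage; values are illustrative.
EXPECTATIONS = {
    "source_repo": "git+https://github.com/good/my-package",
    "source_ref": "refs/heads/main",
    "allowed_external_parameters": {"repository", "ref"},
}

def check_external_parameters(statement: dict) -> None:
    """Raise if the provenance's external parameters do not match expectations."""
    params = statement["predicate"]["buildDefinition"]["externalParameters"]

    # Build from unofficial fork: the source location must match exactly.
    if params.get("repository") != EXPECTATIONS["source_repo"]:
        raise ValueError("unexpected source repository")

    # Build from unofficial branch or tag: the ref must match (or be
    # reachable from) an expected branch.
    if params.get("ref") != EXPECTATIONS["source_ref"]:
        raise ValueError("unexpected branch or tag")

    # Build from unofficial parameters: anything not explicitly expected
    # is rejected rather than ignored.
    unexpected = set(params) - EXPECTATIONS["allowed_external_parameters"]
    if unexpected:
        raise ValueError(f"unexpected external parameters: {sorted(unexpected)}")

A real verifier would also confirm the builder identity and the provenance signature, which are covered under threats (E) and (F).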

(E) Build process

An adversary introduces an unauthorized change to a build output through tampering of the build process; or introduces false information into the provenance.

These threats are directly addressed by the SLSA Build track.

Forge values of the provenance (other than output digest) (Build L2+)

Threat: Generate false provenance and get the trusted control plane to sign it.

Mitigation: At Build L2+, the trusted control plane generates all information that goes in the provenance, except (optionally) the output artifact hash. At Build L3+, this is hardened to prevent compromise even by determined adversaries.

Example 1 (Build L2): Provenance is generated on the build worker, which the adversary has control over. Adversary uses a malicious process to get the build platform to claim that it was built from source repo good/my-package when it was really built from evil/my-package. Solution: Builder generates and signs the provenance in the trusted control plane; the worker reports the output artifacts but otherwise has no influence over the provenance.

Example 2 (Build L3): Provenance is generated in the trusted control plane, but workers can break out of the container to access the signing material. Solution: Builder is hardened to provide strong isolation against tenant projects.

Forge output digest of the provenance (n/a)

Threat: The tenant-controlled build process sets output artifact digest (subject in SLSA Provenance) without the trusted control plane verifying that such an artifact was actually produced.

Mitigation: None; this is not a problem. Any build claiming to produce a given artifact could have actually produced it by copying it verbatim from input to output.2 (Reminder: Provenance is only a claim that a particular artifact was built, not that it was published to a particular registry.)

Example: A legitimate MyPackage artifact has digest abcdef and is built from source repo good/my-package. A malicious build from source repo evil/my-package claims that it built artifact abcdef when it did not. Solution: Verifier rejects because the source location does not match; the forged digest is irrelevant.

Compromise project owner (Build L2+)

Threat: An adversary gains owner permissions for the artifact’s build project.

Mitigation: The build project owner must not have the ability to influence the build process or provenance generation.

Example: MyPackage is built on Awesome Builder under the project “mypackage”. Adversary is an administrator of the “mypackage” project. Awesome Builder allows administrators to debug build machines via SSH. An adversary uses this feature to alter a build in progress.

Compromise other build (Build L3)

Threat: Perform a malicious build that alters the behavior of a benign build running in a parallel or subsequent environment.

Mitigation: Builds are isolated from one another, with no way for one to affect the other or persist changes.

Example 1: A build platform runs all builds for project MyPackage on the same machine as the same Linux user. An adversary starts a malicious build that listens for another build and swaps out source files, then starts a benign build. The benign build uses the malicious build’s source files, but its provenance says it used benign source files. Solution: The build platform changes architecture to isolate each build in a separate VM or similar.

Example 2: A build platform uses the same machine for subsequent builds. An adversary first runs a build that replaces the make binary with a malicious version, then subsequently runs an otherwise benign build. Solution: The builder changes architecture to start each build with a clean machine image.

Steal cryptographic secrets (Build L3)

Threat: Use or exfiltrate the provenance signing key or some other cryptographic secret that should only be available to the build platform.

Mitigation: Builds are isolated from the trusted build platform control plane, and only the control plane has access to cryptographic secrets.

Example: Provenance is signed on the build worker, which the adversary has control over. Adversary uses a malicious process that generates false provenance and signs it using the provenance signing key. Solution: Builder generates and signs provenance in the trusted control plane; the worker has no access to the key.

Poison the build cache (Build L3)

Threat: Add a malicious artifact to a build cache that is later picked up by a benign build process.

Mitigation: Build caches must be isolated between builds to prevent such cache poisoning attacks.

Example: Build platform uses a build cache across builds, keyed by the hash of the source file. Adversary runs a malicious build that creates a “poisoned” cache entry with a falsified key, meaning that the value wasn’t really produced from that source. A subsequent build then picks up that poisoned cache entry.

Compromise build platform admin (verification)

Threat: An adversary gains admin permissions for the artifact’s build platform.

Mitigation: The build platform must have controls in place to prevent and detect abusive behavior from administrators (e.g. two-person approvals, audit logging).

Example: MyPackage is built on Awesome Builder. Awesome Builder allows engineers on-call to SSH into build machines to debug production issues. An adversary uses this access to modify a build in progress. Solution: Consumers do not accept provenance from the build platform unless they trust sufficient controls are in place to prevent abusing admin privileges.

(F) Artifact publication

An adversary uploads a package artifact that does not reflect the intent of the package’s official source control repository.

This is the most direct threat because it is the easiest to pull off. If there are no mitigations for this threat, then (D) and (E) are often indistinguishable from this threat.

Build with untrusted CI/CD (expectations)

Threat: Build using an unofficial CI/CD pipeline that does not build in the correct way.

Mitigation: Verifier requires provenance showing that the builder matched an expected value.

Example: MyPackage is expected to be built on Google Cloud Build, which is trusted up to Build L3. Adversary builds on SomeOtherBuildPlatform, which is only trusted up to Build L2, and then exploits SomeOtherBuildPlatform to inject malicious behavior. Solution: Verifier rejects because builder is not as expected.

Upload package without provenance (Build L1)

Threat: Upload a package without provenance.

Mitigation: Verifier requires provenance before accepting the package.

Example: Adversary uploads a malicious version of MyPackage to the package repository without provenance. Solution: Verifier rejects because provenance is missing.

Tamper with artifact after CI/CD (Build L1)

Threat: Take a benign version of the package, modify it in some way, then re-upload it using the original provenance.

Mitigation: Verifier checks that the provenance’s subject matches the hash of the package.

Example: Adversary performs a proper build, modifies the artifact, then uploads the modified version of the package to the repository along with the provenance. Solution: Verifier rejects because the hash of the artifact does not match the subject found within the provenance.
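
As a non-normative sketch of this check, a verifier can recompute the artifact’s digest and require that it appear among the provenance subjects. The snippet below assumes the subjects carry sha256 digests and that the statement has already been parsed from its signed envelope.

import hashlib

def artifact_matches_subject(artifact_path: str, statement: dict) -> bool:
    """True if the artifact's SHA-256 digest appears among the provenance subjects."""
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return any(
        subject.get("digest", {}).get("sha256") == digest
        for subject in statement.get("subject", [])
    )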

Tamper with provenance (Build L2)

Threat: Perform a build that would not meet expectations, then modify the provenance to make the expectations checks pass.

Mitigation: Verifier only accepts provenance with a valid cryptographic signature or equivalent proving that the provenance came from an acceptable builder.

Example: MyPackage is expected to be built by GitHub Actions from the good/my-package repo. Adversary builds with GitHub Actions from the evil/my-package repo and then modifies the provenance so that the source looks like it came from good/my-package. Solution: Verifier rejects because the cryptographic signature is no longer valid.

(G) Distribution channel

An adversary modifies the package on the package registry using an administrative interface or through a compromise of the infrastructure, including modification of the package in transit to the consumer.

The distribution channel threats and mitigations look very similar to the Artifact Publication (F) threats and mitigations with the main difference being that these threats are mitigated by having the consumer perform verification.

The consumer’s actions may be simplified if (F) produces a VSA. In this case the consumer may replace provenance verification with VSA verification.

Build with untrusted CI/CD (expectations)

Threat: Replace the package with one built using an unofficial CI/CD pipeline that does not build in the correct way.

Mitigation: Verifier requires provenance showing that the builder matched an expected value or a VSA for corresponding resourceUri.

Example: MyPackage is expected to be built on Google Cloud Build, which is trusted up to Build L3. Adversary builds on SomeOtherBuildPlatform, which is only trusted up to Build L2, and then exploits SomeOtherBuildPlatform to inject malicious behavior. Adversary then replaces the original package within the repository with the malicious package. Solution: Verifier rejects because builder is not as expected.

Issue VSA from untrusted intermediary (expectations)

Threat: Have an unofficial intermediary issue a VSA for a malicious package.

Mitigation: Verifier requires VSAs to be issued by a trusted intermediary.

Example: Verifier expects VSAs to be issued by TheRepository. Adversary builds a malicious package and then issues a VSA of their own for the malicious package. Solution: Verifier rejects because they only accept VSAs from TheRepository, which the adversary cannot forge without the corresponding signing key.

Upload package without provenance or VSA (Build L1)

Threat: Replace the original package with a malicious one without provenance.

Mitigation: Verifier requires provenance or a VSA before accepting the package.

Example: Adversary replaces MyPackage with a malicious version of MyPackage on the package repository and deletes existing provenance. Solution: Verifier rejects because provenance is missing.

Tamper with artifact after upload (Build L1)

Threat: Take a benign version of the package, modify it in some way, then replace it while retaining the original provenance or VSA.

Mitigation: Verifier checks that the provenance or VSA’s subject matches the hash of the package.

Example: Adversary performs a proper build, modifies the artifact, then replaces the package in the repository with the modified version while retaining the original provenance. Solution: Verifier rejects because the hash of the artifact does not match the subject found within the provenance.

Tamper with provenance or VSA (Build L2)

Threat: Perform a build that would not meet expectations, then modify the provenance or VSA to make the expectations checks pass.

Mitigation: Verifier only accepts provenance or VSA with a valid cryptographic signature or equivalent proving that the provenance came from an acceptable builder or the VSA came from an expected verifier.

Example 1: MyPackage is expected to be built by GitHub Actions from the good/my-package repo. Adversary builds with GitHub Actions from the evil/my-package repo and then modifies the provenance so that the source looks like it came from good/my-package. Solution: Verifier rejects because the cryptographic signature is no longer valid.

Example 2: Verifier expects VSAs to be issued by TheRepository. Adversary builds a malicious package and then modifies the original VSA’s subject field to match the digest of the malicious package. Solution: Verifier rejects because the cryptographic signature is no longer valid.

Example 3: Adversary uploads a malicious package to repo/evil-package, getting a valid VSA for repo/evil-package. Adversary then replaces repo/my-package and its VSA with repo/evil-package and its VSA. Solution: Verifier rejects because the VSA resourceUri field lists repo/evil-package and not the expected repo/my-package.
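
The following Python sketch shows how a consumer might apply these distribution-channel checks when a VSA is available. It assumes the VSA’s signature has already been verified against a key for the expected verifier; the constants are illustrative rather than defined by SLSA.

TRUSTED_VERIFIER_ID = "https://therepository.example/verifier"  # assumption
EXPECTED_RESOURCE_URI = "pkg:example/my-package"                # assumption

def check_vsa(vsa: dict, artifact_sha256: str) -> None:
    """Raise if the (already signature-verified) VSA does not cover this artifact."""
    predicate = vsa["predicate"]

    # Issue VSA from untrusted intermediary: only accept the expected verifier.
    if predicate["verifier"]["id"] != TRUSTED_VERIFIER_ID:
        raise ValueError("VSA issued by an untrusted intermediary")

    # Example 3 above: the VSA must describe the expected resource, not another package.
    if predicate["resourceUri"] != EXPECTED_RESOURCE_URI:
        raise ValueError("VSA covers a different resource")

    if predicate["verificationResult"] != "PASSED":
        raise ValueError("verification result is not PASSED")

    # Tamper with artifact after upload: the subject digest must match the artifact.
    if not any(s.get("digest", {}).get("sha256") == artifact_sha256
               for s in vsa.get("subject", [])):
        raise ValueError("artifact digest does not match the VSA subject")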

Usage threats

A usage threat is a potential for an adversary to exploit behavior of the consumer.

(H) Package selection

The consumer requests a package that it did not intend.

Dependency confusion

Threat: Register a package name in a public registry that shadows a name used on the victim’s internal registry, and wait for a misconfigured victim to fetch from the public registry instead of the internal one.

TODO: fill out the rest of this subsection

Typosquatting

Threat: Register a package name that is similar looking to a popular package and get users to use your malicious package instead of the benign one.

Mitigation: This threat is not currently addressed by SLSA. That said, the requirement to make the source available can be a mild deterrent, can aid investigation or ad-hoc analysis, and can complement source-based typosquatting solutions.

(I) Usage

The consumer uses a package in an unsafe manner.

Improper usage

Threat: The software can be used in an insecure manner, allowing an adversary to compromise the consumer.

Mitigation: This threat is not addressed by SLSA, but may be addressed by efforts like Secure by Design.

Dependency threats

A dependency threat is a potential for an adversary to introduce unintended behavior in one artifact by compromising some other artifact that the former depends on at build time. (Runtime dependencies are excluded from the model, as noted below.)

Unlike other threat categories, dependency threats develop recursively through the supply chain and can only be exploited indirectly. For example, if application A includes library B as part of its build process, then a build or source threat to B is also a dependency threat to A. Furthermore, if library B uses build tool C, then a source or build threat to C is also a dependency threat to both A and B.

This version of SLSA does not explicitly address dependency threats, but we expect that a future version will. In the meantime, you can apply SLSA recursively to your dependencies in order to reduce the risk of dependency threats.

Build dependency

An adversary compromises the target artifact through one of its build dependencies. Any artifact that is present in the build environment and has the ability to influence the output is considered a build dependency.

Include a vulnerable dependency (library, base image, bundled file, etc.)

Threat: Statically link, bundle, or otherwise include an artifact that is compromised or has some vulnerability, causing the output artifact to have the same vulnerability.

Example: The C++ program MyPackage statically links libDep at build time. A contributor accidentally introduces a security vulnerability into libDep. The next time MyPackage is built, it picks up and includes the vulnerable version of libDep, resulting in MyPackage also having the security vulnerability.

Mitigation: TODO

Use a compromised build tool (compiler, utility, interpreter, OS package, etc.)

Threat: Use a compromised tool or other software artifact during the build process, which alters the build process and injects unintended behavior into the output artifact.

Example: MyPackage is a tarball containing an ELF executable, created by running /usr/bin/tar during its build process. An adversary compromises the tar OS package such that /usr/bin/tar injects a backdoor into every ELF executable it writes. The next time MyPackage is built, the build picks up the vulnerable tar package, which injects the backdoor into the resulting MyPackage artifact.

Mitigation: TODO

Reminder: dependencies that look like runtime dependencies actually become build dependencies if they get loaded at build time.

Use a compromised runtime dependency during the build (for tests, dynamic linking, etc.)

Threat: During the build process, use a compromised runtime dependency (such as during testing or dynamic linking), which alters the build process and injects unwanted behavior into the output.

NOTE: This is technically the same case as Use a compromised build tool. We call it out to remind the reader that runtime dependencies can become build dependencies if they are loaded during the build.

Example: MyPackage has a runtime dependency on package Dep, meaning that Dep is not included in MyPackage but required to be installed on the user’s machine at the time MyPackage is run. However, Dep is also loaded during the build process of MyPackage as part of a test. An adversary compromises Dep such that, when run during a build, it injects a backdoor into the output artifact. The next time MyPackage is built, it picks up and loads Dep during the build process. The malicious code then injects the backdoor into the new MyPackage artifact.

Mitigation: In addition to all the mitigations for build tools, you can often avoid runtime dependencies becoming build dependencies by isolating tests to a separate environment that does not have write access to the output artifact.

The following threats are related to “dependencies” but are not modeled as “dependency threats”.

Use a compromised dependency at runtime (modeled separately)

Threat: Load a compromised artifact at runtime, thereby compromising the user or environment where the software ran.

Example: MyPackage lists package Dep as a runtime dependency. Adversary publishes a compromised version of Dep that runs malicious code on the user’s machine when Dep is loaded at runtime. An end user installs MyPackage, which in turn installs the compromised version of Dep. When the user runs MyPackage, it loads and executes the malicious code from Dep.

Mitigation: N/A - This threat is not currently addressed by SLSA. SLSA’s threat model does not explicitly model runtime dependencies. Instead, each runtime dependency is considered a distinct artifact with its own threats.

Availability threats

An availability threat is a potential for an adversary to prevent someone from reading a source and its associated change history, or from building a package.

SLSA v1.0 does not address availability threats, though future versions might.

(A)(B) Delete the code

Threat: Perform a build from a particular source revision and then delete that revision or cause it to get garbage collected, preventing anyone from inspecting the code.

Mitigation: Some system retains the revision and its version control history, making it available for inspection indefinitely. Users cannot delete the revision except as part of a transparent legal or privacy process.

Example: An adversary submits malicious code to the MyPackage GitHub repo, builds from that revision, then does a force push to erase that revision from history (or requests that GitHub delete the repo.) This would make the revision unavailable for inspection. Solution: Verifier rejects the package because it lacks a positive attestation showing that some system, such as GitHub, ensured retention and availability of the source code.

A dependency becomes temporarily or permanently unavailable to the build process

Threat: Unable to perform a build with the intended dependencies.

Mitigation: This threat is not currently addressed by SLSA. That said, some solutions to support hermetic and reproducible builds may also reduce the impact of this threat.

De-list artifact

Threat: The package registry stops serving the artifact.

Mitigation: N/A - This threat is not currently addressed by SLSA.

De-list provenance

Threat: The package registry stops serving the provenance.

Mitigation: N/A - This threat is not currently addressed by SLSA.

Verification threats

Threats that can compromise the ability to prevent or detect the supply chain security threats above.

Tamper with recorded expectations

Threat: Modify the verifier’s recorded expectations, causing the verifier to accept an unofficial package artifact.

Mitigation: Changes to recorded expectations require some form of authorization, such as two-party review.

Example: The package ecosystem records its expectations for a given package name in a configuration file that is modifiable by that package’s producer. The configuration for MyPackage expects the source repository to be good/my-package. The adversary modifies the configuration to also accept evil/my-package, and then builds from that repository and uploads a malicious version of the package. Solution: Changes to the recorded expectations require two-party review.

Forge change metadata

Threat: Forge the change metadata to alter attribution, timestamp, or discoverability of a change.

Mitigation: Source control platform strongly authenticates actor identity, timestamp, and parent revisions.

Example: Adversary submits a git commit with a falsified author and timestamp, and then rewrites history with a non-fast-forward update to make it appear to have been made long ago. Solution: Consumer detects this by seeing that such changes are not strongly authenticated and thus not trustworthy.

Exploit cryptographic hash collisions

Threat: Exploit a cryptographic hash collision weakness to bypass one of the other controls.

Mitigation: Require cryptographically secure hash functions for commit checksums and provenance subjects, such as SHA-256.

Examples: Construct a benign file and a malicious file with the same SHA-1 hash. Get the benign file reviewed and then submit the malicious file. Alternatively, get the benign file reviewed and submitted and then build from the malicious file. Solution: Only accept cryptographic hashes with strong collision resistance.
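
A minimal sketch of this control, assuming the verifier maintains its own allowlist of acceptable digest algorithms (the allowlist below is illustrative, not defined by SLSA):

STRONG_DIGEST_ALGORITHMS = {"sha256", "sha512"}  # illustrative allowlist

def uses_strong_digests(statement: dict) -> bool:
    """True only if every subject digest uses an allowed algorithm."""
    return all(
        algorithm in STRONG_DIGEST_ALGORITHMS
        for subject in statement.get("subject", [])
        for algorithm in subject.get("digest", {})
    )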

SLSA Source Track

Outstanding TODOs

Open issues are tracked with the source-track label in the slsa-framework/slsa repository. Source Track issues are triaged on the SLSA Source Track project board.

Objective

The SLSA source track describes increasing levels of trustworthiness and completeness in a repository revision’s provenance (e.g. how it was generated, who the contributors were, etc…).

The Source track is scoped to revisions of a single repository that is controlled by an organization. That organization determines the intent of the software in the repository and which Source level should apply to it, and administers technical controls to enforce that level.

The primary purpose of the Source track is to enable verification that the creation of a revision followed the expected process. Consumers can examine the various source provenance attestations to determine if all sources used during the build meet their requirements.

Definitions

Term Description
Source An identifiable set of text and binary files and associated metadata. Source is regularly used as input to a build system (see SLSA Build Track).
Organization A collection of people who collectively create the Source. Examples of organizations include open-source projects, a company, or a team within a company. The organization defines the goals and methods of the source.
Version Control System (VCS) Software for tracking and managing changes to source. Git and Subversion are examples of version control systems.
Revision A specific state of the source with an identifier provided by the version control system. As an example, you can identify a git revision by its tree hash.
Source Control System (SCS) A suite of tools and services (self-hosted or SaaS) relied upon by the organization to produce new revisions of the source. The role of the SCS may be fulfilled by a single service (e.g., GitHub / GitLab) or rely on a combination of services (e.g., GitLab with Gerrit code reviews, GitHub with OpenSSF Scorecard, etc).
Source Provenance Information about how a revision came to exist, where it was hosted, when it was generated, what process was used, who the contributors were, and what parent revisions it was based on.
Repository / Repo A uniquely identifiable instance of a VCS. The repository controls access to the Source in the VCS. The objective of a repository is to reflect the intent of the organization that controls it.
Branch A named pointer to a revision. Branches may be modified by authorized actors. Branches may have different security requirements.
Change A set of modifications to the source in a specific context. A change can be proposed and reviewed before being accepted.
Change History A record of the history of revisions that preceded a specific revision.
Push / upload / publish When an actor authenticates to a Repository to add or modify content. Typically makes a new revision reachable from a branch.
Review / approve / vote When an actor authenticates to a change review tool to comment upon, endorse, or reject a source change proposal.

Source Roles

Role Description
Administrator A human who can perform privileged operations on one or more projects. Privileged actions include, but are not limited to, modifying the change history and modifying project- or organization-wide security policies.
Trusted person A human who is authorized by the organization to propose and approve changes to the source.
Trusted robot Automation with an authentic identity that is authorized by the organization to propose and/or approve changes to the source.
Untrusted person A human who has limited access to the project. They MAY be able to read the source. They MAY be able to propose or review changes to the source. They MAY NOT approve changes to the source or perform any privileged actions on the project.
Proposer An actor that proposes (or uploads) a particular change to the source.
Reviewer / Voter / Approver An actor that reviews (or votes on) a particular change to the source.
Merger An actor that applies a change to the source. This actor may be the proposer.

Safe Expunging Process

SCSs MAY allow the organization to expunge (remove) content from a repository and its change history without leaving a public record of the removed content, but MUST only allow these changes in order to meet legal or privacy compliance requirements. Content removed under this process may include files, history, references, or any other metadata stored by the SCS.

Warning

Removing a revision from a repository is similar to deleting a package version from a registry: it’s almost impossible to estimate the amount of downstream supply chain impact. For example, in VCSs like Git, removal of a revision changes the object IDs of all subsequent revisions that were built on top of it, breaking downstream consumers’ ability to refer to source they’ve already integrated into their products.

It may be the case that the specific set of changes targeted by a legal takedown can be expunged in ways that do not impact consumed revisions, which can mitigate these problems.

It is also the case that removing content from a repository won’t necessarily remove it everywhere. The content may still exist in other copies of the repository, either in backups or on developer machines.

Process

An SCS MUST document the Safe Expunging Process and describe how requests and actions are tracked and SHOULD log the fact that content was removed. Different organizations and tech stacks may have different approaches to the problem.

SCSs SHOULD have technical mechanisms in place that require an Administrator plus at least one additional ‘trusted person’ to trigger any expunging (removal) made under this process.

The application of the safe expunging process and the resulting logs MAY be private, both to avoid calling attention to potentially sensitive data (e.g. PII) and to comply with local laws and regulations that may require the change to be kept private to the extent possible. Organizations SHOULD prefer to make logs public if possible.

Levels

Level 1: Version controlled

Summary: The source is stored and managed through a modern version control system.

Intended for: Organizations currently storing source in non-standard ways who want to quickly gain some benefits of SLSA and better integrate with the SLSA ecosystem with minimal impact to their current workflows.

Benefits: Migrating to the appropriate tools is an important first step on the road to operational maturity.

Level 2: Branch History

Summary: Clarifies which branches in a repo are consumable and guarantees that all changes to protected branches are recorded.

Intended for: All organizations of any size producing software of any kind.

Benefits: Allows source consumers to track changes to the software over time and attribute those changes to the people that made them.

Level 3: Authenticatable and Auditable Provenance

Summary: The SCS generates credible, tamper-resistant, and contemporaneous evidence of how a specific revision was created, and provides it to authorized users of the source repository in a documented format.

Intended for: Organizations that want strong guarantees and auditability of their change management processes.

Benefits: Provides authenticatable and auditable information to policy enforcement tools and reduces the risk of tampering within the SCS’s storage systems.

System Requirements

Many examples in this document use the git version control system, but use of git is not a requirement to meet any level on the SLSA source track.

Requirement Description L1 L2 L3
Use modern tools

The organization MUST manage the source using tools specifically designed to manage source code. Tools like git, Perforce, Subversion are great examples. They may be self-hosted or hosted in the cloud using vendors like GitLab, GitHub, Bitbucket, etc.

When self-hosting a solution, local, unauthenticated storage is not acceptable.

Branch protection is not required, nor are there any other constraints on the configuration of the tools.

Canonical location

The source MUST have a location where the “official” revisions are stored and managed.

Revisions are immutable and uniquely identifiable

This requirement ensures that a consumer can determine that the source revision they have is the same as a canonical revision. The SCS MUST provide a deterministic way to identify a particular revision.

Virtually all modern tools provide this guarantee via a combination of the repository ID and revision ID.

Repository IDs

The repository ID is defined by the SCS and MUST be unique in the context of that instance of the SCS.

Revision IDs

When the revision ID is a digest of the content of the revision (as in git) nothing more is needed. When the revision ID is a number or otherwise not a digest, then the SCS MUST document how the immutability of the revision is established. The same revision ID MAY be present in multiple repositories. See also Use cases for non-cryptographic, immutable, digests.

Branches

If the SCS supports multiple branches, the organization MUST indicate which branches are intended for consumption. This may be implied or explicit.

For example, an organization may implicitly declare that the default branch of a repo contains revisions intended for consumption by protecting that branch.

They may also declare that branches named with the prefix refs/heads/releases/* contain revisions held to an even higher standard.

They may also declare all revisions are intended to be consumed “except those reachable only from branches beginning with refs/heads/users/*.” This is a typical setup for teams who leverage code review tools.

Continuity

For all branches intended for consumption, whenever a branch is updated to point to a new revision, that revision MUST document how it relates to the previous revision. Exceptions are allowed via the safe expunging process.

It MUST NOT be possible to rewrite the history of branches intended for consumption. In other words, when updating the branch to point to a new revision, that revision must be a direct descendant of the current revision. In an SCS that hosts a Git repository on systems like GitHub or GitLab, this can be accomplished by enabling branch protection rules that prevent force pushes and branch deletions.

It MUST NOT be possible to delete the entire repository (including all branches) and replace it with different source. Exceptions are allowed via the safe expunging process.
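
For a git-based SCS, the continuity property amounts to accepting only fast-forward updates on branches intended for consumption. The sketch below is an illustration, not a requirement; in practice this is enforced by the SCS’s branch protection rather than by consumers. It tests whether a proposed update is a fast-forward using git merge-base --is-ancestor.

import subprocess

def is_fast_forward(repo_path: str, current_revision: str, proposed_revision: str) -> bool:
    """True if the proposed revision is a descendant of the current branch tip."""
    result = subprocess.run(
        ["git", "-C", repo_path, "merge-base", "--is-ancestor",
         current_revision, proposed_revision],
        check=False,
    )
    return result.returncode == 0  # exit code 0 means "is an ancestor"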

Identity Management

There exists an identity management system or some other means of identifying actors. This system may be a federated authentication system (AAD, Google, Okta, GitHub, etc.) or a custom implementation (gittuf, gpg signatures on commits, etc.). The SCS MUST document how actors are identified for the purposes of attribution.

Activities conducted on the SCS SHOULD be attributed to authenticated identities.

Strong Authentication

User accounts that can modify the source or the project’s configuration must use multi-factor authentication or its equivalent. This strongly authenticated identity MUST be used for the generation of source provenance attestations. The SCS MUST declare which forms of identity it considers to be trustworthy for this purpose. For cloud-based SCSs, this will typically be the identity used to push to a repository.

Other forms of identity MAY be included as informational. Examples include a git commit’s “author” and “committer” fields and a gpg signature’s “user id.” These forms of identity are user-provided and not typically verified by the source provenance attestation issuer.

See source roles.

Source Provenance

Source Provenance consists of attestations that contain information about how a specific revision was created and how it came to exist in its present context (e.g. the branches or tags that point, or pointed, at that revision). These attestations are associated with the revision identifier delivered to consumers and are a statement of fact from the perspective of the SCS.

At Source Level 3, Source Provenance MUST be created contemporaneously with the revision being made available, such that it provides a credible, auditable record of changes.

If a consumer is authorized to access source on a particular branch, they MUST be able to fetch the source attestation documents for revisions in the history of that branch.

It is possible that an SCS can make no claims about a particular revision. For example, this would happen if the revision was created on another SCS, or if the revision was not the result of an accepted change management process.

Enforced change management process

The SCS MUST ensure that all technical controls governing changes to a branch

  1. Are discoverable by authorized users of the repo.
  2. Cannot be bypassed except via the Safe Expunging Process.

For example, this could be accomplished by:

Change management tool requirements

The change management tool MUST be able to authoritatively state that each new revision reachable from the protected branch represents only the changes managed via the process.

Requirement Description L1 L2 L3
Context

The change management tool MUST record the specific code change (a “diff” in git) or instructions to recreate it. In git, this is typically defined by three revision IDs: the tip of the “topic” branch, the tip of the target branch, and the closest shared ancestor of the two (e.g. as determined by git-merge-base).

The change management tool MUST record the “target” context for the change proposal and the previous revision in that context. For example, for the git version control system, the change management tool MUST record the branch name that was updated.

Branches may have differing security postures, and a change can be approved for one context while being unapproved for another.

Verified Timestamps

The change management tool MUST record timestamps for all contributions and review-related activities. User-provided timestamps MUST NOT be used.

Communicating source levels

SLSA source level details are communicated using attestations. These attestations either refer to a source revision itself or provide context needed to evaluate an attestation that does refer to a revision.

There are two broad categories of source attestations within the source track:

  1. Summary attestations: Used to communicate to downstream users what high level security properties a given source revision meets.
  2. Provenance attestations: Provide trustworthy, tamper-proof, metadata with the necessary information to determine what high level security properties a given source revision has.

To provide interoperability and ensure ease of use, it’s essential that the summary attestations are applicable across all Source Control Systems. Due to the significant differences in how SCSs operate and how they may choose to meet the Source Track requirements, it is preferable to allow flexibility in the full attestations. To that end, SLSA leaves provenance attestations undefined and up to the SCSs to determine what works best in their environment.

Summary attestation

Summary attestations are issued by some authority that has sufficient evidence to make the determination of a given revision’s source level. Summary attestations convey properties about the revision as a whole and summarize properties computed over all the changes that contributed to that revision over its history.

The source track issues summary attestations using Verification Summary Attestations (VSAs) as follows:

  1. subject.uri SHOULD be set to a human readable URI of the revision.
  2. subject.digest MUST include the revision identifier (e.g. gitCommit) and MAY include other digests over the contents of the revision (e.g. gitTree, dirHash, etc…). SCSs that do not use cryptographic digests MUST define a canonical type that is used to identify immutable revisions (e.g. svn_revision_id)3.
  3. subject.annotations.source_branches SHOULD be set to a list of branches that pointed to this revision at any point in their history.
    • git branches MUST be fully qualified (e.g. refs/heads/main) to reduce the likelihood of confusing downstream tooling.
  4. resourceUri MUST be set to the URI of the repository, preferably using SPDX Download Location. E.g. git+https://github.com/foo/hello-world.
  5. verifiedLevels MUST include the SLSA source track level the verifier asserts the revision meets. One of SLSA_SOURCE_LEVEL_0, SLSA_SOURCE_LEVEL_1, SLSA_SOURCE_LEVEL_2, SLSA_SOURCE_LEVEL_3. MAY include additional properties as asserted by the verifier. The verifier MUST include only the highest SLSA source level met by the revision.
  6. dependencyLevels MAY be empty as source revisions are typically terminal nodes in a supply chain.

Verifiers MAY issue these attestations based on their understanding of the underlying system (e.g. based on design docs, security reviews, etc…), but at SLSA Source Level 3 MUST use tamper-proof provenance attestations appropriate to their SCS when making the assessment.

The SLSA source track MAY create additional tags to include in verifiedLevels which attest to other properties of a revision (e.g. if it was code reviewed). All SLSA source tags will start with SLSA_SOURCE_.

Populating source_branches

The summary attestation issuer may choose to populate source_branches in any way they wish. Downstream users are expected to be familiar with the method used by the issuer.

Example implementations:

  • Issue a new VSA for each merged Pull Request and add the destination branch to source_branches.
  • Issue a new VSA each time a ‘consumable branch’ is updated to point to a new revision.
Example
"_type": "https://in-toto.io/Statement/v1",
"subject": [{
  "uri": "https://github.com/foo/hello-world/commit/9a04d1ee393b5be2773b1ce204f61fe0fd02366a",
  "digest": {"gitCommit": "9a04d1ee393b5be2773b1ce204f61fe0fd02366a"},
  "annotations": {"source_branches": ["refs/heads/main", "refs/heads/release_1.0"]}
}],

"predicateType": "https://slsa.dev/verification_summary/v1",
"predicate": {
  "verifier": {
    "id": "https://example.com/source_verifier",
  },
  "timeVerified": "1985-04-12T23:20:50.52Z",
  "resourceUri": "git+https://github.com/foo/hello-world",
  "policy": {
    "uri": "https://example.com/slsa_source.policy",
  },
  "verificationResult": "PASSED",
  "verifiedLevels": ["SLSA_SOURCE_LEVEL_3"],
}
How to verify
  • VSAs for source revisions MUST follow the standard method of VSA verification.
  • Users SHOULD check that an allowed branch is listed in subject.annotations.source_branches to ensure the revision is from an appropriate context within the repository.
  • Users SHOULD check that the expected SLSA_SOURCE_LEVEL_ is listed within verifiedLevels.
  • Users MUST ignore any unrecognized values in verifiedLevels.
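
A sketch of these checks in Python, assuming the VSA has already passed the standard VSA verification procedure (signature and verifier identity); the parameter names are illustrative.

def check_source_vsa(vsa: dict, allowed_branch: str, required_level: str) -> None:
    """Raise if the source VSA does not meet the consumer's requirements."""
    subject = vsa["subject"][0]
    branches = subject.get("annotations", {}).get("source_branches", [])
    if allowed_branch not in branches:
        raise ValueError("revision is not associated with an allowed branch")

    levels = vsa["predicate"].get("verifiedLevels", [])
    # Unrecognized values are ignored; only the required source level is checked.
    if required_level not in levels:
        raise ValueError(f"{required_level} was not asserted by the verifier")

# Example usage against the VSA shown above:
# check_source_vsa(vsa, "refs/heads/main", "SLSA_SOURCE_LEVEL_3")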

Provenance attestations

Source provenance attestations provide tamper-proof evidence (ideally signed in-toto attestations) that can be used to determine what SLSA Source Level or other high level properties a given revision meets. This evidence can be used by an authority as the basis for issuing a Summary Attestation.

SCSs may have different methods of operating that necessitate different forms of evidence. E.g. GitHub-based workflows may need different evidence than Gerrit-based workflows, which would both likely be different from workflows that operate over Subversion repositories.

These differences also mean that, depending on the configuration, the issuers of provenance attestations may vary from implementation to implementation, often because the entities with the knowledge to issue them vary. The authority that issues summary attestations MUST understand which entity should issue each provenance attestation type and ensure the full attestations come from the appropriate issuer.

‘Source provenance attestations’ is a generic term used to refer to any type of attestation that provides evidence of the process used to create a revision.

Example source provenance attestations:

  • A TBD attestation which describes the revision’s parents and the actors involved in creating this revision.
  • A “code review” attestation which describes the basics of any code review that took place.
  • An “authentication” attestation which describes how the actors involved in any revision were authenticated.
  • A Vuln Scan attestation which describes the results of a vulnerability scan over the contents of the revision.
  • A Test Results attestation which describes the results of any tests run on the revision.
  • An SPDX attestation which provides a software bill of materials for the revision.
  • A SCAI attestation used to describe which source quality tools were run on the revision.

Provenance

To trace software back to the source and define the moving parts in a complex supply chain, provenance needs to be there from the very beginning. It’s the verifiable information about software artifacts describing where, when and how something was produced. For higher SLSA levels and more resilient integrity guarantees, provenance requirements are stricter and need a deeper, more technical understanding of the predicate.

This document defines the following predicate type within the in-toto attestation framework:

"predicateType": "https://slsa.dev/provenance/v1"

Important: Always use the above string for predicateType rather than what is in the URL bar. The predicateType URI will always resolve to the latest minor version of this specification. See parsing rules for more information.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.

Purpose

Describe how an artifact or set of artifacts was produced so that:

  • Consumers of the provenance can verify that the artifact was built according to expectations.
  • Others can rebuild the artifact, if desired.

This predicate is the RECOMMENDED way to satisfy the SLSA v1.0 provenance requirements.

Model

Provenance is an attestation that a particular build platform produced a set of software artifacts through execution of the buildDefinition.

Build Model

The model is as follows:

  • Each build runs as an independent process on a multi-tenant build platform. The builder.id identifies this platform, representing the transitive closure of all entities that are trusted to faithfully run the build and record the provenance. (Note: The same model can be used for platform-less or single-tenant build platforms.)

    • The build platform implementer SHOULD define a security model for the build platform in order to clearly identify the platform’s boundaries, actors, and interfaces. This model SHOULD then be used to identify the transitive closure of the trusted build platform for the builder.id as well as the trusted control plane.
  • The build process is defined by a parameterized template, identified by buildType. This encapsulates the process that ran, regardless of what platform ran it. Often the build type is specific to the build platform because most build platforms have their own unique interfaces.

  • All top-level, independent inputs are captured by the parameters to the template. There are two types of parameters:

    • externalParameters: the external interface to the build. In SLSA, these values are untrusted; they MUST be included in the provenance and MUST be verified downstream.

    • internalParameters: set internally by the platform. In SLSA, these values are trusted because the platform is trusted; they are OPTIONAL and need not be verified downstream. They MAY be included to enable reproducible builds, debugging, or incident response.

  • All artifacts fetched during initialization or execution of the build process are considered dependencies, including those referenced directly by parameters. The resolvedDependencies captures these dependencies, if known. For example, a build that takes a git repository URI as a parameter might record the specific git commit that the URI resolved to as a dependency.

  • During execution, the build process might communicate with the build platform’s control plane and/or build caches. This communication is not captured directly in the provenance, but is instead implied by builder.id and subject to SLSA Requirements. Such communication SHOULD NOT influence the definition of the build; if it does, it SHOULD go in resolvedDependencies instead.

  • Finally, the build process outputs one or more artifacts, identified by subject.

For concrete examples, see index of build types.

Parsing rules

This predicate follows the in-toto attestation parsing rules. Summary:

  • Consumers MUST ignore unrecognized fields unless otherwise noted.
  • The predicateType URI includes the major version number and will always change whenever there is a backwards incompatible change.
  • Minor version changes are always backwards compatible and “monotonic.” Such changes do not update the predicateType.
  • Unset, null, and empty field values MUST be interpreted equivalently.
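
The sketch below illustrates two of these rules, treating unset, null, and empty values equivalently and pinning the expected predicateType; it is an aid to understanding, not part of the specification.

EXPECTED_PREDICATE_TYPE = "https://slsa.dev/provenance/v1"

def is_unset(value) -> bool:
    """Unset (missing/None), empty string, empty list, and empty object are equivalent."""
    return value is None or value == "" or value == [] or value == {}

def check_predicate_type(statement: dict) -> None:
    # Backwards-incompatible (major) changes alter the predicateType, so an
    # exact match rejects statements this parser does not understand.
    if statement.get("predicateType") != EXPECTED_PREDICATE_TYPE:
        raise ValueError("unsupported predicateType")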

Schema

Summary

NOTE: This summary (in cue) is informative. In the event of a disagreement with the text description, the text is authoritative.

{% include_relative schema/provenance.cue %}
Protocol buffer schema

NOTE: This summary (in protobuf) is informative. In the event of a disagreement with the text description, the text is authoritative.

Link: provenance.proto

NOTE: This protobuf definition prioritises being a human-readable summary of the schema for readers of the specification. A version of the protobuf definition useful for code generation is maintained in the in-toto attestation repository.

{% include_relative schema/provenance.proto %}

Provenance

NOTE: This subsection describes the fields within predicate. For a description of the other top-level fields, such as subject, see Statement.

REQUIRED for SLSA Build L1: buildDefinition, runDetails

Field Type Description
buildDefinition BuildDefinition

The input to the build. The accuracy and completeness are implied by runDetails.builder.id.

runDetails RunDetails

Details specific to this particular execution of the build.

BuildDefinition

REQUIRED for SLSA Build L1: buildType, externalParameters

Field Type Description
buildType string (TypeURI)

Identifies the template for how to perform the build and interpret the parameters and dependencies.

The URI SHOULD resolve to a human-readable specification that includes: overall description of the build type; schema for externalParameters and internalParameters; unambiguous instructions for how to initiate the build given this BuildDefinition, and a complete example. Example: https://slsa-framework.github.io/github-actions-buildtypes/workflow/v1

externalParameters object

The parameters that are under external control, such as those set by a user or tenant of the build platform. They MUST be complete at SLSA Build L3, meaning that there is no additional mechanism for an external party to influence the build. (At lower SLSA Build levels, the completeness MAY be best effort.)

The build platform SHOULD be designed to minimize the size and complexity of externalParameters, in order to reduce fragility and ease verification. Consumers SHOULD have an expectation of what “good” looks like; the more information that they need to check, the harder that task becomes.

Verifiers SHOULD reject unrecognized or unexpected fields within externalParameters.

internalParameters object

The parameters that are under the control of the entity represented by builder.id. The primary intention of this field is for debugging, incident response, and vulnerability management. The values here MAY be necessary for reproducing the build. There is no need to verify these parameters because the build platform is already trusted, and in many cases it is not practical to do so.

resolvedDependencies array (ResourceDescriptor)

Unordered collection of artifacts needed at build time. Completeness is best effort, at least through SLSA Build L3. For example, if the build script fetches and executes “example.com/foo.sh”, which in turn fetches “example.com/bar.tar.gz”, then both “foo.sh” and “bar.tar.gz” SHOULD be listed here.

The BuildDefinition describes all of the inputs to the build. It SHOULD contain all the information necessary and sufficient to initialize the build and begin execution.

The externalParameters and internalParameters are the top-level inputs to the template, meaning inputs not derived from another input. Each is an arbitrary JSON object, though it is RECOMMENDED to keep the structure simple with string values to aid verification. The same field name SHOULD NOT be used for both externalParameters and internalParameters.

The parameters SHOULD only contain the actual values passed in through the interface to the build platform. Metadata about those parameter values, particularly digests of artifacts referenced by those parameters, SHOULD instead go in resolvedDependencies. The documentation for buildType SHOULD explain how to convert from a parameter to the dependency uri. For example:

"externalParameters": {
    "repository": "https://github.com/octocat/hello-world",
    "ref": "refs/heads/main"
},
"resolvedDependencies": [{
    "uri": "git+https://github.com/octocat/hello-world@refs/heads/main",
    "digest": {"gitCommit": "7fd1a60b01f91b314f59955a4e4d4e80d8edf11d"}
}]

Guidelines:

  • Maximize the amount of information that is implicit from the meaning of buildType. In particular, any value that is boilerplate and the same for every build SHOULD be implicit.

  • Reduce parameters by moving configuration to input artifacts whenever possible. For example, instead of passing in compiler flags via an external parameter that has to be verified separately, require the flags to live next to the source code or build configuration so that verifying the latter automatically verifies the compiler flags.

  • In some cases, additional external parameters might exist that do not impact the behavior of the build, such as a deadline or priority. These extra parameters SHOULD be excluded from the provenance after careful analysis that they indeed pose no security impact.

  • If possible, architect the build platform to use this definition as its sole top-level input, in order to guarantee that the information is sufficient to run the build.

  • When build configuration is evaluated client-side before being sent to the server, such as transforming version-controlled YAML into ephemeral JSON, some solution is needed to make verification practical. Consumers need a way to know what configuration is expected and the usual way to do that is to map it back to version control, but that is not possible if the server cannot verify the configuration’s origins. Possible solutions:

    • (RECOMMENDED) Rearchitect the build platform to read configuration directly from version control, recording the server-verified URI in externalParameters and the digest in resolvedDependencies.

    • Record the digest in the provenance [4] and use a separate provenance attestation to link that digest back to version control. In this solution, the client-side evaluation is considered a separate “build” that SHOULD be independently secured using SLSA, though securing it can be difficult since it usually runs on an untrusted workstation.

  • The purpose of resolvedDependencies is to facilitate recursive analysis of the software supply chain. Where practical, it is valuable to record the URI and digest of artifacts that, if compromised, could impact the build. At SLSA Build L3, completeness is considered “best effort”.

RunDetails

REQUIRED for SLSA Build L1: builder

Field | Type | Description
builder Builder

Identifies the build platform that executed the invocation, which is trusted to have correctly performed the operation and populated this provenance.

metadata BuildMetadata

Metadata about this particular execution of the build.

byproducts array (ResourceDescriptor)

Additional artifacts generated during the build that are not considered the “output” of the build but that might be needed during debugging or incident response. For example, this might reference logs generated during the build and/or a digest of the fully evaluated build configuration.

In most cases, this SHOULD NOT contain all intermediate files generated during the build. Instead, this SHOULD only contain files that are likely to be useful later and that cannot be easily reproduced.

Builder

REQUIRED for SLSA Build L1: id

Field | Type | Description
id string (TypeURI)

URI indicating the transitive closure of the trusted build platform. This is intended to be the sole determiner of the SLSA Build level.

If a build platform has multiple modes of operations that have differing security attributes or SLSA Build levels, each mode MUST have a different builder.id and SHOULD have a different signer identity. This is to minimize the risk that a less secure mode compromises a more secure one.

The builder.id URI SHOULD resolve to documentation explaining:

  • The scope of what this ID represents.
  • The claimed SLSA Build level.
  • The accuracy and completeness guarantees of the fields in the provenance.
  • Any fields that are generated by the tenant-controlled build process and not verified by the trusted control plane, except for the subject.
  • The interpretation of any extension fields.

builderDependencies array (ResourceDescriptor)

Dependencies used by the orchestrator that are not run within the workload and that do not affect the build, but might affect the provenance generation or security guarantees.

version map (string→string)

Map of names of components of the build platform to their version.

The build platform, or builder for short, represents the transitive closure of all the entities that are, by necessity, trusted to faithfully run the build and record the provenance. This includes not only the software but the hardware and people involved in running the service. For example, a particular instance of Tekton could be a build platform, while Tekton itself is not. For more info, see Build model.

The id MUST reflect the trust base that consumers care about. How detailed to be is a judgment call. For example, GitHub Actions supports both GitHub-hosted runners and self-hosted runners. The GitHub-hosted runner might be a single identity because it’s all GitHub from the consumer’s perspective. Meanwhile, each self-hosted runner might have its own identity because not all runners are trusted by all consumers.

Consumers MUST accept only specific signer-builder pairs. For example, “GitHub” can sign provenance for the “GitHub Actions” builder, and “Google” can sign provenance for the “Google Cloud Build” builder, but “GitHub” cannot sign for the “Google Cloud Build” builder.

Design rationale: The builder is distinct from the signer in order to support the case where one signer generates attestations for more than one builder, as in the GitHub Actions example above. The field is REQUIRED, even if it is implicit from the signer, to aid readability and debugging. It is an object to allow additional fields in the future, in case one URI is not sufficient.
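In practice, a consumer can encode this rule as a preconfigured allow-list of accepted pairs. The following sketch is illustrative only; the signer identities and builder IDs are hypothetical placeholders:

# Accept provenance only from a recognized (signer identity, builder.id) pair.
# The identities and builder IDs below are hypothetical placeholders.
ACCEPTED_PAIRS = {
    ("https://signer.example/github", "https://builder.example/github-actions"),
    ("https://signer.example/google", "https://builder.example/cloud-build"),
}

def accept_builder(signer_identity: str, builder_id: str) -> bool:
    return (signer_identity, builder_id) in ACCEPTED_PAIRS

# A signer may vouch only for its own builders:
assert accept_builder("https://signer.example/github",
                      "https://builder.example/github-actions")
assert not accept_builder("https://signer.example/github",
                          "https://builder.example/cloud-build")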

BuildMetadata

REQUIRED: (none)

Field | Type | Description
invocationId string

Identifies this particular build invocation, which can be useful for finding associated logs or other ad-hoc analysis. The exact meaning and format is defined by builder.id; by default it is treated as opaque and case-sensitive. The value SHOULD be globally unique.

startedOn string (Timestamp)

The timestamp of when the build started.

finishedOn string (Timestamp)

The timestamp of when the build completed.

Extension fields

Implementations MAY add extension fields to any JSON object to describe information that is not captured in a standard field. Guidelines:

  • Extension fields SHOULD use names of the form <vendor>_<fieldname>, e.g. examplebuilder_isCodeReviewed. This practice avoids field name collisions by namespacing each vendor. Non-extension field names never contain an underscore.
  • Extension fields MUST NOT alter the meaning of any other field. In other words, an attestation with an absent extension field MUST be interpreted identically to an attestation with an unrecognized (and thus ignored) extension field.
  • Extension fields SHOULD follow the monotonic principle, meaning that deleting or ignoring the extension SHOULD NOT turn a DENY decision into an ALLOW.
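Following the guidelines above, a consumer can tell extension fields apart from standard fields by the presence of an underscore. The sketch below is illustrative; the metadata object and the examplebuilder prefix are hypothetical:

# Partition a JSON object into standard fields and vendor extension fields.
def split_extension_fields(obj: dict):
    standard = {k: v for k, v in obj.items() if "_" not in k}
    extensions = {k: v for k, v in obj.items() if "_" in k}
    return standard, extensions

metadata = {
    "invocationId": "build-12345",
    "examplebuilder_isCodeReviewed": True,  # vendor-namespaced extension field
}

standard, extensions = split_extension_fields(metadata)
# A consumer that does not recognize the extension simply ignores it; doing so
# MUST NOT change the interpretation of the standard fields.
assert standard == {"invocationId": "build-12345"}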

Verification

Please see Verifying Artifacts for a detailed discussion of provenance verification.

Index of build types

The following is a partial index of build type definitions. Each contains a complete example predicate.

To add an entry here, please send a pull request on GitHub.

Migrating from 0.2

To migrate from version 0.2 (old), use the following pseudocode. The meaning of each field is unchanged unless otherwise noted.

{
    "buildDefinition": {
        // The `buildType` MUST be updated for v1.0 to describe how to
        // interpret `inputArtifacts`.
        "buildType": /* updated version of */ old.buildType,
        "externalParameters":
            old.invocation.parameters + {
            // It is RECOMMENDED to rename "entryPoint" to something more
            // descriptive.
            "entryPoint": old.invocation.configSource.entryPoint,
            // It is OPTIONAL to rename "source" to something more descriptive,
            // especially if "source" is ambiguous or confusing.
            "source": old.invocation.configSource.uri,
        },
        "internalParameters": old.invocation.environment,
        "resolvedDependencies":
            old.materials + [
            {
                "uri": old.invocation.configSource.uri,
                "digest": old.invocation.configSource.digest,
            }
        ]
    },
    "runDetails": {
        "builder": {
            "id": old.builder.id,
            "builderDependencies": null,  // not in v0.2
            "version": null,  // not in v0.2
        },
        "metadata": {
            "invocationId": old.metadata.buildInvocationId,
            "startedOn": old.metadata.buildStartedOn,
            "finishedOn": old.metadata.buildFinishedOn,
        },
        "byproducts": null,  // not in v0.2
    },
}
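The same mapping can be expressed as a minimal Python sketch, assuming old is the parsed v0.2 predicate (a dict). As the pseudocode notes, the resulting buildType still has to be updated by hand, and the "entryPoint" and "source" key names are only the defaults suggested above:

def migrate_v02_to_v10(old: dict) -> dict:
    invocation = old.get("invocation", {})
    config_source = invocation.get("configSource", {})

    # entryPoint and source become ordinary external parameters.
    external = dict(invocation.get("parameters", {}))
    external["entryPoint"] = config_source.get("entryPoint")
    external["source"] = config_source.get("uri")

    return {
        "buildDefinition": {
            # The buildType MUST still be updated to describe how the new
            # parameter layout is interpreted.
            "buildType": old.get("buildType"),
            "externalParameters": external,
            "internalParameters": invocation.get("environment"),
            "resolvedDependencies": old.get("materials", []) + [{
                "uri": config_source.get("uri"),
                "digest": config_source.get("digest"),
            }],
        },
        "runDetails": {
            "builder": {"id": old.get("builder", {}).get("id")},
            # builderDependencies, version, and byproducts have no v0.2
            # equivalent and are left unset (equivalent to null).
            "metadata": {
                "invocationId": old.get("metadata", {}).get("buildInvocationId"),
                "startedOn": old.get("metadata", {}).get("buildStartedOn"),
                "finishedOn": old.get("metadata", {}).get("buildFinishedOn"),
            },
        },
    }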

The following fields from v0.2 are no longer present in v1.0:

  • entryPoint: Use externalParameters[<name>] instead.
  • buildConfig: No longer inlined into the provenance. Instead, either:
    • If the configuration is a top-level input, record its digest in externalParameters["config"].
    • Else if there is a known use case for knowing the exact resolved build configuration, record its digest in byproducts. An example use case might be someone who wishes to parse the configuration to look for bad patterns, such as curl | bash.
    • Else omit it.
  • metadata.completeness: Now implicit from builder.id.
  • metadata.reproducible: Now implicit from builder.id.

Change history

v1.0

Major refactor to reduce misinterpretation, including a minor change in model.

  • Significantly expanded all documentation.
  • Altered the model slightly to better align with real-world build platforms, align with reproducible builds, and make verification easier.
  • Grouped fields into buildDefinition vs runDetails.
  • Renamed:
    • parameters -> externalParameters (slight change in semantics)
    • environment -> internalParameters (slight change in semantics)
    • materials -> resolvedDependencies (slight change in semantics)
    • buildInvocationId -> invocationId
    • buildStartedOn -> startedOn
    • buildFinishedOn -> finishedOn
  • Removed:
    • configSource: No longer special-cased. Now represented as externalParameters + resolvedDependencies.
    • buildConfig: No longer inlined into the provenance. Can be replaced with a reference in externalParameters or byproducts, depending on the semantics, or omitted if not needed.
    • completeness and reproducible: Now implied by builder.id.
  • Added:
    • ResourceDescriptor: annotations, content, downloadLocation, mediaType, name
    • Builder: builderDependencies and version
    • byproducts
  • Changed naming convention for extension fields.

Differences from RC1 and RC2:

  • Renamed systemParameters (RC1 + RC2) -> internalParameters (final).
  • Changed naming convention for extension fields (in RC2).
  • Renamed localName (RC1) -> name (RC2).
  • Added annotations and content (in RC2).

v0.2

Refactored to aid clarity and added buildConfig. The model is unchanged.

  • Replaced definedInMaterial and entryPoint with configSource.
  • Renamed recipe to invocation.
  • Moved invocation.type to top-level buildType.
  • Renamed arguments to parameters.
  • Added buildConfig, which can be used as an alternative to configSource to validate the configuration.

rename: slsa.dev/provenance

Renamed to “slsa.dev/provenance”.

v0.1.1

  • Added metadata.buildInvocationId.

v0.1

Initial version, named “in-toto.io/Provenance”

Verification Summary Attestation (VSA)

Verification summary attestations communicate that an artifact has been verified at a specific SLSA level and details about that verification.

This document defines the following predicate type within the in-toto attestation framework:

"predicateType": "https://slsa.dev/verification_summary/v1"

Important: Always use the above string for predicateType rather than what is in the URL bar. The predicateType URI will always resolve to the latest minor version of this specification. See parsing rules for more information.

Purpose

Describe what SLSA level an artifact or set of artifacts was verified at and other details about the verification process including what SLSA level the dependencies were verified at.

This allows software consumers to make a decision about the validity of an artifact without needing to have access to all of the attestations about the artifact or all of its transitive dependencies. They can use it to delegate complex policy decisions to some trusted party and then simply trust that party’s decision regarding the artifact.

It also allows software producers to keep the details of their build pipeline confidential while still communicating that some verification has taken place. This might be necessary for legal reasons (keeping a software supplier confidential) or for security reasons (not revealing that an embargoed patch has been included).

Model

A Verification Summary Attestation (VSA) is an attestation that some entity (verifier) verified one or more software artifacts (the subject of an in-toto attestation Statement) by evaluating the artifact and a bundle of attestations against some policy. Users who trust the verifier may assume that the artifacts met the indicated SLSA level without themselves needing to evaluate the artifact or to have access to the attestations the verifier used to make its determination.

The VSA also allows consumers to determine the verified levels of all of an artifact’s transitive dependencies. The verifier does this either by a) recursively verifying the provenance of each non-source dependency listed in the resolvedDependencies of the artifact being verified, or by b) matching each non-source dependency listed in resolvedDependencies to a VSA for that dependency (subject.digest == resolvedDependencies.digest and, ideally, vsa.resourceUri == resolvedDependencies.uri) and using that VSA’s verifiedLevels and dependencyLevels. Policy verifiers wishing to establish minimum requirements on dependencies’ SLSA levels may use vsa.dependencyLevels to do so.
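A rough sketch of option b), assuming both payloads have already been parsed and the VSA’s signature verified; the example values are hypothetical:

# Match a resolvedDependencies entry to a candidate VSA for that dependency.
def vsa_matches_dependency(dep: dict, vsa_statement: dict) -> bool:
    digests = [s.get("digest") for s in vsa_statement.get("subject", [])]
    digest_ok = dep.get("digest") in digests
    # The uri comparison is the "ideally" part of the match described above.
    uri_ok = vsa_statement.get("predicate", {}).get("resourceUri") == dep.get("uri")
    return digest_ok and uri_ok

dep = {"uri": "pkg:example/libfoo@1.2.3", "digest": {"sha256": "abc123..."}}
vsa_statement = {
    "subject": [{"name": "libfoo", "digest": {"sha256": "abc123..."}}],
    "predicate": {"resourceUri": "pkg:example/libfoo@1.2.3",
                  "verifiedLevels": ["SLSA_BUILD_LEVEL_2"]},
}
assert vsa_matches_dependency(dep, vsa_statement)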

Schema

// Standard attestation fields:
"_type": "https://in-toto.io/Statement/v1",
"subject": [{
  "name": <NAME>,
  "digest": { <digest-in-request> }
}],

// Predicate
"predicateType": "https://slsa.dev/verification_summary/v1",
"predicate": {
  "verifier": {
    "id": "<URI>",
    "version": {
      "<COMPONENT>": "<VERSION>",
      ...
    }
  },
  "timeVerified": <TIMESTAMP>,
  "resourceUri": <artifact-URI-in-request>,
  "policy": {
    "uri": "<URI>",
    "digest": { <digest-of-policy-data> }
  },
  "inputAttestations": [
    {
      "uri": "<URI>",
      "digest": { <digest-of-attestation-data> }
    },
    ...
  ],
  "verificationResult": "<PASSED|FAILED>",
  "verifiedLevels": ["<SlsaResult>"],
  "dependencyLevels": {
    "<SlsaResult>": <Int>,
    "<SlsaResult>": <Int>,
    ...
  },
  "slsaVersion": "<MAJOR>.<MINOR>",
}

Parsing rules

This predicate follows the in-toto attestation parsing rules. Summary:

  • Consumers MUST ignore unrecognized fields.
  • The predicateType URI includes the major version number and will always change whenever there is a backwards incompatible change.
  • Minor version changes are always backwards compatible and “monotonic.” Such changes do not update the predicateType.
  • Producers MAY add extension fields using field names that are URIs.

Fields

NOTE: This subsection describes the fields within predicate. For a description of the other top-level fields, such as subject, see Statement.

verifier object, required

Identifies the entity that performed the verification.

The identity MUST reflect the trust base that consumers care about. How detailed to be is a judgment call.

Consumers MUST accept only specific (signer, verifier) pairs. For example, “GitHub” can sign VSAs for the “GitHub Actions” verifier, and “Google” can sign VSAs for the “Google Cloud Deploy” verifier, but “GitHub” cannot sign for the “Google Cloud Deploy” verifier.

The field is required, even if it is implicit from the signer, to aid readability and debugging. It is an object to allow additional fields in the future, in case one URI is not sufficient.

verifier.id string (TypeURI), required

URI indicating the verifier’s identity.

verifier.version map (string->string), optional

Map of names of components of the verification platform to their version.

timeVerified string (Timestamp), optional

Timestamp indicating what time the verification occurred.

resourceUri string (ResourceURI), required

URI that identifies the resource associated with the artifact being verified.

The resourceUri SHOULD be set to the URI from which the producer expects the consumer to fetch the artifact for verification. This enables the consumer to easily determine the expected value when verifying. If the resourceUri is set to some other value, the producer MUST communicate the expected value, or how to determine the expected value, to consumers through an out-of-band channel.

policy object (ResourceDescriptor), required

Describes the policy that the subject was verified against.

The entry MUST contain a uri identifying which policy was applied and SHOULD contain a digest to indicate the exact version of that policy.

inputAttestations array (ResourceDescriptor), optional

The collection of attestations that were used to perform verification. Conceptually similar to the resolvedDependencies field in SLSA Provenance.

This field MAY be absent if the verifier does not support this feature. If non-empty, this field MUST contain information on all the attestations used to perform verification.

Each entry MUST contain a digest of the attestation and SHOULD contain a uri that can be used to fetch the attestation.

verificationResult string, required

Either “PASSED” or “FAILED” to indicate if the artifact passed or failed the policy verification.

verifiedLevels array (SlsaResult), required

Indicates the highest level of each track verified for the artifact (and not its dependencies), or “FAILED” if policy verification failed.

Users MUST NOT include more than one level per SLSA track. Note that each SLSA level implies all levels below it (e.g. SLSA_BUILD_LEVEL_3 implies SLSA_BUILD_LEVEL_2 and SLSA_BUILD_LEVEL_1), so there is no need to include more than one level per track.

dependencyLevels object, optional

A count of the dependencies at each SLSA level.

Map from SlsaResult to the number of the artifact’s transitive dependencies that were verified at the indicated level. Absence of a given level of SlsaResult MUST be interpreted as reporting 0 dependencies at that level. A set but empty dependencyLevels object means that the artifact has no dependencies at all, while an unset or null dependencyLevels means that the verifier makes no claims about the artifact’s dependencies.

Users MUST count each dependency only once per SLSA track, at the highest level verified. For example, if a dependency meets SLSA_BUILD_LEVEL_2, you include it with the count for SLSA_BUILD_LEVEL_2 but not the count for SLSA_BUILD_LEVEL_1.
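For illustration, the counting rule can be sketched as follows, starting from each transitive dependency’s highest verified Build level (the dependency names and levels are hypothetical):

from collections import Counter

highest_level_per_dependency = {
    "depA": "SLSA_BUILD_LEVEL_3",
    "depB": "SLSA_BUILD_LEVEL_3",
    "depC": "SLSA_BUILD_LEVEL_2",  # counted at level 2 only, not also at level 1
    "depD": "SLSA_BUILD_LEVEL_1",
}

dependencyLevels = dict(Counter(highest_level_per_dependency.values()))
# -> {"SLSA_BUILD_LEVEL_3": 2, "SLSA_BUILD_LEVEL_2": 1, "SLSA_BUILD_LEVEL_1": 1}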

slsaVersion string, optional

Indicates the version of the SLSA specification that the verifier used, in the form <MAJOR>.<MINOR>. Example: 1.0. If unset, the default is an unspecified minor version of 1.x.

Example

WARNING: This is just for demonstration purposes.

"_type": "https://in-toto.io/Statement/v1",
"subject": [{
  "name": "out/example-1.2.3.tar.gz",
  "digest": {"sha256": "5678..."}
}],

// Predicate
"predicateType": "https://slsa.dev/verification_summary/v1",
"predicate": {
  "verifier": {
    "id": "https://example.com/publication_verifier",
    "version": {
      "slsa-verifier-linux-amd64": "v2.3.0",
      "slsa-framework/slsa-verifier/actions/installer": "v2.3.0"
    }
  },
  "timeVerified": "1985-04-12T23:20:50.52Z",
  "resourceUri": "https://example.com/example-1.2.3.tar.gz",
  "policy": {
    "uri": "https://example.com/example_tarball.policy",
    "digest": {"sha256": "1234..."}
  },
  "inputAttestations": [
    {
      "uri": "https://example.com/provenances/example-1.2.3.tar.gz.intoto.jsonl",
      "digest": {"sha256": "abcd..."}
    }
  ],
  "verificationResult": "PASSED",
  "verifiedLevels": ["SLSA_BUILD_LEVEL_3"],
  "dependencyLevels": {
    "SLSA_BUILD_LEVEL_3": 5,
    "SLSA_BUILD_LEVEL_2": 7,
    "SLSA_BUILD_LEVEL_1": 1,
  },
  "slsaVersion": "1.0"
}

How to verify

VSA consumers use VSAs to accomplish goals based on delegated trust. We call the process of establishing a VSA’s authenticity and determining whether it meets the consumer’s goals ‘verification’. Goals differ, as do levels of confidence in VSA producers, so the verification procedure changes to suit its context. However, there are certain steps that most verification procedures have in common.

Verification MUST include the following steps:

  1. Verify the signature on the VSA envelope using the preconfigured roots of trust. This step ensures that the VSA was produced by a trusted producer and that it hasn’t been tampered with.

  2. Verify the statement’s subject matches the digest of the artifact in question. This step ensures that the VSA pertains to the intended artifact.

  3. Verify that the predicateType is https://slsa.dev/verification_summary/v1. This step ensures that the in-toto predicate is using this version of the VSA format.

  4. Verify that the verifier matches the public key (or equivalent) used to verify the signature in step 1. This step identifies the VSA producer in cases where their identity is not implicitly revealed in step 1.

  5. Verify that the value for resourceUri in the VSA matches the expected value. This step ensures that the consumer is using the VSA for the producer’s intended purpose.

  6. Verify that the value for verificationResult is PASSED. This step ensures that the artifact actually passed the verifier’s policy evaluation.

  7. Verify that verifiedLevels contains the expected value. This step ensures that the artifact is suitable for the consumer’s purposes.

Verification MAY additionally contain the following step:

  1. (Optional) Verify additional fields required to determine whether the VSA meets your goal.
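A minimal sketch of steps 2 through 7, assuming the envelope signature has already been verified against the preconfigured roots of trust (step 1) and the decoded in-toto statement is available as a dict. The function name and the expected_* parameters are hypothetical and stand in for the consumer’s own policy configuration:

def verify_vsa(statement: dict, artifact_digest: dict,
               expected_verifier_id: str, expected_resource_uri: str,
               expected_levels: set) -> bool:
    # Step 2: the statement's subject must match the artifact's digest.
    if artifact_digest not in [s.get("digest") for s in statement.get("subject", [])]:
        return False
    # Step 3: this must be a v1 Verification Summary Attestation.
    if statement.get("predicateType") != "https://slsa.dev/verification_summary/v1":
        return False
    predicate = statement.get("predicate", {})
    # Step 4: the verifier must match the identity established in step 1.
    if predicate.get("verifier", {}).get("id") != expected_verifier_id:
        return False
    # Step 5: the VSA must be for the resource the consumer intends to use.
    if predicate.get("resourceUri") != expected_resource_uri:
        return False
    # Step 6: policy verification must have passed.
    if predicate.get("verificationResult") != "PASSED":
        return False
    # Step 7: the verified levels must include a level the consumer requires.
    return bool(expected_levels & set(predicate.get("verifiedLevels", [])))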

Verification mitigates different threats depending on the VSA’s contents and the verification procedure.

IMPORTANT: A VSA does not protect against compromise of the verifier, such as by a malicious insider. Instead, VSA consumers SHOULD carefully consider which verifiers they add to their roots of trust.

Examples

  1. Suppose consumer C wants to delegate to verifier V the decision for whether to accept artifact A as resource R. Consumer C verifies that:

    • The signature on the VSA envelope is valid, using V’s public signing key from their preconfigured root of trust.

    • subject is A.

    • predicateType is https://slsa.dev/verification_summary/v1.

    • verifier.id is V.

    • resourceUri is R.

    • verificationResult is PASSED.

    • verifiedLevels contains SLSA_BUILD_LEVEL_UNEVALUATED.

    Note: This example is analogous to traditional code signing. The expected value for verifiedLevels is arbitrary but prenegotiated by the producer and the consumer. The consumer does not need to check additional fields, as C fully delegates the decision to V.

  2. Suppose consumer C wants to enforce the rule “Artifact A at resource R must have a passing VSA from verifier V showing it meets SLSA Build Level 2+.” Consumer C verifies that:

    • The signature on the VSA envelope is valid, using V’s public signing key from their preconfigured root of trust.

    • subject is A.

    • predicateType is https://slsa.dev/verification_summary/v1.

    • verifier.id is V.

    • resourceUri is R.

    • verificationResult is PASSED.

    • verifiedLevels contains SLSA_BUILD_LEVEL_2 or SLSA_BUILD_LEVEL_3.

    Note: In this example, verifying the VSA mitigates the same threats as verifying the artifact’s SLSA provenance. See Verifying artifacts for details about which threats are addressed by verifying each SLSA level.

SlsaResult (String)

The result of evaluating an artifact (or set of artifacts) against SLSA. SHOULD be one of these values:

  • SLSA_BUILD_LEVEL_UNEVALUATED
  • SLSA_BUILD_LEVEL_0
  • SLSA_BUILD_LEVEL_1
  • SLSA_BUILD_LEVEL_2
  • SLSA_BUILD_LEVEL_3
  • FAILED (Indicates policy evaluation failed)

Note that each SLSA level implies the levels below it in the same track. For example, SLSA_BUILD_LEVEL_3 means (SLSA_BUILD_LEVEL_1 + SLSA_BUILD_LEVEL_2 + SLSA_BUILD_LEVEL_3).

Users MAY use custom values here but MUST NOT use custom values starting with SLSA_.

Change history

  • 1.1:
    • Changed the policy object to recommend that the digest field of the ResourceDescriptor is set.
    • Added optional verifier.version field to record verification tools.
    • Added Verification subsection with examples.
    • Made timeVerified optional.
  • 1.0:
    • Replaced materials with resolvedDependencies.
    • Relaxed SlsaResult to allow other values.
    • Converted to lowerCamelCase for consistency with SLSA Provenance.
    • Added slsaVersion field.
  • 0.2:
    • Added resource_uri field.
    • Added optional input_attestations field.
  • 0.1: Initial version.

  1. This resolution might include a version number, label, or some other selector in addition to the package name, but that is not important to SLSA.

  2. Technically this requires the artifact to be known to the adversary. If they only know the digest but not the actual contents, they cannot actually build the artifact without a preimage attack on the digest algorithm. However, even still there are no known concerns where this is a problem.

  3. in-toto attestations allow non-cryptographic digest types: https://github.com/in-toto/attestation/blob/main/spec/v1/digest_set.md#supported-algorithms.

  4. The externalParameters SHOULD reflect reality. If clients send the evaluated configuration object directly to the build server, record the digest directly in externalParameters. If clients upload the configuration object to a temporary storage location and send that location to the build server, record the location in externalParameters as a URI and record the uri and digest in resolvedDependencies.