[v0.11] cherry-picks #3463

Merged
merged 39 commits on Jan 6, 2023
Changes from all commits
39 commits
3ddf50f
fix indentation for in-toto and traces
tonistiigi Dec 16, 2022
fa35a54
add possibility to override filename for provenance
tonistiigi Dec 20, 2022
96fe451
provenance: move hermetic field into a correct struct
tonistiigi Dec 20, 2022
142df02
provenance: fix the order of the build steps
tonistiigi Dec 20, 2022
d077c19
sshforward: skip conn close on stream CloseSend.
sipsma Dec 21, 2022
fd72188
containerdexecutor: add network namespace callback
corhere Nov 2, 2022
fb422c0
frontend: fix testMultiStageImplicitFrom to account for busybox changes
thaJeztah Dec 27, 2022
b842260
feat: allow ignoring remote cache-export error if failing
JordanGoasdoue Dec 20, 2022
527b1a1
add cache stats to the build history API
tonistiigi Dec 28, 2022
c24fc28
Solve panic due to concurrent access to ExportSpans
gsaraf Aug 6, 2022
64366ba
gateway: add addition check to prevent content func from being forwarded
jedevc Dec 16, 2022
66371a0
attestations: propogate metadata through unbundling
jedevc Dec 14, 2022
8c600ad
attestations: ignore spdx parse errors
jedevc Dec 15, 2022
dbb58db
result: change reason types to strings
jedevc Dec 15, 2022
344d18b
exporter: make attestation validation public
jedevc Dec 15, 2022
1956651
attestation: validate attestations before unbundling as well
jedevc Dec 16, 2022
bd5eefc
attestation: forbid provenance attestations from frontend
jedevc Dec 16, 2022
8a5c070
vendor: update spdx/tools-golang to d6f58551be3f
jedevc Jan 4, 2023
0dec23a
docs: add slsa provenance documentation
jedevc Dec 8, 2022
a505382
docs: update provenance docs
tonistiigi Dec 19, 2022
d605e0e
docs: update hermetic field after it was moved in implementation
tonistiigi Dec 20, 2022
fba3610
docs: add filename to provenance attestation
tonistiigi Dec 20, 2022
d0300e5
docs: slsa editorial fixes
dvdksn Dec 22, 2022
dd51701
docs: moved slsa definitions to a separate page
dvdksn Dec 22, 2022
ef284aa
docs: slsa review updates
dvdksn Jan 3, 2023
17a6833
docs: add cross-linking between slsa pages
jedevc Jan 5, 2023
feef8de
docs: tidy up json examples for slsa definitions
jedevc Jan 5, 2023
82e0568
docs: rename slsa.md to slsa-provenance.md
jedevc Jan 5, 2023
eb58c40
docs: move attestation docs to dedicated directory
jedevc Jan 5, 2023
c67b3d7
docs: add index page for attestations
jedevc Jan 5, 2023
2d1b3ba
attestation: only supplement file data for the core scan
jedevc Dec 16, 2022
1c222b6
ociindex: refactor to hide implementation internally
jedevc Dec 14, 2022
98249a9
ociindex: add utility method for getting a single manifest from the i…
jedevc Dec 14, 2022
d08d4a4
progress: fix clean context cancelling
tonistiigi Dec 29, 2022
6de07e3
vendor: update fsutil to fb43384
tonistiigi Jan 6, 2023
dc43f74
llbsolver: fix panic when requesting provenance on nil result
tonistiigi Jan 6, 2023
0901e93
exporter: force enabling inline attestations for image export
jedevc Jan 6, 2023
09a94ed
exporter: allow configuring inline attestations for image exporters
jedevc Jan 6, 2023
4cbc411
testutil: pin busybox and alpine used in releases
tonistiigi Jan 6, 2023
5 changes: 5 additions & 0 deletions README.md
@@ -390,6 +390,7 @@ buildctl build ... \
 * `compression=<uncompressed|gzip|estargz|zstd>`: choose compression type for layers newly created and cached, gzip is default value. estargz and zstd should be used with `oci-mediatypes=true`
 * `compression-level=<value>`: choose compression level for gzip, estargz (0-9) and zstd (0-22)
 * `force-compression=true`: forcibly apply `compression` option to all layers
+* `ignore-error=<false|true>`: specify if error is ignored in case cache export fails (default: `false`)
 
 `--import-cache` options:
 * `type=registry`
@@ -415,6 +416,7 @@ The directory layout conforms to OCI Image Spec v1.0.
 * `compression=<uncompressed|gzip|estargz|zstd>`: choose compression type for layers newly created and cached, gzip is default value. estargz and zstd should be used with `oci-mediatypes=true`.
 * `compression-level=<value>`: compression level for gzip, estargz (0-9) and zstd (0-22)
 * `force-compression=true`: forcibly apply `compression` option to all layers
+* `ignore-error=<false|true>`: specify if error is ignored in case cache export fails (default: `false`)
 
 `--import-cache` options:
 * `type=local`
@@ -449,6 +451,7 @@ in your workflow to expose the runtime.
 * `min`: only export layers for the resulting image
 * `max`: export all the layers of all intermediate steps
 * `scope=<scope>`: which scope cache object belongs to (default `buildkit`)
+* `ignore-error=<false|true>`: specify if error is ignored in case cache export fails (default: `false`)
 
 `--import-cache` options:
 * `type=gha`
@@ -496,6 +499,7 @@ Others options are:
 * `prefix=<prefix>`: set global prefix to store / read files on s3 (default: empty)
 * `name=<manifest>`: specify name of the manifest to use (default `buildkit`)
   * Multiple manifest names can be specified at the same time, separated by `;`. The standard use case is to use the git sha1 as name, and the branch name as duplicate, and load both with 2 `import-cache` commands.
+* `ignore-error=<false|true>`: specify if error is ignored in case cache export fails (default: `false`)
 
 `--import-cache` options:
 * `type=s3`
@@ -540,6 +544,7 @@ There are 2 options supported for Azure Blob Storage authentication:
 * `prefix=<prefix>`: set global prefix to store / read files on the Azure Blob Storage container (`<container>`) (default: empty)
 * `name=<manifest>`: specify name of the manifest to use (default: `buildkit`)
   * Multiple manifest names can be specified at the same time, separated by `;`. The standard use case is to use the git sha1 as name, and the branch name as duplicate, and load both with 2 `import-cache` commands.
+* `ignore-error=<false|true>`: specify if error is ignored in case cache export fails (default: `false`)
 
 `--import-cache` options:
 * `type=azblob`
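The new `ignore-error` attribute applies uniformly to the registry, local, gha, s3, and azblob cache exporters. A hedged usage sketch (the registry refs and image name are placeholders, and a running buildkitd is assumed):

```shell
# Build and push the image even if the cache registry is unreachable;
# without ignore-error=true, a failed cache export fails the whole solve.
buildctl build \
  --frontend dockerfile.v0 \
  --local context=. --local dockerfile=. \
  --output type=image,name=example.com/myrepo/app:latest,push=true \
  --export-cache type=registry,ref=example.com/myrepo/app:buildcache,ignore-error=true
```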
352 changes: 213 additions & 139 deletions api/services/control/control.pb.go

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion api/services/control/control.proto
@@ -204,8 +204,9 @@ message BuildHistoryRecord {
 	int32 Generation = 12;
 	Descriptor trace = 13;
 	bool pinned = 14;
+	int32 numCachedSteps = 15;
+	int32 numTotalSteps = 16;
 	// TODO: tags
-	// TODO: steps/cache summary
 	// TODO: unclipped logs
 }
108 changes: 108 additions & 0 deletions client/client_test.go
@@ -168,6 +168,7 @@ func TestIntegration(t *testing.T) {
 		testBuildInfoInline,
 		testBuildInfoNoExport,
 		testZstdLocalCacheExport,
+		testCacheExportIgnoreError,
 		testZstdRegistryCacheImportExport,
 		testZstdLocalCacheImportExport,
 		testUncompressedLocalCacheImportExport,
@@ -4351,6 +4352,113 @@ func testZstdLocalCacheExport(t *testing.T, sb integration.Sandbox) {
 	require.Equal(t, dt[:4], []byte{0x28, 0xb5, 0x2f, 0xfd})
 }
 
+func testCacheExportIgnoreError(t *testing.T, sb integration.Sandbox) {
+	integration.CheckFeatureCompat(t, sb, integration.FeatureCacheExport)
+	c, err := New(sb.Context(), sb.Address())
+	require.NoError(t, err)
+	defer c.Close()
+
+	busybox := llb.Image("busybox:latest")
+	cmd := `sh -e -c "echo -n ignore-error > data"`
+
+	st := llb.Scratch()
+	st = busybox.Run(llb.Shlex(cmd), llb.Dir("/wd")).AddMount("/wd", st)
+
+	def, err := st.Marshal(sb.Context())
+	require.NoError(t, err)
+
+	tests := map[string]struct {
+		Exports        []ExportEntry
+		CacheExports   []CacheOptionsEntry
+		expectedErrors []string
+	}{
+		"local-ignore-error": {
+			Exports: []ExportEntry{
+				{
+					Type:      ExporterLocal,
+					OutputDir: t.TempDir(),
+				},
+			},
+			CacheExports: []CacheOptionsEntry{
+				{
+					Type: "local",
+					Attrs: map[string]string{
+						"dest": "éèç",
+					},
+				},
+			},
+			expectedErrors: []string{"failed to solve", "contains value with non-printable ASCII characters"},
+		},
+		"registry-ignore-error": {
+			Exports: []ExportEntry{
+				{
+					Type: ExporterImage,
+					Attrs: map[string]string{
+						"name": "test-registry-ignore-error",
+						"push": "false",
+					},
+				},
+			},
+			CacheExports: []CacheOptionsEntry{
+				{
+					Type: "registry",
+					Attrs: map[string]string{
+						"ref": "fake-url:5000/myrepo:buildcache",
+					},
+				},
+			},
+			expectedErrors: []string{"failed to solve", "dial tcp: lookup fake-url", "no such host"},
+		},
+		"s3-ignore-error": {
+			Exports: []ExportEntry{
+				{
+					Type:      ExporterLocal,
+					OutputDir: t.TempDir(),
+				},
+			},
+			CacheExports: []CacheOptionsEntry{
+				{
+					Type: "s3",
+					Attrs: map[string]string{
+						"endpoint_url":      "http://fake-url:9000",
+						"bucket":            "my-bucket",
+						"region":            "us-east-1",
+						"access_key_id":     "minioadmin",
+						"secret_access_key": "minioadmin",
+						"use_path_style":    "true",
+					},
+				},
+			},
+			expectedErrors: []string{"failed to solve", "dial tcp: lookup fake-url", "no such host"},
+		},
+	}
+	ignoreErrorValues := []bool{true, false}
+	for _, ignoreError := range ignoreErrorValues {
+		ignoreErrStr := strconv.FormatBool(ignoreError)
+		for n, test := range tests {
+			require.Equal(t, 1, len(test.Exports))
+			require.Equal(t, 1, len(test.CacheExports))
+			require.NotEmpty(t, test.CacheExports[0].Attrs)
+			test.CacheExports[0].Attrs["ignore-error"] = ignoreErrStr
+			testName := fmt.Sprintf("%s-%s", n, ignoreErrStr)
+			t.Run(testName, func(t *testing.T) {
+				_, err = c.Solve(sb.Context(), def, SolveOpt{
+					Exports:      test.Exports,
+					CacheExports: test.CacheExports,
+				}, nil)
+				if ignoreError {
+					require.NoError(t, err)
+				} else {
+					require.Error(t, err)
+					for _, errStr := range test.expectedErrors {
+						require.Contains(t, err.Error(), errStr)
+					}
+				}
+			})
+		}
+	}
+}
+
 func testUncompressedLocalCacheImportExport(t *testing.T, sb integration.Sandbox) {
 	integration.CheckFeatureCompat(t, sb, integration.FeatureCacheExport)
 	dir := t.TempDir()
151 changes: 99 additions & 52 deletions client/ociindex/ociindex.go
@@ -4,76 +4,94 @@ import (
 	"encoding/json"
 	"io"
 	"os"
+	"path"
 
 	"github.com/gofrs/flock"
 	ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
 	"github.com/pkg/errors"
 )
 
 const (
-	// IndexJSONLockFileSuffix is the suffix of the lock file
-	IndexJSONLockFileSuffix = ".lock"
+	// indexFile is the name of the index file
+	indexFile = "index.json"
+
+	// lockFileSuffix is the suffix of the lock file
+	lockFileSuffix = ".lock"
 )
 
-// PutDescToIndex puts desc to index with tag.
-// Existing manifests with the same tag will be removed from the index.
-func PutDescToIndex(index *ocispecs.Index, desc ocispecs.Descriptor, tag string) error {
-	if index == nil {
-		index = &ocispecs.Index{}
-	}
-	if index.SchemaVersion == 0 {
-		index.SchemaVersion = 2
-	}
-	if tag != "" {
-		if desc.Annotations == nil {
-			desc.Annotations = make(map[string]string)
-		}
-		desc.Annotations[ocispecs.AnnotationRefName] = tag
-		// remove existing manifests with the same tag
-		var manifests []ocispecs.Descriptor
-		for _, m := range index.Manifests {
-			if m.Annotations[ocispecs.AnnotationRefName] != tag {
-				manifests = append(manifests, m)
-			}
-		}
-		index.Manifests = manifests
-	}
-	index.Manifests = append(index.Manifests, desc)
-	return nil
+type StoreIndex struct {
+	indexPath string
+	lockPath  string
+}
+
+func NewStoreIndex(storePath string) StoreIndex {
+	indexPath := path.Join(storePath, indexFile)
+	return StoreIndex{
+		indexPath: indexPath,
+		lockPath:  indexPath + lockFileSuffix,
+	}
+}
+
+func (s StoreIndex) Read() (*ocispecs.Index, error) {
+	lock := flock.New(s.lockPath)
+	locked, err := lock.TryRLock()
+	if err != nil {
+		return nil, errors.Wrapf(err, "could not lock %s", s.lockPath)
+	}
+	if !locked {
+		return nil, errors.Errorf("could not lock %s", s.lockPath)
+	}
+	defer func() {
+		lock.Unlock()
+		os.RemoveAll(s.lockPath)
+	}()
+
+	b, err := os.ReadFile(s.indexPath)
+	if err != nil {
+		return nil, errors.Wrapf(err, "could not read %s", s.indexPath)
+	}
+	var idx ocispecs.Index
+	if err := json.Unmarshal(b, &idx); err != nil {
+		return nil, errors.Wrapf(err, "could not unmarshal %s (%q)", s.indexPath, string(b))
+	}
+	return &idx, nil
 }
 
-func PutDescToIndexJSONFileLocked(indexJSONPath string, desc ocispecs.Descriptor, tag string) error {
-	lockPath := indexJSONPath + IndexJSONLockFileSuffix
-	lock := flock.New(lockPath)
+func (s StoreIndex) Put(tag string, desc ocispecs.Descriptor) error {
+	lock := flock.New(s.lockPath)
 	locked, err := lock.TryLock()
 	if err != nil {
-		return errors.Wrapf(err, "could not lock %s", lockPath)
+		return errors.Wrapf(err, "could not lock %s", s.lockPath)
 	}
 	if !locked {
-		return errors.Errorf("could not lock %s", lockPath)
+		return errors.Errorf("could not lock %s", s.lockPath)
 	}
 	defer func() {
 		lock.Unlock()
-		os.RemoveAll(lockPath)
+		os.RemoveAll(s.lockPath)
 	}()
-	f, err := os.OpenFile(indexJSONPath, os.O_RDWR|os.O_CREATE, 0644)
+
+	f, err := os.OpenFile(s.indexPath, os.O_RDWR|os.O_CREATE, 0644)
 	if err != nil {
-		return errors.Wrapf(err, "could not open %s", indexJSONPath)
+		return errors.Wrapf(err, "could not open %s", s.indexPath)
 	}
 	defer f.Close()
 
 	var idx ocispecs.Index
 	b, err := io.ReadAll(f)
 	if err != nil {
-		return errors.Wrapf(err, "could not read %s", indexJSONPath)
+		return errors.Wrapf(err, "could not read %s", s.indexPath)
 	}
 	if len(b) > 0 {
 		if err := json.Unmarshal(b, &idx); err != nil {
-			return errors.Wrapf(err, "could not unmarshal %s (%q)", indexJSONPath, string(b))
+			return errors.Wrapf(err, "could not unmarshal %s (%q)", s.indexPath, string(b))
		}
 	}
-	if err = PutDescToIndex(&idx, desc, tag); err != nil {
+
+	if err = insertDesc(&idx, desc, tag); err != nil {
 		return err
 	}
+
 	b, err = json.Marshal(idx)
 	if err != nil {
 		return err
@@ -87,27 +105,56 @@ func PutDescToIndexJSONFileLocked(indexJSONPath string, desc ocispecs.Descriptor
 	return nil
 }
 
-func ReadIndexJSONFileLocked(indexJSONPath string) (*ocispecs.Index, error) {
-	lockPath := indexJSONPath + IndexJSONLockFileSuffix
-	lock := flock.New(lockPath)
-	locked, err := lock.TryRLock()
+func (s StoreIndex) Get(tag string) (*ocispecs.Descriptor, error) {
+	idx, err := s.Read()
 	if err != nil {
-		return nil, errors.Wrapf(err, "could not lock %s", lockPath)
+		return nil, err
 	}
-	if !locked {
-		return nil, errors.Errorf("could not lock %s", lockPath)
+
+	for _, m := range idx.Manifests {
+		if t, ok := m.Annotations[ocispecs.AnnotationRefName]; ok && t == tag {
+			return &m, nil
+		}
 	}
-	defer func() {
-		lock.Unlock()
-		os.RemoveAll(lockPath)
-	}()
-	b, err := os.ReadFile(indexJSONPath)
+	return nil, nil
+}
+
+func (s StoreIndex) GetSingle() (*ocispecs.Descriptor, error) {
+	idx, err := s.Read()
 	if err != nil {
-		return nil, errors.Wrapf(err, "could not read %s", indexJSONPath)
+		return nil, err
 	}
-	var idx ocispecs.Index
-	if err := json.Unmarshal(b, &idx); err != nil {
-		return nil, errors.Wrapf(err, "could not unmarshal %s (%q)", indexJSONPath, string(b))
+
+	if len(idx.Manifests) == 1 {
+		return &idx.Manifests[0], nil
 	}
-	return &idx, nil
+	return nil, nil
 }
+
+// insertDesc puts desc to index with tag.
+// Existing manifests with the same tag will be removed from the index.
+func insertDesc(index *ocispecs.Index, desc ocispecs.Descriptor, tag string) error {
+	if index == nil {
+		return nil
+	}
+
+	if index.SchemaVersion == 0 {
+		index.SchemaVersion = 2
+	}
+	if tag != "" {
+		if desc.Annotations == nil {
+			desc.Annotations = make(map[string]string)
+		}
+		desc.Annotations[ocispecs.AnnotationRefName] = tag
+		// remove existing manifests with the same tag
+		var manifests []ocispecs.Descriptor
+		for _, m := range index.Manifests {
+			if m.Annotations[ocispecs.AnnotationRefName] != tag {
+				manifests = append(manifests, m)
+			}
+		}
+		index.Manifests = manifests
+	}
+	index.Manifests = append(index.Manifests, desc)
+	return nil
+}
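The refactor keeps the old PutDescToIndex behavior in the unexported insertDesc: tagging a descriptor drops any manifest already carrying the same ref-name annotation before appending the new one. A self-contained sketch of that replacement rule, using simplified local types instead of the real ocispecs structs:

```go
package main

import "fmt"

const annotationRefName = "org.opencontainers.image.ref.name"

// descriptor and index are simplified stand-ins for the OCI image-spec types.
type descriptor struct {
	Digest      string
	Annotations map[string]string
}

type index struct {
	SchemaVersion int
	Manifests     []descriptor
}

// insertDesc mirrors the ociindex logic: tag the descriptor, remove any
// existing manifest with the same tag, then append the new descriptor.
func insertDesc(idx *index, desc descriptor, tag string) {
	if idx.SchemaVersion == 0 {
		idx.SchemaVersion = 2
	}
	if tag != "" {
		if desc.Annotations == nil {
			desc.Annotations = map[string]string{}
		}
		desc.Annotations[annotationRefName] = tag
		// filter in place, keeping only manifests with a different tag
		kept := idx.Manifests[:0]
		for _, m := range idx.Manifests {
			if m.Annotations[annotationRefName] != tag {
				kept = append(kept, m)
			}
		}
		idx.Manifests = kept
	}
	idx.Manifests = append(idx.Manifests, desc)
}

func main() {
	var idx index
	insertDesc(&idx, descriptor{Digest: "sha256:aaa"}, "latest")
	insertDesc(&idx, descriptor{Digest: "sha256:bbb"}, "latest") // replaces aaa
	for _, m := range idx.Manifests {
		fmt.Println(m.Digest, m.Annotations[annotationRefName]) // prints: sha256:bbb latest
	}
}
```

This is why Get can assume at most one manifest per tag: Put never lets two descriptors share the same ref-name annotation.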