Prepare for release of azblob (#16827)
* Make azblob.BlobClient.downloadBlobToWriterAt public (#15899)

* azblob.downloadBlobToWriterAt: initialDownloadResponse: remove as unused

* azblob.DownloadBlobToWriterAt: make public
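
As a rough usage sketch (assuming a BlobClient obtained elsewhere, for example from a ContainerClient; the downloadToFile helper is illustrative), any io.WriterAt such as an *os.File can now be the download target:

```go
package example

import (
	"context"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// downloadToFile (hypothetical helper) writes an entire blob to a local file.
// *os.File implements io.WriterAt, so the download can fan out across ranges.
func downloadToFile(ctx context.Context, blob azblob.BlobClient, path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// An offset and count of 0 request the whole blob.
	return blob.DownloadBlobToWriterAt(ctx, 0, 0, f, azblob.HighLevelDownloadFromBlobOptions{})
}
```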

* Replace the advancer (#16354)

* Handle error response XML with whitespace (#16639)

* [chunkwriting] Return original buffer to the pool (#16067)

When the buffer isn't filled to capacity, a new slice is created that's
smaller than the original. In that case the smaller slice is returned to
the pool, which prevents the rest of the capacity from being reused.

The solution is to pass the original slice through and attach the
length. This allows the original slice to be returned to the pool once
the operation is complete.

This change also simplifies the `sendChunk` method, ensuring that the
buffer is returned to the TransferManager even when no bytes were read
from the reader.
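
Below is a minimal, self-contained sketch of the idea, not the azblob code itself: it uses a plain sync.Pool and a hypothetical stage callback in place of the TransferManager and StageBlock.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"sync"
)

// chunk carries the original pooled buffer plus the count of valid bytes, so the
// full-capacity slice (not a smaller sub-slice) goes back to the pool afterwards.
type chunk struct {
	buffer []byte
	length int
}

func copyChunks(r io.Reader, pool *sync.Pool, stage func([]byte) error) error {
	for {
		buf := pool.Get().([]byte)
		n, err := io.ReadFull(r, buf)
		if n == 0 {
			pool.Put(buf) // nothing read: hand the buffer straight back
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				return nil
			}
			return err
		}
		c := chunk{buffer: buf, length: n}
		stageErr := stage(c.buffer[:c.length]) // upload only the valid bytes
		pool.Put(c.buffer)                     // return the original full-capacity slice
		if stageErr != nil {
			return stageErr
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil
		}
	}
}

func main() {
	pool := &sync.Pool{New: func() interface{} { return make([]byte, 4) }}
	var out bytes.Buffer
	_ = copyChunks(strings.NewReader("hello world"), pool, func(b []byte) error {
		_, err := out.Write(b)
		return err
	})
	fmt.Println(out.String()) // hello world
}
```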

* UploadStreamToBlockBlob: only require body be io.Reader (#15958)

This function only takes an io.Reader in azure-storage-blob-go,
and wal-g abstracts over multiple upload mechanisms that all operate on io.Reader.
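
A rough sketch of the relaxed signature (the BlockBlobClient is assumed to come from elsewhere, and zero-value options are assumed to fall back to sensible defaults):

```go
package example

import (
	"context"
	"io"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// uploadStream (hypothetical helper) streams any io.Reader to a block blob;
// the body no longer needs to implement Seek or Close.
func uploadStream(ctx context.Context, bb azblob.BlockBlobClient, body io.Reader) error {
	_, err := bb.UploadStreamToBlockBlob(ctx, body, azblob.UploadStreamToBlockBlobOptions{})
	return err
}
```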

* uncomment o.Progress(progress) in getDownloadBlobOptions() (#16727)

* Updated azblob README sample code (#16721)

* Updated README sample code

container name should be all lowercase

* Fix typo GetAccountSASToken to GetSASToken

I believe 'GetAccountSASToken' was written by mistake. It's inconsistent with what is described on line 97 and I don't see this method in the documentation.

* [KeyVault] updating examples to use ClientSecretCredential, fix parsing keyID (#16689)

* updating examples to use ClientSecretCredential, fix parsing keyID

* fixing constructor test URL

* adding test for no keyid

* key version, not key ID, better error reporting

* changes from working with maor and daniel

* working for both hsm and non hsm, need to fix up for recorded tests

* fixed implementation, I think...

* working for hsm too

* working challenge policy for hsm and non hsm

* new recordings

* adding all final recordings

* reverting to DefaultAzCred

* using streaming package from azcore

* Add spell check warnings (#16656)

* Add spell check warnings

* Basic cspell.json

* Ignore files in .vscode except cspell.json, new line before EOF in cspell.json

* Spell check, ignore thyself

* Adding Smoketests to nightly runs (#16226)

* Adding Smoketests to nightly runs

* updating location, fixes to script

* starting go script

* finishing script

* updating yml file

* formatting

* adding snippet for finding go code

* adding functionality for copying examples

* trimming out unused funcs

* fixed regexp, thanks benbp

* fixing smoke test program to create go.mod file correctly, update powershell for nightly

* removing need for argument in go program, updating yml and powershell to reflect

* scripts not common

* smoketests, plural not singular

* finally got the right directory

* fixed script locally, running into permissions issue on ci

* updating script to exit properly, logging an error instead of panicking

* manually set go111module to on

* removing references to go111module

* issue with duplicated function names...

* updating to only pull examples from the service directory if one is provided

* runs samples now too!

* adding 'go run .' step to ps1, triggering for tables

* adding step to analyze.yml file

* adding debugging for ci

* updating to work in ci

* updating to specify go module name, removing print statements

* updating scripts to fmt for prettier printing, find all environment variables

* working on loading environment variables from file

* removing env vars from example_test.go for testing

* adding the environment variable portion to the generated main.go file

* forgot to remove change to nightly script

* adding import to the main file

* cleaning up code, adding comments

* don't import os if no env vars

* small changes for checking all packages

* removing _test suffix on copied files

* converting to use cobra for better support

* formatting

* Sync eng/common directory with azure-sdk-tools for PR 2464 (#16730)

* Support AAD graph and Microsoft Graph service principal APIs

* Consolidate service principal wrapper creation

Co-authored-by: Ben Broderick Phillips <[email protected]>

* Make EndpointToScope resilient to subdomains (#16737)

* Add user assigned managed identity example to docs (#16738)

* [KeyVault] fixing broken live test (#16752)

* fixing broken live test

* coverage

* forcing ci

* Sync eng/common directory with azure-sdk-tools for PR 2484 (#16753)

* Add weekly pipeline generation to prepare-pipelines template

* Add succeeded condition to pipeline generation pipelines

Co-authored-by: Ben Broderick Phillips <[email protected]>

* Release v61.1.0 1641448664 (#16762)

* Generated from specification/automation/resource-manager/readme.md tag package-2020-01-13-preview (commit hash: 3b9b0a930e29cbead33df69ae46c7080408e4c0f)

* Generated from specification/compute/resource-manager/readme.md tag package-2021-08-01 (commit hash: 3b9b0a930e29cbead33df69ae46c7080408e4c0f)

* v61.1.0

* Handle skipping docker build when PushImages is set and there is no dockerfile (#16555)

Co-authored-by: Ben Broderick Phillips <[email protected]>

* add new config (#16774)

* Add offline test for Azure Arc managed identity (#16771)

* Rename armmanagedapplications ci.yml files to avoid pipeline name definition collisions (#16770)

* Refactor IMDS discovery to remove probing, stop caching failures (#16267)

* Sync eng/common directory with azure-sdk-tools for PR 2500 (#16779)

* Update pipeline generator tool feed to azure-sdk-for-net

* Update pipeline generator tool version

Co-authored-by: Ben Broderick Phillips <[email protected]>

* feat: add generator cmd to generate or update readme.go.md file for track2 sdk param (#16780)

* [azservicebus] Updating to handle breaking changes in azcore (#16776)

Updating sb to handle breaking changes in azcore

At the moment we're not using azcore's HTTP library, so we don't need to test for specific errors that come from azcore.

* [Core] Bump version of internal (#16793)

* [Core] Update Changelog.md (#16794)

Update changelog for release

* Update CHANGELOG.md (#16795)

* Increment version for azcore releases (#16798)

Increment package version after release of azcore

* [azidentity] Making ChainedTokenCredential re-use the first successful credential (#16392)

With this change, instances of `ChainedTokenCredential` re-use the first successful credential on subsequent `GetToken` calls.

Fixed #16268
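
A simplified, hypothetical sketch of that behaviour, not the azidentity implementation, with a stand-in interface in place of azcore.TokenCredential:

```go
package example

import (
	"context"
	"fmt"
)

// tokenGetter is a simplified, hypothetical stand-in for azcore.TokenCredential.
type tokenGetter interface {
	GetToken(ctx context.Context) (string, error)
}

// chain tries its sources in order; once one succeeds it is remembered and
// used for every later GetToken call. (Not goroutine-safe; illustration only.)
type chain struct {
	sources    []tokenGetter
	successful tokenGetter
}

func (c *chain) GetToken(ctx context.Context) (string, error) {
	if c.successful != nil {
		return c.successful.GetToken(ctx)
	}
	for _, s := range c.sources {
		if tok, err := s.GetToken(ctx); err == nil {
			c.successful = s // re-use this credential from now on
			return tok, nil
		}
	}
	return "", fmt.Errorf("no credential in the chain succeeded")
}
```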

* Align authentication errors with azcore.ResponseError (#16777)
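
A hedged sketch of how a caller might inspect such an error; the helper name is illustrative, and only errors that carry an HTTP response have a RawResponse to examine:

```go
package example

import (
	"errors"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
)

// describeAuthError (hypothetical helper) inspects an error returned by a call
// that authenticated with an azidentity credential.
func describeAuthError(err error) {
	var respErr *azcore.ResponseError
	if errors.As(err, &respErr) && respErr.RawResponse != nil {
		fmt.Println("request failed with HTTP status", respErr.RawResponse.StatusCode)
		return
	}
	fmt.Println("non-HTTP error:", err)
}
```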

* Increment version for azidentity releases (#16799)

Increment package version after release of azidentity

* [KeyVault] release ready for keyvault keys (#16731)

* release ready for keyvault keys

* updating the api surface to latest azcore

* updating to ResponseError

* update with latest codegen

* fixing ci

* formatting

* bumping azcore to v0.21.0

* updating azidentity and autorest version

* updating go.sum

* final upgrade for changelog

Co-authored-by: Joel Hendrix <[email protected]>

* [Tables] preparing tables for release (#16733)

* preparing tables for release

* prepping for release with latest azcore

* update with latest code generator

* formatting

* updating to released azcore version

* upgrading azidentity

* updating autorest.go version

* final changes to readme

Co-authored-by: Joel Hendrix <[email protected]>

* [KeyVault] prepare secrets for release (#16732)

* prepare secrets for release

* updating to latest azcore

* update with latest codegen

* fixing secret version issues

* formatting

* manual checkout of azkeys

* updating to azcore v0.21.0

* updating autorest.go and azidentity

* updating Changelog

* updating changelog

* updating changelog

* modifying moduleVersion

* undoing changes to keys and tables

Co-authored-by: Joel Hendrix <[email protected]>

* chore: bump codegen version in scripts (#16802)

* Update azblob with the latest azcore (#16784)

* Update azblob with the latest azcore

This tactically resolves the small number of breaking changes in azcore.

* clean-up

Co-authored-by: Benoit Perrot <[email protected]>
Co-authored-by: Mohit Sharma <[email protected]>
Co-authored-by: adreed-msft <[email protected]>
Co-authored-by: John Stairs <[email protected]>
Co-authored-by: Philip Dubé <[email protected]>
Co-authored-by: Brandon Kurtz <[email protected]>
Co-authored-by: Kim Ying <[email protected]>
Co-authored-by: Sean Kane <[email protected]>
Co-authored-by: Daniel Jurek <[email protected]>
Co-authored-by: Azure SDK Bot <[email protected]>
Co-authored-by: Ben Broderick Phillips <[email protected]>
Co-authored-by: Charles Lowell <[email protected]>
Co-authored-by: Jiahui Peng <[email protected]>
Co-authored-by: Dapeng Zhang <[email protected]>
Co-authored-by: Chenjie Shi <[email protected]>
Co-authored-by: Richard Park <[email protected]>
Co-authored-by: Daniel Rodríguez <[email protected]>
18 people authored Jan 13, 2022
1 parent 0637c33 commit 3c495ee
Showing 27 changed files with 248 additions and 1,098 deletions.
3 changes: 2 additions & 1 deletion sdk/storage/azblob/CHANGELOG.md
@@ -1,10 +1,11 @@
# Release History

## 0.2.1 (Unreleased)
## 0.3.0 (Unreleased)

### Features Added

### Breaking Changes
* Updated to latest `azcore`. Public surface area is unchanged.

### Bugs Fixed

4 changes: 2 additions & 2 deletions sdk/storage/azblob/README.md
@@ -103,7 +103,7 @@ serviceClient, err := azblob.NewServiceClientWithSharedKey(fmt.Sprintf("https://
handle(err)
// Provide the convenience function with relevant info (services, resource types, permissions, and duration)
// The SAS token will be valid from this moment onwards.
accountSAS, err := serviceClient.GetAccountSASToken(AccountSASResourceTypes{Object: true, Service: true, Container: true},
accountSAS, err := serviceClient.GetSASToken(AccountSASResourceTypes{Object: true, Service: true, Container: true},
AccountSASPermissions{Read: true, List: true}, AccountSASServices{Blob: true}, time.Now(), time.Now().Add(48*time.Hour))
handle(err)
urlToSend := fmt.Sprintf("https://%s.blob.core.windows.net/?%s", accountName, accountSAS)
@@ -164,7 +164,7 @@ Three different clients are provided to interact with the various components of
// ===== 1. Creating a container =====

// First, branch off of the service client and create a container client.
container := service.NewContainerClient("myContainer")
container := service.NewContainerClient("mycontainer")
// Then, fire off a create operation on the container client.
// Note that, all service-side requests have an options bag attached, allowing you to specify things like metadata, public access types, etc.
// Specifying nil omits all options.
37 changes: 16 additions & 21 deletions sdk/storage/azblob/chunkwriting.go
@@ -33,7 +33,7 @@ type blockWriter interface {
// well, 4 MiB or 8 MiB, and auto-scale to as many goroutines within the memory limit. This gives a single dial to tweak and we can
// choose a max value for the memory setting based on internal transfers within Azure (which will give us the maximum throughput model).
// We can even provide a utility to dial this number in for customer networks to optimize their copies.
func copyFromReader(ctx context.Context, from io.ReadSeekCloser, to blockWriter, o UploadStreamToBlockBlobOptions) (BlockBlobCommitBlockListResponse, error) {
func copyFromReader(ctx context.Context, from io.Reader, to blockWriter, o UploadStreamToBlockBlobOptions) (BlockBlobCommitBlockListResponse, error) {
if err := o.defaults(); err != nil {
return BlockBlobCommitBlockListResponse{}, err
}
@@ -112,6 +112,7 @@ type copier struct {
type copierChunk struct {
buffer []byte
id string
length int
}

// getErr returns an error by priority. First, if a function set an error, it returns that error. Next, if the Context has an error
@@ -138,37 +139,31 @@ func (c *copier) sendChunk() error {
}

n, err := io.ReadFull(c.reader, buffer)
switch {
case err == nil && n == 0:
return nil
case err == nil:
if n > 0 {
// Some data was read, schedule the write.
id := c.id.next()
c.wg.Add(1)
c.o.TransferManager.Run(
func() {
defer c.wg.Done()
c.write(copierChunk{buffer: buffer[0:n], id: id})
c.write(copierChunk{buffer: buffer, id: id, length: n})
},
)
return nil
case err != nil && (err == io.EOF || err == io.ErrUnexpectedEOF) && n == 0:
return io.EOF
} else {
// Return the unused buffer to the manager.
c.o.TransferManager.Put(buffer)
}

if err == io.EOF || err == io.ErrUnexpectedEOF {
id := c.id.next()
c.wg.Add(1)
c.o.TransferManager.Run(
func() {
defer c.wg.Done()
c.write(copierChunk{buffer: buffer[0:n], id: id})
},
)
if err == nil {
return nil
} else if err == io.EOF || err == io.ErrUnexpectedEOF {
return io.EOF
}
if err := c.getErr(); err != nil {
return err

if cerr := c.getErr(); cerr != nil {
return cerr
}

return err
}

@@ -180,7 +175,7 @@ func (c *copier) write(chunk copierChunk) {
return
}
stageBlockOptions := c.o.getStageBlockOptions()
_, err := c.to.StageBlock(c.ctx, chunk.id, internal.NopCloser(bytes.NewReader(chunk.buffer)), stageBlockOptions)
_, err := c.to.StageBlock(c.ctx, chunk.id, internal.NopCloser(bytes.NewReader(chunk.buffer[:chunk.length])), stageBlockOptions)
if err != nil {
c.errCh <- fmt.Errorf("write error: %w", err)
return
4 changes: 2 additions & 2 deletions sdk/storage/azblob/go.mod
@@ -3,8 +3,8 @@ module github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
go 1.16

require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.20.0
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.1
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.0
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dnaeon/go-vcr v1.2.0 // indirect
github.com/stretchr/testify v1.7.0
8 changes: 4 additions & 4 deletions sdk/storage/azblob/go.sum
@@ -1,7 +1,7 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.20.0 h1:KQgdWmEOmaJKxaUUZwHAYh12t+b+ZJf8q3friycK1kA=
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.20.0/go.mod h1:ZPW/Z0kLCTdDZaDbYTetxc9Cxl/2lNqxYHYNOF2bti0=
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.1 h1:BUYIbDf/mMZ8945v3QkG3OuqGVyS4Iek0AOLwdRAYoc=
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.1/go.mod h1:KLF4gFr6DcKFZwSuH8w8yEK6DpFl3LP5rhdvAb7Yz5I=
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.0 h1:8wVJL0HUP5yDFXvotdewORTw7Yu88JbreWN/mobSvsQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.0/go.mod h1:fBF9PQNqB8scdgpZ3ufzaLntG0AG7C1WjPMsiFOmfHM=
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3 h1:E+m3SkZCN0Bf5q7YdTs5lSm2CYY3CK4spn5OmUIiQtk=
github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3/go.mod h1:KLF4gFr6DcKFZwSuH8w8yEK6DpFl3LP5rhdvAb7Yz5I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
36 changes: 17 additions & 19 deletions sdk/storage/azblob/highlevel.go
@@ -7,13 +7,14 @@ import (
"context"
"encoding/base64"
"fmt"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal"
"io"
"net/http"
"sync"

"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal"

"bytes"
"errors"
"os"
@@ -230,24 +231,21 @@ func (o *HighLevelDownloadFromBlobOptions) getDownloadBlobOptions(offSet, count
}
}

// downloadBlobToWriterAt downloads an Azure blob to a buffer with parallel.
func (b BlobClient) downloadBlobToWriterAt(ctx context.Context, offset int64, count int64, writer io.WriterAt, o HighLevelDownloadFromBlobOptions, initialDownloadResponse *DownloadResponse) error {
// DownloadBlobToWriterAt downloads an Azure blob to a WriterAt with parallel.
// Offset and count are optional, pass 0 for both to download the entire blob.
func (b BlobClient) DownloadBlobToWriterAt(ctx context.Context, offset int64, count int64, writer io.WriterAt, o HighLevelDownloadFromBlobOptions) error {
if o.BlockSize == 0 {
o.BlockSize = BlobDefaultDownloadBlockSize
}

if count == CountToEnd { // If size not specified, calculate it
if initialDownloadResponse != nil {
count = *initialDownloadResponse.ContentLength - offset // if we have the length, use it
} else {
// If we don't have the length at all, get it
downloadBlobOptions := o.getDownloadBlobOptions(0, CountToEnd, nil)
dr, err := b.Download(ctx, downloadBlobOptions)
if err != nil {
return err
}
count = *dr.ContentLength - offset
// If we don't have the length at all, get it
downloadBlobOptions := o.getDownloadBlobOptions(0, CountToEnd, nil)
dr, err := b.Download(ctx, downloadBlobOptions)
if err != nil {
return err
}
count = *dr.ContentLength - offset
}

if count <= 0 {
@@ -281,7 +279,7 @@
rangeProgress = bytesTransferred
progressLock.Lock()
progress += diff
//o.Progress(progress)
o.Progress(progress)
progressLock.Unlock()
})
}
@@ -302,7 +300,7 @@ func (b BlobClient) downloadBlobToWriterAt(ctx context.Context, offset int64, co
// DownloadBlobToBuffer downloads an Azure blob to a buffer with parallel.
// Offset and count are optional, pass 0 for both to download the entire blob.
func (b BlobClient) DownloadBlobToBuffer(ctx context.Context, offset int64, count int64, _bytes []byte, o HighLevelDownloadFromBlobOptions) error {
return b.downloadBlobToWriterAt(ctx, offset, count, newBytesWriter(_bytes), o, nil)
return b.DownloadBlobToWriterAt(ctx, offset, count, newBytesWriter(_bytes), o)
}

// DownloadBlobToFile downloads an Azure blob to a local file.
@@ -336,7 +334,7 @@ func (b BlobClient) DownloadBlobToFile(ctx context.Context, offset int64, count
}

if size > 0 {
return b.downloadBlobToWriterAt(ctx, offset, size, file, o, nil)
return b.DownloadBlobToWriterAt(ctx, offset, size, file, o)
} else { // if the blob's size is 0, there is no need in downloading it
return nil
}
@@ -598,7 +596,7 @@ func (u *UploadStreamToBlockBlobOptions) getCommitBlockListOptions() *CommitBloc

// UploadStreamToBlockBlob copies the file held in io.Reader to the Blob at blockBlobClient.
// A Context deadline or cancellation will cause this to error.
func (bb BlockBlobClient) UploadStreamToBlockBlob(ctx context.Context, body io.ReadSeekCloser, o UploadStreamToBlockBlobOptions) (BlockBlobCommitBlockListResponse, error) {
func (bb BlockBlobClient) UploadStreamToBlockBlob(ctx context.Context, body io.Reader, o UploadStreamToBlockBlobOptions) (BlockBlobCommitBlockListResponse, error) {
if err := o.defaults(); err != nil {
return BlockBlobCommitBlockListResponse{}, err
}
1 change: 1 addition & 0 deletions sdk/storage/azblob/zc_blob_lease_client.go
@@ -6,6 +6,7 @@ package azblob
import (
"context"
"errors"

"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
)
3 changes: 2 additions & 1 deletion sdk/storage/azblob/zc_block_blob_client.go
@@ -5,10 +5,11 @@ package azblob

import (
"context"
"io"

"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"io"
)

const (
19 changes: 17 additions & 2 deletions sdk/storage/azblob/zc_container_client.go
@@ -7,6 +7,8 @@ import (
"context"
"time"

"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"

"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
)
@@ -183,6 +185,13 @@ func (c ContainerClient) ListBlobsFlat(listOptions *ContainerListBlobFlatSegment
return pager
}

// override the advancer
pager.advancer = func(ctx context.Context, response ContainerListBlobFlatSegmentResponse) (*policy.Request, error) {
return c.client.listBlobFlatSegmentCreateRequest(ctx, &ContainerListBlobFlatSegmentOptions{
Marker: response.NextMarker,
})
}

// TODO: Come Here
//pager.err = func(response *azcore.Response) error {
// return handleError(c.client.listBlobFlatSegmentHandleError(response))
@@ -206,8 +215,14 @@ func (c ContainerClient) ListBlobsHierarchy(delimiter string, listOptions *Conta
return pager
}

// TODO: Come here
//p := pager.(*listBlobsHierarchySegmentResponsePager)
// override the advancer
pager.advancer = func(ctx context.Context, response ContainerListBlobHierarchySegmentResponse) (*policy.Request, error) {
return c.client.listBlobHierarchySegmentCreateRequest(ctx, delimiter, &ContainerListBlobHierarchySegmentOptions{
Marker: response.NextMarker,
})
}

// todo: come here
//p.errorer = func(response *azcore.Response) error {
// return handleError(c.client.listBlobHierarchySegmentHandleError(response))
//}
1 change: 1 addition & 0 deletions sdk/storage/azblob/zc_container_lease_client.go
@@ -6,6 +6,7 @@ package azblob
import (
"context"
"errors"

"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
)
5 changes: 3 additions & 2 deletions sdk/storage/azblob/zc_page_blob_client.go
@@ -5,10 +5,11 @@ package azblob

import (
"context"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
"io"
"net/url"

"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
)

const (
47 changes: 31 additions & 16 deletions sdk/storage/azblob/zc_storage_error.go
@@ -8,10 +8,13 @@ import (
"encoding/xml"
"errors"
"fmt"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"net/http"
"sort"
"strings"

"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
)

// InternalError is an internal error type that all errors get wrapped in.
@@ -49,7 +52,6 @@ func (e *InternalError) As(target interface{}) bool {
// TL;DR: This implements xml.Unmarshaler, and when the original StorageError is substituted, this unmarshaler kicks in.
// This handles the description and details. defunkifyStorageError handles the response, cause, and service code.
type StorageError struct {
raw string
response *http.Response
description string

@@ -58,8 +60,9 @@ type StorageError struct {
}

func handleError(err error) error {
if err, ok := err.(ResponseError); ok {
return &InternalError{defunkifyStorageError(err)}
var respErr *azcore.ResponseError
if errors.As(err, &respErr) {
return &InternalError{responseErrorToStorageError(respErr)}
}

if err != nil {
@@ -69,23 +72,31 @@
return nil
}

// defunkifyStorageError is a function that takes the "funky" ResponseError and reduces it to a storageError.
func defunkifyStorageError(responseError ResponseError) error {
if err, ok := responseError.Unwrap().(*StorageError); ok {
// errors.Unwrap(responseError.Unwrap())

err.response = responseError.RawResponse()
// converts an *azcore.ResponseError to a *StorageError, or if that fails, a *InternalError
func responseErrorToStorageError(responseError *azcore.ResponseError) error {
var storageError StorageError
body, err := runtime.Payload(responseError.RawResponse)
if err != nil {
goto Default
}
if len(body) > 0 {
if err := xml.Unmarshal(body, &storageError); err != nil {
goto Default
}
}

err.ErrorCode = StorageErrorCode(responseError.RawResponse().Header.Get("x-ms-error-code"))
storageError.response = responseError.RawResponse

if code, ok := err.details["Code"]; ok {
err.ErrorCode = StorageErrorCode(code)
delete(err.details, "Code")
}
storageError.ErrorCode = StorageErrorCode(responseError.RawResponse.Header.Get("x-ms-error-code"))

return err
if code, ok := storageError.details["Code"]; ok {
storageError.ErrorCode = StorageErrorCode(code)
delete(storageError.details, "Code")
}

return &storageError

Default:
return &InternalError{
cause: responseError,
}
@@ -193,8 +204,12 @@ func (e *StorageError) UnmarshalXML(d *xml.Decoder, start xml.StartElement) (err
switch tt := t.(type) {
case xml.StartElement:
tokName = tt.Name.Local
case xml.EndElement:
tokName = ""
case xml.CharData:
switch tokName {
case "":
continue
case "Message":
e.description = string(tt)
default: