From 79f06e1755ca829a58ac78e70791ab5595a1ee0c Mon Sep 17 00:00:00 2001 From: Tamer Sherif <69483382+tasherif-msft@users.noreply.github.com> Date: Fri, 7 Jul 2023 10:27:29 -0700 Subject: [PATCH] Feedback + Service Client (#21096) * Enable gocritic during linting (#20715) Enabled gocritic's evalOrder to catch dependencies on undefined behavior on return statements. Updated to latest version of golangci-lint. Fixed issue in azblob flagged by latest linter. * Cosmos DB: Enable merge support (#20716) * Adding header and value * Wiring and tests * format * Fixing value * change log * [azservicebus, azeventhubs] Stress test and logging improvement (#20710) Logging improvements: * Updating the logging to print more tracing information (per-link) in prep for the bigger release coming up. * Trimming out some of the verbose logging, seeing if I can get it a bit more reasonable. Stress tests: * Add a timestamp to the log name we generate and also default to append, not overwrite. * Use 0.5 cores, 0.5GB as our baseline. Some pods use more and I'll tune them more later. * update proxy version (#20712) Co-authored-by: Scott Beddall * Return an error when you try to send a message that's too large. (#20721) This now works just like the message batch - you'll get an ErrMessageTooLarge if you attempt to send a message that's too large for the link's configured size. NOTE: there's a patch to `internal/go-amqp/Sender.go` to match what's in go-amqp's main so it returns a programmatically useful error when the message is too large. Fixes #20647 * Changes in test that is failing in pipeline (#20693) * [azservicebus, azeventhubs] Treat 'entity full' as a fatal error (#20722) When the remote entity is full we get a resource-limit-exceeded condition. This isn't something we should keep retrying on and it's best to just abort and let the user know immediately, rather than hoping it might eventually clear out. This affected both Event Hubs and Service Bus. 
Fixes #20647 * [azservicebus/azeventhubs] Redirect stderr and stdout to tee (#20726) * Update changelog with latest features (#20730) * Update changelog with latest features Prepare for upcoming release. * bump minor version * pass along the artifact name so we can override it later (#20732) Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com> * [azeventhubs] Fixing checkpoint store race condition (#20727) The checkpoint store wasn't guarding against multiple owners claiming for the first time - fixing this by using IfNoneMatch Fixes #20717 * Fix azidentity troubleshooting guide link (#20736) * [Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 (#20437) * [Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 generation from spec commit: 85fb4ac6f8bfefd179e6c2632976a154b5c9ff04 * client factory * fix * fix * update * add sdk/resourcemanager/postgresql/armpostgresql live test (#20685) * add sdk/resourcemanager/postgresql/armpostgresql live test * update assets.json * set subscriptionId default value * format * add sdk/resourcemanager/eventhub/armeventhub live test (#20686) * add sdk/resourcemanager/eventhub/armeventhub live test * update assets * add sdk/resourcemanager/compute/armcompute live test (#20048) * add sdk/resourcemanager/compute/armcompute live test * skus filter * fix subscriptionId default value * fix * gofmt * update recording * sdk/resourcemanager/network/armnetwork live test (#20331) * sdk/resourcemanager/network/armnetwork live test * update subscriptionId default value * update recording * add sdk/resourcemanager/cosmos/armcosmos live test (#20705) * add sdk/resourcemanager/cosmos/armcosmos live test * update assets.json * update assets.json * update assets.json * update assets.json * Increment package version after release of azcore (#20740) * [azeventhubs] Improperly resetting etag in the checkpoint store (#20737) We shouldn't be resetting the etag to nil - it's what we use to enforce a "single winner" 
when doing ownership claims. The bug here was two-fold: I had bad logic in my previous claim ownership, which I fixed in a previous PR, but we need to reflect that same constraint properly in our in-memory checkpoint store for these tests. * Eng workflows sync and branch cleanup additions (#20743) Co-authored-by: James Suplizio * [azeventhubs] Latest start position can also be inclusive (i.e., get the latest message) (#20744) * Update GitHubEventProcessor version and remove pull_request_review processing (#20751) Co-authored-by: James Suplizio * Rename DisableAuthorityValidationAndInstanceDiscovery (#20746) * fix (#20707) * AzFile (#20739) * azfile: Fixing connection string parsing logic (#20798) * Fixing connection string parse logic * Update README * [azadmin] fix flaky test (#20758) * fix flaky test * charles suggestion * Prepare azidentity v1.3.0 for release (#20756) * Fix broken podman link (#20801) Co-authored-by: Wes Haggard * [azquery] update doc comments (#20755) * update doc comments * update statistics and visualization generation * prep-for-release * Fixed contribution section (#20752) Co-authored-by: Bob Tabor * [azeventhubs,azservicebus] Some API cleanup, renames (#20754) * Adding options to UpdateCheckpoint(), just for future potential expansion * Make Offset an int64, not a *int64 (it's not optional, it'll always come back with ReceivedEvents) * Adding more logging into the checkpoint store. * Point all imports at the production go-amqp * Add supporting features to enable distributed tracing (#20301) (#20708) * Add supporting features to enable distributed tracing This includes new internal pipeline policies and other supporting types. See the changelog for a full description. Added some missing doc comments. * fix linter issue * add net.peer.name trace attribute sequence custom HTTP header policy before logging policy. sequence logging policy after HTTP trace policy. keep body download policy at the end. 
* add span for iterating over pages * Restore ARM CAE support for azcore beta (#20657) This reverts commit 902097226ff3fe2fc6c3e7fc50d3478350253614. * Upgrade to stable azcore (#20808) * Increment package version after release of data/azcosmos (#20807) * Updating changelog (#20810) * Add fake package to azcore (#20711) * Add fake package to azcore This is the supporting infrastructure for the generated SDK fakes. * fix doc comment * Updating CHANGELOG.md (#20809) * changelog (#20811) * Increment package version after release of storage/azfile (#20813) * Update changelog (azblob) (#20815) * Updating CHANGELOG.md * Update the changelog with correct version * [azquery] migration guide (#20742) * migration guide * Charles feedback * Richard feedback --------- Co-authored-by: Charles Lowell <10964656+chlowell@users.noreply.github.com> * Increment package version after release of monitor/azquery (#20820) * [keyvault] prep for release (#20819) * prep for release * perf tests * update date * added new client methods * moved access conditions and wrote first test * added more tests * added path listing * list deleted paths * fixed formatter and added more tests * added more tests * marker fix * log * fixed constructor validation * comment * assets update: * some cleanup * missing models and log * cleanup, resolved apiview feedback * test * cleanup * added service tests * implemented listing * cleanup * cleanup --------- Co-authored-by: Joel Hendrix Co-authored-by: Matias Quaranta Co-authored-by: Richard Park <51494936+richardpark-msft@users.noreply.github.com> Co-authored-by: Azure SDK Bot <53356347+azure-sdk@users.noreply.github.com> Co-authored-by: Scott Beddall Co-authored-by: siminsavani-msft <77068571+siminsavani-msft@users.noreply.github.com> Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com> Co-authored-by: Charles Lowell <10964656+chlowell@users.noreply.github.com> Co-authored-by: Peng Jiahui <46921893+Alancere@users.noreply.github.com> 
Co-authored-by: James Suplizio Co-authored-by: Sourav Gupta <98318303+souravgupta-msft@users.noreply.github.com> Co-authored-by: gracewilcox <43627800+gracewilcox@users.noreply.github.com> Co-authored-by: Wes Haggard Co-authored-by: Bob Tabor Co-authored-by: Bob Tabor --- sdk/storage/azdatalake/directory/client.go | 19 +- sdk/storage/azdatalake/file/client.go | 20 +- sdk/storage/azdatalake/filesystem/client.go | 23 +- .../azdatalake/filesystem/constants.go | 2 + sdk/storage/azdatalake/filesystem/models.go | 6 + .../azdatalake/filesystem/responses.go | 19 +- sdk/storage/azdatalake/go.mod | 2 +- sdk/storage/azdatalake/go.sum | 4 +- .../azdatalake/internal/base/clients.go | 22 +- .../internal/exported/log_events.go | 20 + .../exported/shared_key_credential.go | 10 +- .../internal/shared/challenge_policy.go | 113 --- .../azdatalake/internal/shared/shared.go | 6 + .../internal/testcommon/clients_auth.go | 53 ++ sdk/storage/azdatalake/log.go | 19 + sdk/storage/azdatalake/sas/service.go | 58 +- sdk/storage/azdatalake/service/client.go | 102 ++- sdk/storage/azdatalake/service/client_test.go | 751 ++++++++++++++++++ sdk/storage/azdatalake/service/models.go | 104 ++- sdk/storage/azdatalake/service/responses.go | 98 ++- 20 files changed, 1234 insertions(+), 217 deletions(-) create mode 100644 sdk/storage/azdatalake/internal/exported/log_events.go delete mode 100644 sdk/storage/azdatalake/internal/shared/challenge_policy.go create mode 100644 sdk/storage/azdatalake/log.go create mode 100644 sdk/storage/azdatalake/service/client_test.go diff --git a/sdk/storage/azdatalake/directory/client.go b/sdk/storage/azdatalake/directory/client.go index 1cdb330d3d0e..749dfbc86724 100644 --- a/sdk/storage/azdatalake/directory/client.go +++ b/sdk/storage/azdatalake/directory/client.go @@ -16,7 +16,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" 
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" - "strings" ) // ClientOptions contains the optional parameters when creating a Client. @@ -30,10 +29,9 @@ type Client base.CompositeClient[generated.PathClient, generated.PathClient, blo // - cred - an Azure AD credential, typically obtained via the azidentity module // - options - client options; pass nil to accept the default values func NewClient(directoryURL string, cred azcore.TokenCredential, options *ClientOptions) (*Client, error) { - blobURL := strings.Replace(directoryURL, ".dfs.", ".blob.", 1) - directoryURL = strings.Replace(directoryURL, ".blob.", ".dfs.", 1) + blobURL, directoryURL := shared.GetURLs(directoryURL) - authPolicy := shared.NewStorageChallengePolicy(cred) + authPolicy := runtime.NewBearerTokenPolicy(cred, []string{shared.TokenScope}, nil) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{ PerRetry: []policy.Policy{authPolicy}, @@ -62,8 +60,7 @@ func NewClient(directoryURL string, cred azcore.TokenCredential, options *Client // - directoryURL - the URL of the storage account e.g. https://.dfs.core.windows.net/fs/dir? 
// - options - client options; pass nil to accept the default values func NewClientWithNoCredential(directoryURL string, options *ClientOptions) (*Client, error) { - blobURL := strings.Replace(directoryURL, ".dfs.", ".blob.", 1) - directoryURL = strings.Replace(directoryURL, ".blob.", ".dfs.", 1) + blobURL, directoryURL := shared.GetURLs(directoryURL) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{} @@ -91,8 +88,7 @@ func NewClientWithNoCredential(directoryURL string, options *ClientOptions) (*Cl // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values func NewClientWithSharedKeyCredential(directoryURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { - blobURL := strings.Replace(directoryURL, ".dfs.", ".blob.", 1) - directoryURL = strings.Replace(directoryURL, ".blob.", ".dfs.", 1) + blobURL, directoryURL := shared.GetURLs(directoryURL) authPolicy := exported.NewSharedKeyCredPolicy(cred) conOptions := shared.GetClientOptions(options) @@ -112,8 +108,11 @@ func NewClientWithSharedKeyCredential(directoryURL string, cred *SharedKeyCreden blobClientOpts := blob.ClientOptions{ ClientOptions: options.ClientOptions, } - blobSharedKeyCredential, _ := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) - blobClient, _ := blob.NewClientWithSharedKeyCredential(blobURL, blobSharedKeyCredential, &blobClientOpts) + blobSharedKey, err := cred.ConvertToBlobSharedKey() + if err != nil { + return nil, err + } + blobClient, _ := blob.NewClientWithSharedKeyCredential(blobURL, blobSharedKey, &blobClientOpts) dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) return (*Client)(dirClient), nil diff --git a/sdk/storage/azdatalake/file/client.go b/sdk/storage/azdatalake/file/client.go index f6fd6f09e6b3..c7355f3e0916 100644 --- 
a/sdk/storage/azdatalake/file/client.go +++ b/sdk/storage/azdatalake/file/client.go @@ -16,7 +16,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" - "strings" ) // ClientOptions contains the optional parameters when creating a Client. @@ -30,10 +29,8 @@ type Client base.CompositeClient[generated.PathClient, generated.PathClient, blo // - cred - an Azure AD credential, typically obtained via the azidentity module // - options - client options; pass nil to accept the default values func NewClient(fileURL string, cred azcore.TokenCredential, options *ClientOptions) (*Client, error) { - blobURL := strings.Replace(fileURL, ".dfs.", ".blob.", 1) - fileURL = strings.Replace(fileURL, ".blob.", ".dfs.", 1) - - authPolicy := shared.NewStorageChallengePolicy(cred) + blobURL, fileURL := shared.GetURLs(fileURL) + authPolicy := runtime.NewBearerTokenPolicy(cred, []string{shared.TokenScope}, nil) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{ PerRetry: []policy.Policy{authPolicy}, @@ -62,8 +59,7 @@ func NewClient(fileURL string, cred azcore.TokenCredential, options *ClientOptio // - fileURL - the URL of the storage account e.g. https://.dfs.core.windows.net/fs/file.txt? 
// - options - client options; pass nil to accept the default values func NewClientWithNoCredential(fileURL string, options *ClientOptions) (*Client, error) { - blobURL := strings.Replace(fileURL, ".dfs.", ".blob.", 1) - fileURL = strings.Replace(fileURL, ".blob.", ".dfs.", 1) + blobURL, fileURL := shared.GetURLs(fileURL) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{} @@ -91,8 +87,7 @@ func NewClientWithNoCredential(fileURL string, options *ClientOptions) (*Client, // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values func NewClientWithSharedKeyCredential(fileURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { - blobURL := strings.Replace(fileURL, ".dfs.", ".blob.", 1) - fileURL = strings.Replace(fileURL, ".blob.", ".dfs.", 1) + blobURL, fileURL := shared.GetURLs(fileURL) authPolicy := exported.NewSharedKeyCredPolicy(cred) conOptions := shared.GetClientOptions(options) @@ -112,8 +107,11 @@ func NewClientWithSharedKeyCredential(fileURL string, cred *SharedKeyCredential, blobClientOpts := blob.ClientOptions{ ClientOptions: options.ClientOptions, } - blobSharedKeyCredential, _ := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) - blobClient, _ := blob.NewClientWithSharedKeyCredential(blobURL, blobSharedKeyCredential, &blobClientOpts) + blobSharedKey, err := cred.ConvertToBlobSharedKey() + if err != nil { + return nil, err + } + blobClient, _ := blob.NewClientWithSharedKeyCredential(blobURL, blobSharedKey, &blobClientOpts) fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) return (*Client)(fileClient), nil diff --git a/sdk/storage/azdatalake/filesystem/client.go b/sdk/storage/azdatalake/filesystem/client.go index 151ca389efea..0f182909cb69 100644 --- a/sdk/storage/azdatalake/filesystem/client.go +++ 
b/sdk/storage/azdatalake/filesystem/client.go @@ -11,7 +11,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror" @@ -21,7 +20,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" "net/http" - "strings" "time" ) @@ -36,10 +34,8 @@ type Client base.CompositeClient[generated.FileSystemClient, generated.FileSyste // - cred - an Azure AD credential, typically obtained via the azidentity module // - options - client options; pass nil to accept the default values func NewClient(filesystemURL string, cred azcore.TokenCredential, options *ClientOptions) (*Client, error) { - containerURL := strings.Replace(filesystemURL, ".dfs.", ".blob.", 1) - filesystemURL = strings.Replace(filesystemURL, ".blob.", ".dfs.", 1) - - authPolicy := shared.NewStorageChallengePolicy(cred) + containerURL, filesystemURL := shared.GetURLs(filesystemURL) + authPolicy := runtime.NewBearerTokenPolicy(cred, []string{shared.TokenScope}, nil) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{ PerRetry: []policy.Policy{authPolicy}, @@ -68,9 +64,7 @@ func NewClient(filesystemURL string, cred azcore.TokenCredential, options *Clien // - filesystemURL - the URL of the storage account e.g. https://.dfs.core.windows.net/fs? 
// - options - client options; pass nil to accept the default values func NewClientWithNoCredential(filesystemURL string, options *ClientOptions) (*Client, error) { - containerURL := strings.Replace(filesystemURL, ".dfs.", ".blob.", 1) - filesystemURL = strings.Replace(filesystemURL, ".blob.", ".dfs.", 1) - + containerURL, filesystemURL := shared.GetURLs(filesystemURL) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{} base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) @@ -97,9 +91,7 @@ func NewClientWithNoCredential(filesystemURL string, options *ClientOptions) (*C // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values func NewClientWithSharedKeyCredential(filesystemURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { - containerURL := strings.Replace(filesystemURL, ".dfs.", ".blob.", 1) - filesystemURL = strings.Replace(filesystemURL, ".blob.", ".dfs.", 1) - + containerURL, filesystemURL := shared.GetURLs(filesystemURL) authPolicy := exported.NewSharedKeyCredPolicy(cred) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{ @@ -118,8 +110,11 @@ func NewClientWithSharedKeyCredential(filesystemURL string, cred *SharedKeyCrede containerClientOpts := container.ClientOptions{ ClientOptions: options.ClientOptions, } - blobSharedKeyCredential, _ := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) - blobContainerClient, _ := container.NewClientWithSharedKeyCredential(containerURL, blobSharedKeyCredential, &containerClientOpts) + blobSharedKey, err := cred.ConvertToBlobSharedKey() + if err != nil { + return nil, err + } + blobContainerClient, _ := container.NewClientWithSharedKeyCredential(containerURL, blobSharedKey, &containerClientOpts) fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, cred, 
(*base.ClientOptions)(conOptions)) return (*Client)(fsClient), nil diff --git a/sdk/storage/azdatalake/filesystem/constants.go b/sdk/storage/azdatalake/filesystem/constants.go index 3f862296578a..f7ff23ec01cd 100644 --- a/sdk/storage/azdatalake/filesystem/constants.go +++ b/sdk/storage/azdatalake/filesystem/constants.go @@ -17,6 +17,8 @@ const ( Filesystem PublicAccessType = azblob.PublicAccessTypeContainer ) +// TODO: figure out a way to import this from datalake rather than blob again + // StatusType defines values for StatusType type StatusType = lease.StatusType diff --git a/sdk/storage/azdatalake/filesystem/models.go b/sdk/storage/azdatalake/filesystem/models.go index 4da5f91d387e..c36a5acf1eb8 100644 --- a/sdk/storage/azdatalake/filesystem/models.go +++ b/sdk/storage/azdatalake/filesystem/models.go @@ -210,3 +210,9 @@ type LeaseAccessConditions = exported.LeaseAccessConditions // ModifiedAccessConditions contains a group of parameters for specifying access conditions. type ModifiedAccessConditions = exported.ModifiedAccessConditions + +// PathList contains the path list +type PathList = generated.PathList + +// Path contains the path properties +type Path = generated.Path diff --git a/sdk/storage/azdatalake/filesystem/responses.go b/sdk/storage/azdatalake/filesystem/responses.go index e0d4c79b9533..9a2112657bcc 100644 --- a/sdk/storage/azdatalake/filesystem/responses.go +++ b/sdk/storage/azdatalake/filesystem/responses.go @@ -28,28 +28,28 @@ type SetAccessPolicyResponse = container.SetAccessPolicyResponse // GetAccessPolicyResponse contains the response from method FilesystemClient.GetAccessPolicy. type GetAccessPolicyResponse struct { // PublicAccess contains the information returned from the x-ms-blob-public-access header response. - PublicAccess *PublicAccessType `xml:"BlobPublicAccess"` + PublicAccess *PublicAccessType // ClientRequestID contains the information returned from the x-ms-client-request-id header response. 
- ClientRequestID *string `xml:"ClientRequestID"` + ClientRequestID *string // Date contains the information returned from the Date header response. - Date *time.Time `xml:"Date"` + Date *time.Time // ETag contains the information returned from the ETag header response. - ETag *azcore.ETag `xml:"ETag"` + ETag *azcore.ETag // LastModified contains the information returned from the Last-Modified header response. - LastModified *time.Time `xml:"LastModified"` + LastModified *time.Time // RequestID contains the information returned from the x-ms-request-id header response. - RequestID *string `xml:"RequestID"` + RequestID *string // a collection of signed identifiers - SignedIdentifiers []*SignedIdentifier `xml:"SignedIdentifier"` + SignedIdentifiers []*SignedIdentifier // Version contains the information returned from the x-ms-version header response. - Version *string `xml:"Version"` + Version *string } // since we want to remove the blob prefix in access type @@ -140,3 +140,6 @@ type ListPathsSegmentResponse = generated.FileSystemClientListPathsResponse // ListDeletedPathsSegmentResponse contains the response from method FilesystemClient.ListPathsSegment. type ListDeletedPathsSegmentResponse = generated.FileSystemClientListBlobHierarchySegmentResponse + +// ListBlobsHierarchySegmentResponse contains the response from method FilesystemClient.ListBlobsHierarchySegment. 
+type ListBlobsHierarchySegmentResponse = generated.ListBlobsHierarchySegmentResponse diff --git a/sdk/storage/azdatalake/go.mod b/sdk/storage/azdatalake/go.mod index 815ae6a72795..b1b48ee155fd 100644 --- a/sdk/storage/azdatalake/go.mod +++ b/sdk/storage/azdatalake/go.mod @@ -3,7 +3,7 @@ module github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake go 1.18 require ( - github.com/Azure/azure-sdk-for-go/sdk/azcore v1.5.0 + github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.1 github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0 github.com/stretchr/testify v1.7.1 diff --git a/sdk/storage/azdatalake/go.sum b/sdk/storage/azdatalake/go.sum index 7999cbb85ae5..911682659b2b 100644 --- a/sdk/storage/azdatalake/go.sum +++ b/sdk/storage/azdatalake/go.sum @@ -1,5 +1,5 @@ -github.com/Azure/azure-sdk-for-go/sdk/azcore v1.5.0 h1:xGLAFFd9D3iLGxYiUGPdITSzsFmU1K8VtfuUHWAoN7M= -github.com/Azure/azure-sdk-for-go/sdk/azcore v1.5.0/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.1 h1:SEy2xmstIphdPwNBUi7uhvjyjhVKISfwjfOJmuy7kg4= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.1/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q= github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8= github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY= github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM= diff --git a/sdk/storage/azdatalake/internal/base/clients.go b/sdk/storage/azdatalake/internal/base/clients.go index e526f500a25e..3ebb6fc7ee69 100644 --- a/sdk/storage/azdatalake/internal/base/clients.go +++ b/sdk/storage/azdatalake/internal/base/clients.go @@ -22,24 +22,6 @@ type ClientOptions struct { pipelineOptions 
*runtime.PipelineOptions } -type Client[T any] struct { - inner *T - sharedKey *exported.SharedKeyCredential - options *ClientOptions -} - -func InnerClient[T any](client *Client[T]) *T { - return client.inner -} - -func SharedKey[T any](client *Client[T]) *exported.SharedKeyCredential { - return client.sharedKey -} - -func GetClientOptions[T any](client *Client[T]) *ClientOptions { - return client.options -} - func GetPipelineOptions(clOpts *ClientOptions) *runtime.PipelineOptions { return clOpts.pipelineOptions } @@ -96,3 +78,7 @@ func NewPathClient(dirURL string, dirURLWithBlobEndpoint string, client *blob.Cl options: options, } } + +func GetCompositeClientOptions[T, K, U any](client *CompositeClient[T, K, U]) *ClientOptions { + return client.options +} diff --git a/sdk/storage/azdatalake/internal/exported/log_events.go b/sdk/storage/azdatalake/internal/exported/log_events.go new file mode 100644 index 000000000000..4a9c3cafd628 --- /dev/null +++ b/sdk/storage/azdatalake/internal/exported/log_events.go @@ -0,0 +1,20 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. + +package exported + +import ( + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" +) + +// NOTE: these are publicly exported via type-aliasing in azdatalake/log.go +const ( + // EventUpload is used when we compute number of chunks to upload and size of each chunk. + EventUpload log.Event = "azdatalake.Upload" + + // EventError is used for logging errors. 
+ EventError log.Event = "azdatalake.Error" +) diff --git a/sdk/storage/azdatalake/internal/exported/shared_key_credential.go b/sdk/storage/azdatalake/internal/exported/shared_key_credential.go index 980b5a9a4f5a..63539ea0b10a 100644 --- a/sdk/storage/azdatalake/internal/exported/shared_key_credential.go +++ b/sdk/storage/azdatalake/internal/exported/shared_key_credential.go @@ -12,6 +12,7 @@ import ( "crypto/sha256" "encoding/base64" "fmt" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob" "net/http" "net/url" "sort" @@ -48,9 +49,12 @@ func (c *SharedKeyCredential) AccountName() string { return c.accountName } -// AccountKey returns the Storage account's name. -func (c *SharedKeyCredential) AccountKey() string { - return c.accountKeyString +func (c *SharedKeyCredential) ConvertToBlobSharedKey() (*azblob.SharedKeyCredential, error) { + cred, err := azblob.NewSharedKeyCredential(c.accountName, c.accountKeyString) + if err != nil { + return nil, err + } + return cred, nil } // SetAccountKey replaces the existing account key with the specified account key. diff --git a/sdk/storage/azdatalake/internal/shared/challenge_policy.go b/sdk/storage/azdatalake/internal/shared/challenge_policy.go deleted file mode 100644 index e7c8e9213d80..000000000000 --- a/sdk/storage/azdatalake/internal/shared/challenge_policy.go +++ /dev/null @@ -1,113 +0,0 @@ -//go:build go1.18 -// +build go1.18 - -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. See License.txt in the project root for license information. 
- -package shared - -import ( - "errors" - "github.com/Azure/azure-sdk-for-go/sdk/azcore" - "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" - "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" - "net/http" - "strings" -) - -type storageAuthorizer struct { - scopes []string - tenantID string -} - -func NewStorageChallengePolicy(cred azcore.TokenCredential) policy.Policy { - s := storageAuthorizer{scopes: []string{TokenScope}} - return runtime.NewBearerTokenPolicy(cred, []string{TokenScope}, &policy.BearerTokenOptions{ - AuthorizationHandler: policy.AuthorizationHandler{ - OnRequest: s.onRequest, - OnChallenge: s.onChallenge, - }, - }) -} - -func (s *storageAuthorizer) onRequest(req *policy.Request, authNZ func(policy.TokenRequestOptions) error) error { - return authNZ(policy.TokenRequestOptions{Scopes: s.scopes}) -} - -func (s *storageAuthorizer) onChallenge(req *policy.Request, resp *http.Response, authNZ func(policy.TokenRequestOptions) error) error { - // parse the challenge - err := s.parseChallenge(resp) - if err != nil { - return err - } - // TODO: Set tenantID when policy.TokenRequestOptions supports it. 
https://github.com/Azure/azure-sdk-for-go/issues/19841 - return authNZ(policy.TokenRequestOptions{Scopes: s.scopes}) -} - -type challengePolicyError struct { - err error -} - -func (c *challengePolicyError) Error() string { - return c.err.Error() -} - -func (*challengePolicyError) NonRetriable() { - // marker method -} - -func (c *challengePolicyError) Unwrap() error { - return c.err -} - -// parses Tenant ID from auth challenge -// https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/oauth2/authorize -func parseTenant(url string) string { - if url == "" { - return "" - } - parts := strings.Split(url, "/") - if len(parts) >= 3 { - tenant := parts[3] - tenant = strings.ReplaceAll(tenant, ",", "") - return tenant - } else { - return "" - } -} - -func (s *storageAuthorizer) parseChallenge(resp *http.Response) error { - authHeader := resp.Header.Get("WWW-Authenticate") - if authHeader == "" { - return &challengePolicyError{err: errors.New("response has no WWW-Authenticate header for challenge authentication")} - } - - // Strip down to auth and resource - // Format is "Bearer authorization_uri=\"\" resource_id=\"\"" - authHeader = strings.ReplaceAll(authHeader, "Bearer ", "") - - parts := strings.Split(authHeader, " ") - - vals := map[string]string{} - for _, part := range parts { - subParts := strings.Split(part, "=") - if len(subParts) == 2 { - stripped := strings.ReplaceAll(subParts[1], "\"", "") - stripped = strings.TrimSuffix(stripped, ",") - vals[subParts[0]] = stripped - } - } - - s.tenantID = parseTenant(vals["authorization_uri"]) - - scope := vals["resource_id"] - if scope == "" { - return &challengePolicyError{err: errors.New("could not find a valid resource in the WWW-Authenticate header")} - } - - if !strings.HasSuffix(scope, "/.default") { - scope += "/.default" - } - s.scopes = []string{scope} - return nil -} diff --git a/sdk/storage/azdatalake/internal/shared/shared.go b/sdk/storage/azdatalake/internal/shared/shared.go index 
7fd977d8b059..0e54e94827df 100644 --- a/sdk/storage/azdatalake/internal/shared/shared.go +++ b/sdk/storage/azdatalake/internal/shared/shared.go @@ -72,6 +72,12 @@ type ParsedConnectionString struct { AccountKey string } +func GetURLs(url string) (string, string) { + blobURL := strings.Replace(url, ".dfs.", ".blob.", 1) + dfsURL := strings.Replace(url, ".blob.", ".dfs.", 1) + return blobURL, dfsURL +} + func ParseConnectionString(connectionString string) (ParsedConnectionString, error) { const ( defaultScheme = "https" diff --git a/sdk/storage/azdatalake/internal/testcommon/clients_auth.go b/sdk/storage/azdatalake/internal/testcommon/clients_auth.go index e65e4914b2c6..5988e217e87e 100644 --- a/sdk/storage/azdatalake/internal/testcommon/clients_auth.go +++ b/sdk/storage/azdatalake/internal/testcommon/clients_auth.go @@ -3,6 +3,7 @@ package testcommon import ( "context" "errors" + "fmt" "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/internal/recording" @@ -104,7 +105,59 @@ func GetFilesystemClient(fsName string, t *testing.T, accountType TestAccountTyp return filesystemClient, err } +func ServiceGetFilesystemClient(filesystemName string, s *service.Client) *filesystem.Client { + return s.NewFilesystemClient(filesystemName) +} + func DeleteFilesystem(ctx context.Context, _require *require.Assertions, filesystemClient *filesystem.Client) { _, err := filesystemClient.Delete(ctx, nil) _require.Nil(err) } + +func GetGenericConnectionString(accountType TestAccountType) (*string, error) { + accountName, accountKey := GetGenericAccountInfo(accountType) + if accountName == "" || accountKey == "" { + return nil, errors.New(string(accountType) + AccountNameEnvVar + " and/or " + string(accountType) + AccountKeyEnvVar + " environment variables not specified.") + } + connectionString := 
fmt.Sprintf("DefaultEndpointsProtocol=https;AccountName=%s;AccountKey=%s;EndpointSuffix=core.windows.net/", + accountName, accountKey) + return &connectionString, nil +} + +func CreateNewFilesystem(ctx context.Context, _require *require.Assertions, filesystemName string, serviceClient *service.Client) *filesystem.Client { + fsClient := ServiceGetFilesystemClient(filesystemName, serviceClient) + + _, err := fsClient.Create(ctx, nil) + _require.Nil(err) + // _require.Equal(cResp.RawResponse.StatusCode, 201) + return fsClient +} +func GetServiceClientFromConnectionString(t *testing.T, accountType TestAccountType, options *service.ClientOptions) (*service.Client, error) { + if options == nil { + options = &service.ClientOptions{} + } + SetClientOptions(t, &options.ClientOptions) + + transport, err := recording.NewRecordingHTTPClient(t, nil) + require.NoError(t, err) + options.Transport = transport + + cred, err := GetGenericConnectionString(accountType) + if err != nil { + return nil, err + } + svcClient, err := service.NewClientFromConnectionString(*cred, options) + return svcClient, err +} + +func GetServiceClientNoCredential(t *testing.T, sasUrl string, options *service.ClientOptions) (*service.Client, error) { + if options == nil { + options = &service.ClientOptions{} + } + + SetClientOptions(t, &options.ClientOptions) + + serviceClient, err := service.NewClientWithNoCredential(sasUrl, options) + + return serviceClient, err +} diff --git a/sdk/storage/azdatalake/log.go b/sdk/storage/azdatalake/log.go new file mode 100644 index 000000000000..a168fb01f8ba --- /dev/null +++ b/sdk/storage/azdatalake/log.go @@ -0,0 +1,19 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azdatalake + +import ( + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" +) + +const ( + // EventUpload is used for logging events related to upload operation. 
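The new log.go file above only re-exports the event constants; filtering happens in the shared log listener, which drops messages whose event class nobody subscribed to. A minimal standalone sketch of that event-gated pattern follows. The `Event`, `Should`, and `Writef` names mirror the SDK's internal log package, but this is an illustrative reimplementation, not the SDK code:

```go
package main

import "fmt"

// Event classifies a log message, mirroring exported.EventUpload / exported.EventError.
type Event string

const (
	EventUpload Event = "azdatalake.Upload"
	EventError  Event = "azdatalake.Error"
)

// enabled simulates the set of event classes a listener subscribed to.
var enabled = map[Event]bool{EventError: true}

// Should reports whether a listener subscribed to this event class,
// letting callers skip expensive message formatting entirely.
func Should(e Event) bool { return enabled[e] }

// Writef formats and emits a message only when the event class is enabled.
func Writef(e Event, format string, a ...any) string {
	if !Should(e) {
		return ""
	}
	return fmt.Sprintf("[%s] ", e) + fmt.Sprintf(format, a...)
}

func main() {
	fmt.Println(Writef(EventError, "create failed: %s", "409 conflict"))
	// EventUpload is not subscribed, so this produces nothing.
	fmt.Println(Writef(EventUpload, "uploaded %d bytes", 42))
}
```

This is the same guard shape used in the service client above (`if log.Should(exported.EventError) { log.Writef(...) }`): check subscription first, format second.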
+ EventUpload = exported.EventUpload + + // EventError is used for logging errors. + EventError = exported.EventError +) diff --git a/sdk/storage/azdatalake/sas/service.go b/sdk/storage/azdatalake/sas/service.go index 23518b25676d..86a292028276 100644 --- a/sdk/storage/azdatalake/sas/service.go +++ b/sdk/storage/azdatalake/sas/service.go @@ -336,16 +336,16 @@ func parseFilesystemPermissions(s string) (FilesystemPermissions, error) { return p, nil } -// PathPermissions type simplifies creating the permissions string for an Azure Storage blob SAS. +// FilePermissions type simplifies creating the permissions string for an Azure Storage blob SAS. // Initialize an instance of this type and then call its String method to set BlobSignatureValues' Permissions field. -type PathPermissions struct { +type FilePermissions struct { Read, Add, Create, Write, Delete, List, Move bool Execute, Ownership, Permissions bool } // String produces the SAS permissions string for an Azure Storage blob. // Call this method to set BlobSignatureValues' Permissions field. -func (p *PathPermissions) String() string { +func (p *FilePermissions) String() string { var b bytes.Buffer if p.Read { b.WriteRune('r') @@ -380,9 +380,53 @@ func (p *PathPermissions) String() string { return b.String() } -// Parse initializes BlobPermissions' fields from a string. -func parsePathPermissions(s string) (PathPermissions, error) { - p := PathPermissions{} // Clear the flags +// DirectoryPermissions type simplifies creating the permissions string for an Azure Storage blob SAS. +// Initialize an instance of this type and then call its String method to set BlobSignatureValues' Permissions field. +type DirectoryPermissions struct { + Read, Add, Create, Write, Delete, List, Move bool + Execute, Ownership, Permissions bool +} + +// String produces the SAS permissions string for an Azure Storage blob. +// Call this method to set BlobSignatureValues' Permissions field. 
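The String methods in this hunk assemble the SAS permissions string by appending one rune per enabled flag, in a fixed order, because the service validates permission strings positionally. A standalone sketch of the same technique (the lowercase `filePermissions` struct here is illustrative, not the SDK type):

```go
package main

import (
	"bytes"
	"fmt"
)

// filePermissions mirrors the flag-per-rune pattern of FilePermissions.String.
type filePermissions struct {
	Read, Add, Create, Write, Delete, List, Move bool
	Execute, Ownership, Permissions              bool
}

// String appends one rune per enabled flag. The order is fixed regardless of
// how the struct literal was written, which is what the service requires.
func (p filePermissions) String() string {
	var b bytes.Buffer
	for _, f := range []struct {
		on bool
		r  rune
	}{
		{p.Read, 'r'}, {p.Add, 'a'}, {p.Create, 'c'}, {p.Write, 'w'},
		{p.Delete, 'd'}, {p.List, 'l'}, {p.Move, 'm'},
		{p.Execute, 'e'}, {p.Ownership, 'o'}, {p.Permissions, 'p'},
	} {
		if f.on {
			b.WriteRune(f.r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(filePermissions{Read: true, Write: true, Delete: true}.String()) // rwd
}
```

The table-driven loop is behaviorally equivalent to the chain of `if` blocks in the diff; the diff's repeated-`if` form is what the SDK actually ships.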
+func (p *DirectoryPermissions) String() string { + var b bytes.Buffer + if p.Read { + b.WriteRune('r') + } + if p.Add { + b.WriteRune('a') + } + if p.Create { + b.WriteRune('c') + } + if p.Write { + b.WriteRune('w') + } + if p.Delete { + b.WriteRune('d') + } + if p.List { + b.WriteRune('l') + } + if p.Move { + b.WriteRune('m') + } + if p.Execute { + b.WriteRune('e') + } + if p.Ownership { + b.WriteRune('o') + } + if p.Permissions { + b.WriteRune('p') + } + return b.String() +} + +// parsePathPermissions is internal, so we always parse into FilePermissions to avoid duplicating the parser per path type +func parsePathPermissions(s string) (FilePermissions, error) { + p := FilePermissions{} // Clear the flags for _, r := range s { switch r { case 'r': @@ -406,7 +450,7 @@ func parsePathPermissions(s string) (PathPermissions, error) { case 'p': p.Permissions = true default: - return PathPermissions{}, fmt.Errorf("invalid permission: '%v'", r) + return FilePermissions{}, fmt.Errorf("invalid permission: '%v'", r) } } return p, nil diff --git a/sdk/storage/azdatalake/service/client.go b/sdk/storage/azdatalake/service/client.go index 129bb5533f9a..9b7e8721c8db 100644 --- a/sdk/storage/azdatalake/service/client.go +++ b/sdk/storage/azdatalake/service/client.go @@ -11,14 +11,15 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" - "strings" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" + "time" ) // ClientOptions contains the optional parameters when creating a Client. @@ -32,10 +33,8 @@ type Client base.CompositeClient[generated.ServiceClient, generated.ServiceClien // - cred - an Azure AD credential, typically obtained via the azidentity module // - options - client options; pass nil to accept the default values func NewClient(serviceURL string, cred azcore.TokenCredential, options *ClientOptions) (*Client, error) { - blobServiceURL := strings.Replace(serviceURL, ".dfs.", ".blob.", 1) - datalakeServiceURL := strings.Replace(serviceURL, ".blob.", ".dfs.", 1) - - authPolicy := shared.NewStorageChallengePolicy(cred) + blobServiceURL, datalakeServiceURL := shared.GetURLs(serviceURL) + authPolicy := runtime.NewBearerTokenPolicy(cred, []string{shared.TokenScope}, nil) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{ PerRetry: []policy.Policy{authPolicy}, @@ -63,9 +62,7 @@ func NewClient(serviceURL string, cred azcore.TokenCredential, options *ClientOp // - serviceURL - the URL of the storage account e.g. https://.dfs.core.windows.net/ // - options - client options; pass nil to accept the default values. 
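The constructors in this file now derive the paired endpoints through `shared.GetURLs` instead of repeating the two `strings.Replace` calls inline. A minimal standalone reproduction of that helper's behavior (same logic as the `GetURLs` added to shared.go above):

```go
package main

import (
	"fmt"
	"strings"
)

// getURLs mirrors shared.GetURLs: given either endpoint flavor, it returns the
// blob (".blob.") and DFS (".dfs.") forms of the same account URL. Replacing
// only the first occurrence keeps the rest of the URL untouched, and a URL
// already in the target flavor passes through unchanged.
func getURLs(url string) (blobURL, dfsURL string) {
	blobURL = strings.Replace(url, ".dfs.", ".blob.", 1)
	dfsURL = strings.Replace(url, ".blob.", ".dfs.", 1)
	return
}

func main() {
	b, d := getURLs("https://myaccount.dfs.core.windows.net/")
	fmt.Println(b) // https://myaccount.blob.core.windows.net/
	fmt.Println(d) // https://myaccount.dfs.core.windows.net/
}
```

Centralizing the swap matters here because the datalake client is a composite: every operation may be routed to either the blob or the DFS endpoint of the same account, so both URLs must always be derived consistently.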
func NewClientWithNoCredential(serviceURL string, options *ClientOptions) (*Client, error) { - blobServiceURL := strings.Replace(serviceURL, ".dfs.", ".blob.", 1) - datalakeServiceURL := strings.Replace(serviceURL, ".blob.", ".dfs.", 1) - + blobServiceURL, datalakeServiceURL := shared.GetURLs(serviceURL) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{} base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) @@ -92,9 +89,7 @@ func NewClientWithNoCredential(serviceURL string, options *ClientOptions) (*Clie // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { - blobServiceURL := strings.Replace(serviceURL, ".dfs.", ".blob.", 1) - datalakeServiceURL := strings.Replace(serviceURL, ".blob.", ".dfs.", 1) - + blobServiceURL, datalakeServiceURL := shared.GetURLs(serviceURL) authPolicy := exported.NewSharedKeyCredPolicy(cred) conOptions := shared.GetClientOptions(options) plOpts := runtime.PipelineOptions{ @@ -113,8 +108,11 @@ func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredenti blobServiceClientOpts := service.ClientOptions{ ClientOptions: options.ClientOptions, } - blobSharedKeyCredential, _ := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) - blobSvcClient, _ := service.NewClientWithSharedKeyCredential(blobServiceURL, blobSharedKeyCredential, &blobServiceClientOpts) + blobSharedKey, err := cred.ConvertToBlobSharedKey() + if err != nil { + return nil, err + } + blobSvcClient, _ := service.NewClientWithSharedKeyCredential(blobServiceURL, blobSharedKey, &blobServiceClientOpts) svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, cred, (*base.ClientOptions)(conOptions)) return (*Client)(svcClient), nil @@ -140,23 
+138,38 @@ func NewClientFromConnectionString(connectionString string, options *ClientOptio return NewClientWithNoCredential(parsed.ServiceURL, options) } +func (s *Client) getClientOptions() *base.ClientOptions { + return base.GetCompositeClientOptions((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) +} + // NewFilesystemClient creates a new filesystem.Client object by concatenating filesystemName to the end of this Client's URL. // The new filesystem.Client uses the same request policy pipeline as the Client. func (s *Client) NewFilesystemClient(filesystemName string) *filesystem.Client { - //fsURL := runtime.JoinPaths(s.generatedServiceClientWithDFS().Endpoint(), filesystemName) - //return (*filesystem.Client)(base.NewFilesystemClient(fsURL, s.generated().Pipeline(), s.credential())) - return nil + filesystemURL := runtime.JoinPaths(s.generatedServiceClientWithDFS().Endpoint(), filesystemName) + // TODO: remove new azcore.Client creation after the API for shallow copying with new client name is implemented + clOpts := s.getClientOptions() + azClient, err := azcore.NewClient(shared.FilesystemClient, exported.ModuleVersion, *(base.GetPipelineOptions(clOpts)), &(clOpts.ClientOptions)) + if err != nil { + if log.Should(exported.EventError) { + log.Writef(exported.EventError, err.Error()) + } + return nil + } + // shared.GetURLs returns (blobURL, dfsURL), so the blob form is the container URL + containerURL, filesystemURL := shared.GetURLs(filesystemURL) + return (*filesystem.Client)(base.NewFilesystemClient(filesystemURL, containerURL, s.serviceClient().NewContainerClient(filesystemName), azClient, s.sharedKey(), clOpts)) } // NewDirectoryClient creates a new directory client by concatenating directoryName to the end of this Client's URL. // The new client uses the same request policy pipeline as the Client.
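NewFilesystemClient builds the child URL by joining the service endpoint and the filesystem name before swapping endpoint flavors. The join step can be sketched standalone; `joinPaths` below is a simplified, illustrative stand-in for azcore's `runtime.JoinPaths`, not its actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// joinPaths concatenates URL segments while avoiding duplicate slashes at the
// seams, so child clients get a clean URL whether or not the service endpoint
// carries a trailing slash.
func joinPaths(root string, paths ...string) string {
	if len(paths) == 0 {
		return root
	}
	var sb strings.Builder
	sb.WriteString(root)
	hasSlash := strings.HasSuffix(root, "/")
	for _, p := range paths {
		p = strings.Trim(p, "/")
		if !hasSlash {
			sb.WriteString("/")
		}
		sb.WriteString(p)
		hasSlash = false
	}
	return sb.String()
}

func main() {
	// Deriving a filesystem URL from the service endpoint, as NewFilesystemClient does.
	fmt.Println(joinPaths("https://myaccount.dfs.core.windows.net/", "myfilesystem"))
}
```

The child client then reuses the parent's pipeline options, so retries, transport, and auth policies stay identical across the client hierarchy.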
func (s *Client) NewDirectoryClient(directoryName string) *filesystem.Client { + // TODO: implement once dir client is implemented return nil } // NewFileClient creates a new file client by concatenating fileName to the end of this Client's URL. // The new client uses the same request policy pipeline as the Client. func (s *Client) NewFileClient(fileName string) *filesystem.Client { + // TODO: implement once file client is implemented return nil } @@ -170,7 +183,7 @@ func (s *Client) generatedServiceClientWithBlob() *generated.ServiceClient { return svcClientWithBlob } -func (s *Client) containerClient() *service.Client { +func (s *Client) serviceClient() *service.Client { _, _, serviceClient := base.InnerClients((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) return serviceClient } @@ -203,18 +216,61 @@ func (s *Client) DeleteFilesystem(ctx context.Context, filesystem string, option return resp, err } -// SetServiceProperties sets properties for a storage account's File service endpoint. (blob3) -func (s *Client) SetServiceProperties(ctx context.Context, options *SetPropertiesOptions) (SetPropertiesResponse, error) { - return SetPropertiesResponse{}, nil +// SetProperties sets properties for a storage account's DataLake service endpoint. (blob3) +func (s *Client) SetProperties(ctx context.Context, options *SetPropertiesOptions) (SetPropertiesResponse, error) { + opts := options.format() + return s.serviceClient().SetProperties(ctx, opts) } // GetProperties gets properties for a storage account's DataLake service endpoint. (blob3) func (s *Client) GetProperties(ctx context.Context, options *GetPropertiesOptions) (GetPropertiesResponse, error) { - return GetPropertiesResponse{}, nil + opts := options.format() + return s.serviceClient().GetProperties(ctx, opts) } // NewListFilesystemsPager operation returns a pager of the filesystems under the specified account.
(blob3) // For more information, see https://learn.microsoft.com/en-us/rest/api/storageservices/list-shares func (s *Client) NewListFilesystemsPager(options *ListFilesystemsOptions) *runtime.Pager[ListFilesystemsResponse] { - return nil + return runtime.NewPager(runtime.PagingHandler[ListFilesystemsResponse]{ + More: func(page ListFilesystemsResponse) bool { + return page.NextMarker != nil && len(*page.NextMarker) > 0 + }, + Fetcher: func(ctx context.Context, page *ListFilesystemsResponse) (ListFilesystemsResponse, error) { + if page == nil { + page = &ListFilesystemsResponse{} + opts := options.format() + page.blobPager = s.serviceClient().NewListContainersPager(opts) + } + newPage := ListFilesystemsResponse{} + currPage, err := page.blobPager.NextPage(ctx) + if err != nil { + return newPage, err + } + newPage.Prefix = currPage.Prefix + newPage.Marker = currPage.Marker + newPage.MaxResults = currPage.MaxResults + newPage.NextMarker = currPage.NextMarker + newPage.Filesystems = convertContainerItemsToFSItems(currPage.ContainerItems) + newPage.ServiceEndpoint = currPage.ServiceEndpoint + newPage.blobPager = page.blobPager + + return newPage, err + }, + }) +} + +// GetSASURL is a convenience method for generating a SAS token for the currently pointed at account. +// It can only be used if the credential supplied during creation was a SharedKeyCredential. +func (s *Client) GetSASURL(resources sas.AccountResourceTypes, permissions sas.AccountPermissions, expiry time.Time, o *GetSASURLOptions) (string, error) { + // format all options to blob service options + res, perms, opts := o.format(resources, permissions) + return s.serviceClient().GetSASURL(res, perms, expiry, opts) } + +// TODO: Figure out how we can convert from blob delegation key to one defined in datalake +//// GetUserDelegationCredential obtains a UserDelegationKey object using the base ServiceURL object.
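NewListFilesystemsPager does not page on its own: each Fetcher call pulls the next page from the wrapped blob pager and converts container items into filesystem items. The adapter shape can be sketched standalone; the types below are illustrative stand-ins for `runtime.Pager` and the blob/datalake response types, not the SDK's own:

```go
package main

import "fmt"

// blobPage and fsPage stand in for the blob ListContainersResponse and the
// datalake ListFilesystemsResponse.
type blobPage struct {
	Items      []string
	NextMarker *string
}
type fsPage struct {
	Filesystems []string
	NextMarker  *string
}

// pager mimics the More/Fetcher split of runtime.PagingHandler: fetch pulls
// and converts the next underlying page, more decides whether to continue.
type pager struct {
	more  func(fsPage) bool
	fetch func(*fsPage) (fsPage, error)
	cur   *fsPage
	done  bool
}

func (p *pager) NextPage() (fsPage, error) {
	page, err := p.fetch(p.cur)
	if err != nil {
		return fsPage{}, err
	}
	p.cur = &page
	p.done = !p.more(page)
	return page, nil
}

func main() {
	// Underlying "blob" pages; the adapter converts each into a filesystem page.
	marker := "p2"
	src := []blobPage{
		{Items: []string{"c1", "c2"}, NextMarker: &marker},
		{Items: []string{"c3"}},
	}
	i := 0
	p := &pager{
		more: func(pg fsPage) bool { return pg.NextMarker != nil && len(*pg.NextMarker) > 0 },
		fetch: func(_ *fsPage) (fsPage, error) {
			b := src[i]
			i++
			return fsPage{Filesystems: b.Items, NextMarker: b.NextMarker}, nil
		},
	}
	for !p.done {
		pg, _ := p.NextPage()
		fmt.Println(pg.Filesystems)
	}
}
```

Note how the diff's More predicate (`NextMarker != nil && len(*NextMarker) > 0`) is reproduced here: an empty or absent marker from the underlying pager is what terminates iteration.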
+//// OAuth is required for this call, as well as any role that can delegate access to the storage account. +//func (s *Client) GetUserDelegationCredential(ctx context.Context, info KeyInfo, o *GetUserDelegationCredentialOptions) (*UserDelegationCredential, error) { +// return s.serviceClient().GetUserDelegationCredential(ctx, info, o) +//} diff --git a/sdk/storage/azdatalake/service/client_test.go b/sdk/storage/azdatalake/service/client_test.go new file mode 100644 index 000000000000..c222bac865d8 --- /dev/null +++ b/sdk/storage/azdatalake/service/client_test.go @@ -0,0 +1,751 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. + +package service_test + +import ( + "context" + "fmt" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/to" + "github.com/Azure/azure-sdk-for-go/sdk/internal/recording" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/datalakeerror" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/testcommon" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/lease" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/service" + "github.com/stretchr/testify/require" + "github.com/stretchr/testify/suite" + "os" + "testing" + "time" +) + +func Test(t *testing.T) { + recordMode := recording.GetRecordMode() + t.Logf("Running service Tests in %s mode\n", recordMode) + if recordMode == recording.LiveMode { + suite.Run(t, &ServiceRecordedTestsSuite{}) + suite.Run(t, &ServiceUnrecordedTestsSuite{}) + } else 
if recordMode == recording.PlaybackMode { + suite.Run(t, &ServiceRecordedTestsSuite{}) + } else if recordMode == recording.RecordingMode { + suite.Run(t, &ServiceRecordedTestsSuite{}) + } +} + +func (s *ServiceRecordedTestsSuite) BeforeTest(suite string, test string) { + testcommon.BeforeTest(s.T(), suite, test) +} + +func (s *ServiceRecordedTestsSuite) AfterTest(suite string, test string) { + testcommon.AfterTest(s.T(), suite, test) +} + +func (s *ServiceUnrecordedTestsSuite) BeforeTest(suite string, test string) { + +} + +func (s *ServiceUnrecordedTestsSuite) AfterTest(suite string, test string) { + +} + +type ServiceRecordedTestsSuite struct { + suite.Suite +} + +type ServiceUnrecordedTestsSuite struct { + suite.Suite +} + +func (s *ServiceRecordedTestsSuite) TestServiceClientFromConnectionString() { + _require := require.New(s.T()) + testName := s.T().Name() + + accountName, _ := testcommon.GetGenericAccountInfo(testcommon.TestAccountDatalake) + connectionString, _ := testcommon.GetGenericConnectionString(testcommon.TestAccountDatalake) + + parsedConnStr, err := shared.ParseConnectionString(*connectionString) + _require.Nil(err) + _require.Equal(parsedConnStr.ServiceURL, "https://"+accountName+".blob.core.windows.net/") + + sharedKeyCred, err := azdatalake.NewSharedKeyCredential(parsedConnStr.AccountName, parsedConnStr.AccountKey) + _require.Nil(err) + + svcClient, err := service.NewClientWithSharedKeyCredential(parsedConnStr.ServiceURL, sharedKeyCred, nil) + _require.Nil(err) + fsClient := testcommon.CreateNewFilesystem(context.Background(), _require, testcommon.GenerateFilesystemName(testName), svcClient) + defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient) +} + +func (s *ServiceRecordedTestsSuite) TestSetPropertiesLogging() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDefault, nil) + _require.NoError(err) + + days := to.Ptr[int32](5) + enabled := to.Ptr(true) + + 
loggingOpts := service.Logging{ + Read: enabled, Write: enabled, Delete: enabled, + RetentionPolicy: &service.RetentionPolicy{Enabled: enabled, Days: days}} + opts := service.SetPropertiesOptions{Logging: &loggingOpts} + _, err = svcClient.SetProperties(context.Background(), &opts) + + _require.Nil(err) + resp1, err := svcClient.GetProperties(context.Background(), nil) + + _require.Nil(err) + _require.Equal(resp1.Logging.Write, enabled) + _require.Equal(resp1.Logging.Read, enabled) + _require.Equal(resp1.Logging.Delete, enabled) + _require.Equal(resp1.Logging.RetentionPolicy.Days, days) + _require.Equal(resp1.Logging.RetentionPolicy.Enabled, enabled) +} + +func (s *ServiceRecordedTestsSuite) TestSetPropertiesHourMetrics() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + days := to.Ptr[int32](5) + enabled := to.Ptr(true) + + metricsOpts := service.Metrics{ + Enabled: enabled, IncludeAPIs: enabled, RetentionPolicy: &service.RetentionPolicy{Enabled: enabled, Days: days}} + opts := service.SetPropertiesOptions{HourMetrics: &metricsOpts} + _, err = svcClient.SetProperties(context.Background(), &opts) + + _require.Nil(err) + resp1, err := svcClient.GetProperties(context.Background(), nil) + + _require.Nil(err) + _require.Equal(resp1.HourMetrics.Enabled, enabled) + _require.Equal(resp1.HourMetrics.IncludeAPIs, enabled) + _require.Equal(resp1.HourMetrics.RetentionPolicy.Days, days) + _require.Equal(resp1.HourMetrics.RetentionPolicy.Enabled, enabled) +} + +func (s *ServiceRecordedTestsSuite) TestSetPropertiesMinuteMetrics() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + days := to.Ptr[int32](5) + enabled := to.Ptr(true) + + metricsOpts := service.Metrics{ + Enabled: enabled, IncludeAPIs: enabled, RetentionPolicy: &service.RetentionPolicy{Enabled: enabled, Days: 
days}} + opts := service.SetPropertiesOptions{MinuteMetrics: &metricsOpts} + _, err = svcClient.SetProperties(context.Background(), &opts) + + _require.Nil(err) + resp1, err := svcClient.GetProperties(context.Background(), nil) + + _require.Nil(err) + _require.Equal(resp1.MinuteMetrics.Enabled, enabled) + _require.Equal(resp1.MinuteMetrics.IncludeAPIs, enabled) + _require.Equal(resp1.MinuteMetrics.RetentionPolicy.Days, days) + _require.Equal(resp1.MinuteMetrics.RetentionPolicy.Enabled, enabled) +} + +func (s *ServiceRecordedTestsSuite) TestSetPropertiesSetCORSMultiple() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + defaultAge := to.Ptr[int32](500) + defaultStr := to.Ptr[string]("") + + allowedOrigins1 := "www.xyz.com" + allowedMethods1 := "GET" + CORSOpts1 := &service.CORSRule{AllowedOrigins: &allowedOrigins1, AllowedMethods: &allowedMethods1} + + allowedOrigins2 := "www.xyz.com,www.ab.com,www.bc.com" + allowedMethods2 := "GET, PUT" + maxAge2 := to.Ptr[int32](500) + exposedHeaders2 := "x-ms-meta-data*,x-ms-meta-source*,x-ms-meta-abc,x-ms-meta-bcd" + allowedHeaders2 := "x-ms-meta-data*,x-ms-meta-target*,x-ms-meta-xyz,x-ms-meta-foo" + + CORSOpts2 := &service.CORSRule{ + AllowedOrigins: &allowedOrigins2, AllowedMethods: &allowedMethods2, + MaxAgeInSeconds: maxAge2, ExposedHeaders: &exposedHeaders2, AllowedHeaders: &allowedHeaders2} + + CORSRules := []*service.CORSRule{CORSOpts1, CORSOpts2} + + opts := service.SetPropertiesOptions{CORS: CORSRules} + _, err = svcClient.SetProperties(context.Background(), &opts) + + _require.Nil(err) + resp, err := svcClient.GetProperties(context.Background(), nil) + _require.Nil(err) + for i := 0; i < len(resp.CORS); i++ { + // compare values, not pointer identity: the response allocates its own strings + if *resp.CORS[i].AllowedOrigins == allowedOrigins1 { + _require.Equal(resp.CORS[i].AllowedMethods, &allowedMethods1) + _require.Equal(resp.CORS[i].MaxAgeInSeconds, defaultAge) + _require.Equal(resp.CORS[i].ExposedHeaders, defaultStr) + _require.Equal(resp.CORS[i].AllowedHeaders, defaultStr) + } else if *resp.CORS[i].AllowedOrigins == allowedOrigins2 { + _require.Equal(resp.CORS[i].AllowedMethods, &allowedMethods2) + _require.Equal(resp.CORS[i].MaxAgeInSeconds, maxAge2) + _require.Equal(resp.CORS[i].ExposedHeaders, &exposedHeaders2) + _require.Equal(resp.CORS[i].AllowedHeaders, &allowedHeaders2) + } + } +} + +func (s *ServiceRecordedTestsSuite) TestAccountDeleteRetentionPolicy() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + days := to.Ptr[int32](5) + enabled := to.Ptr(true) + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: enabled, Days: days}}) + _require.Nil(err) + + // From FE, 30 seconds is guaranteed to be enough. + time.Sleep(time.Second * 30) + + resp, err := svcClient.GetProperties(context.Background(), nil) + _require.Nil(err) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Enabled, *enabled) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Days, *days) + + disabled := false + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: &disabled}}) + _require.Nil(err) + + // From FE, 30 seconds is guaranteed to be enough.
+ time.Sleep(time.Second * 30) + + resp, err = svcClient.GetProperties(context.Background(), nil) + _require.Nil(err) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Enabled, false) + _require.Nil(resp.StorageServiceProperties.DeleteRetentionPolicy.Days) +} + +func (s *ServiceRecordedTestsSuite) TestAccountDeleteRetentionPolicyEmpty() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + days := to.Ptr[int32](5) + enabled := to.Ptr(true) + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: enabled, Days: days}}) + _require.Nil(err) + + // From FE, 30 seconds is guaranteed to be enough. + time.Sleep(time.Second * 30) + + resp, err := svcClient.GetProperties(context.Background(), nil) + _require.Nil(err) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Enabled, *enabled) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Days, *days) + + // Empty retention policy causes an error, this is different from track 1.5 + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{}}) + _require.NotNil(err) +} + +func (s *ServiceRecordedTestsSuite) TestAccountDeleteRetentionPolicyNil() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + days := to.Ptr[int32](5) + enabled := to.Ptr(true) + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: enabled, Days: days}}) + _require.Nil(err) + + // From FE, 30 seconds is guaranteed to be enough. 
+ time.Sleep(time.Second * 30) + + resp, err := svcClient.GetProperties(context.Background(), nil) + _require.Nil(err) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Enabled, *enabled) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Days, *days) + + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{}) + _require.Nil(err) + + // From FE, 30 seconds is guaranteed to be enough. + time.Sleep(time.Second * 30) + + // If an element of service properties is not passed, the service keeps the current settings. + resp, err = svcClient.GetProperties(context.Background(), nil) + _require.Nil(err) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Enabled, *enabled) + _require.EqualValues(*resp.StorageServiceProperties.DeleteRetentionPolicy.Days, *days) + + // Disable for other tests + enabled = to.Ptr(false) + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: enabled}}) + _require.Nil(err) +} + +func (s *ServiceRecordedTestsSuite) TestAccountDeleteRetentionPolicyDaysTooLarge() { + _require := require.New(s.T()) + var svcClient *service.Client + var err error + for i := 1; i <= 2; i++ { + if i == 1 { + svcClient, err = testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + } else { + svcClient, err = testcommon.GetServiceClientFromConnectionString(s.T(), testcommon.TestAccountDatalake, nil) + } + _require.Nil(err) + + days := int32(366) // Max days is 365. Left to the service for validation. 
+ enabled := true + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: &enabled, Days: &days}}) + _require.NotNil(err) + + testcommon.ValidateBlobErrorCode(_require, err, datalakeerror.InvalidXMLDocument) + } +} + +func (s *ServiceRecordedTestsSuite) TestAccountDeleteRetentionPolicyDaysOmitted() { + _require := require.New(s.T()) + svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil) + _require.NoError(err) + + // Days is required if enabled is true. + enabled := true + _, err = svcClient.SetProperties(context.Background(), &service.SetPropertiesOptions{DeleteRetentionPolicy: &service.RetentionPolicy{Enabled: &enabled}}) + _require.NotNil(err) + + testcommon.ValidateBlobErrorCode(_require, err, datalakeerror.InvalidXMLDocument) +} + +func (s *ServiceRecordedTestsSuite) TestSASServiceClient() { + _require := require.New(s.T()) + testName := s.T().Name() + cred, _ := testcommon.GetGenericSharedKeyCredential(testcommon.TestAccountDatalake) + + serviceClient, err := service.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.dfs.core.windows.net/", cred.AccountName()), cred, nil) + _require.Nil(err) + + fsName := testcommon.GenerateFilesystemName(testName) + + // Note: Always set all permissions, services, types to true to ensure order of string formed is correct. 
+	resources := sas.AccountResourceTypes{
+		Object:    true,
+		Service:   true,
+		Container: true,
+	}
+	permissions := sas.AccountPermissions{
+		Read:                  true,
+		Write:                 true,
+		Delete:                true,
+		DeletePreviousVersion: true,
+		List:                  true,
+		Add:                   true,
+		Create:                true,
+		Update:                true,
+		Process:               true,
+		Tag:                   true,
+		FilterByTags:          true,
+		PermanentDelete:       true,
+	}
+	expiry := time.Now().Add(time.Hour)
+	sasUrl, err := serviceClient.GetSASURL(resources, permissions, expiry, nil)
+	_require.Nil(err)
+
+	svcClient, err := testcommon.GetServiceClientNoCredential(s.T(), sasUrl, nil)
+	_require.Nil(err)
+
+	// create fs using SAS
+	_, err = svcClient.CreateFilesystem(context.Background(), fsName, nil)
+	_require.Nil(err)
+
+	_, err = svcClient.DeleteFilesystem(context.Background(), fsName, nil)
+	_require.Nil(err)
+}
+
+func (s *ServiceRecordedTestsSuite) TestSASServiceClientNoKey() {
+	_require := require.New(s.T())
+	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
+
+	serviceClient, err := service.NewClientWithNoCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), nil)
+	_require.Nil(err)
+	resources := sas.AccountResourceTypes{
+		Object:    true,
+		Service:   true,
+		Container: true,
+	}
+	permissions := sas.AccountPermissions{
+		Read:                  true,
+		Write:                 true,
+		Delete:                true,
+		DeletePreviousVersion: true,
+		List:                  true,
+		Add:                   true,
+		Create:                true,
+		Update:                true,
+		Process:               true,
+		Tag:                   true,
+		FilterByTags:          true,
+		PermanentDelete:       true,
+	}
+
+	expiry := time.Now().Add(time.Hour)
+	_, err = serviceClient.GetSASURL(resources, permissions, expiry, nil)
+	_require.Equal(err.Error(), "SAS can only be signed with a SharedKeyCredential")
+}
+
+func (s *ServiceUnrecordedTestsSuite) TestSASServiceClientSignNegative() {
+	_require := require.New(s.T())
+	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
+	accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
+	cred, err := azdatalake.NewSharedKeyCredential(accountName, accountKey)
+	_require.Nil(err)
+
+	serviceClient, err := service.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
+	_require.Nil(err)
+	resources := sas.AccountResourceTypes{
+		Object:    true,
+		Service:   true,
+		Container: true,
+	}
+	permissions := sas.AccountPermissions{
+		Read:                  true,
+		Write:                 true,
+		Delete:                true,
+		DeletePreviousVersion: true,
+		List:                  true,
+		Add:                   true,
+		Create:                true,
+		Update:                true,
+		Process:               true,
+		Tag:                   true,
+		FilterByTags:          true,
+		PermanentDelete:       true,
+	}
+	expiry := time.Time{}
+	_, err = serviceClient.GetSASURL(resources, permissions, expiry, nil)
+	_require.Equal(err.Error(), "account SAS is missing at least one of these: ExpiryTime, Permissions, Service, or ResourceType")
+}
+
+func (s *ServiceUnrecordedTestsSuite) TestNoSharedKeyCredError() {
+	_require := require.New(s.T())
+	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
+
+	// Creating service client without credentials
+	serviceClient, err := service.NewClientWithNoCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), nil)
+	_require.Nil(err)
+
+	// Adding SAS and options
+	resources := sas.AccountResourceTypes{
+		Object:    true,
+		Service:   true,
+		Container: true,
+	}
+	permissions := sas.AccountPermissions{
+		Read:   true,
+		Add:    true,
+		Write:  true,
+		Create: true,
+		Update: true,
+		Delete: true,
+	}
+	start := time.Now().Add(-time.Hour)
+	expiry := start.Add(time.Hour)
+	opts := service.GetSASURLOptions{StartTime: &start}
+
+	// GetSASURL fails with MissingSharedKeyCredential because the service client was created without credentials
+	_, err = serviceClient.GetSASURL(resources, permissions, expiry, &opts)
+	_require.Equal(err, datalakeerror.MissingSharedKeyCredential)
+}
+
+func (s *ServiceRecordedTestsSuite) TestSASFilesystemClient() {
+	_require := require.New(s.T())
+	testName := s.T().Name()
+	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
+	accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
+	cred, err := azdatalake.NewSharedKeyCredential(accountName, accountKey)
+	_require.Nil(err)
+
+	serviceClient, err := service.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
+	_require.Nil(err)
+
+	fsName := testcommon.GenerateFilesystemName(testName)
+	fsClient := serviceClient.NewFilesystemClient(fsName)
+
+	permissions := sas.FilesystemPermissions{
+		Read: true,
+		Add:  true,
+	}
+	start := time.Now().Add(-5 * time.Minute).UTC()
+	expiry := time.Now().Add(time.Hour)
+
+	opts := filesystem.GetSASURLOptions{StartTime: &start}
+	sasUrl, err := fsClient.GetSASURL(permissions, expiry, &opts)
+	_require.Nil(err)
+
+	fsClient2, err := filesystem.NewClientWithNoCredential(sasUrl, nil)
+	_require.Nil(err)
+
+	_, err = fsClient2.Create(context.Background(), &filesystem.CreateOptions{Metadata: testcommon.BasicMetadata})
+	_require.NotNil(err)
+	testcommon.ValidateBlobErrorCode(_require, err, datalakeerror.AuthorizationFailure)
+}
+
+func (s *ServiceRecordedTestsSuite) TestSASFilesystem2() {
+	_require := require.New(s.T())
+	testName := s.T().Name()
+	accountName := os.Getenv("AZURE_STORAGE_ACCOUNT_NAME")
+	accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT_KEY")
+	cred, err := azdatalake.NewSharedKeyCredential(accountName, accountKey)
+	_require.Nil(err)
+
+	serviceClient, err := service.NewClientWithSharedKeyCredential(fmt.Sprintf("https://%s.blob.core.windows.net/", accountName), cred, nil)
+	_require.Nil(err)
+
+	fsName := testcommon.GenerateFilesystemName(testName)
+	fsClient := serviceClient.NewFilesystemClient(fsName)
+	start := time.Now().Add(-5 * time.Minute).UTC()
+	opts := filesystem.GetSASURLOptions{StartTime: &start}
+
+	sasUrlReadAdd, err := fsClient.GetSASURL(sas.FilesystemPermissions{Read: true, Add: true}, time.Now().Add(time.Hour), &opts)
+	_require.Nil(err)
+	_, err = fsClient.Create(context.Background(), &filesystem.CreateOptions{Metadata: testcommon.BasicMetadata})
+	_require.Nil(err)
+	defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient)
+
+	fsClient1, err := filesystem.NewClientWithNoCredential(sasUrlReadAdd, nil)
+	_require.Nil(err)
+
+	// filesystem metadata and properties can't be read or written with SAS auth
+	_, err = fsClient1.GetProperties(context.Background(), nil)
+	_require.Error(err)
+	testcommon.ValidateBlobErrorCode(_require, err, datalakeerror.AuthorizationFailure)
+
+	start = time.Now().Add(-5 * time.Minute).UTC()
+	opts = filesystem.GetSASURLOptions{StartTime: &start}
+
+	sasUrlRCWL, err := fsClient.GetSASURL(sas.FilesystemPermissions{Add: true, Create: true, Delete: true, List: true}, time.Now().Add(time.Hour), &opts)
+	_require.Nil(err)
+
+	fsClient2, err := filesystem.NewClientWithNoCredential(sasUrlRCWL, nil)
+	_require.Nil(err)
+
+	// filesystems can't be created, deleted, or listed with SAS auth
+	_, err = fsClient2.Create(context.Background(), nil)
+	_require.Error(err)
+	testcommon.ValidateBlobErrorCode(_require, err, datalakeerror.AuthorizationFailure)
+}
+
+func (s *ServiceRecordedTestsSuite) TestListFilesystemsBasic() {
+	_require := require.New(s.T())
+	testName := s.T().Name()
+	svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDatalake, nil)
+	_require.Nil(err)
+	md := map[string]*string{
+		"foo": to.Ptr("foovalue"),
+		"bar": to.Ptr("barvalue"),
+	}
+
+	fsName := testcommon.GenerateFilesystemName(testName)
+	fsClient := testcommon.ServiceGetFilesystemClient(fsName, svcClient)
+	_, err = fsClient.Create(context.Background(), &filesystem.CreateOptions{Metadata: md})
+	defer func(fsClient *filesystem.Client, ctx context.Context, options *filesystem.DeleteOptions) {
+		_, err := fsClient.Delete(ctx, options)
+		_require.Nil(err)
+	}(fsClient, context.Background(), nil)
+	_require.Nil(err)
+	prefix := testcommon.FilesystemPrefix
+	listOptions := service.ListFilesystemsOptions{Prefix: &prefix, Include: service.ListFilesystemsInclude{Metadata: true}}
+	pager := svcClient.NewListFilesystemsPager(&listOptions)
+
+	count := 0
+	for pager.More() {
+		resp, err := pager.NextPage(context.Background())
+		_require.Nil(err)
+
+		for _, ctnr := range resp.Filesystems {
+			_require.NotNil(ctnr.Name)
+			count++
+
+			if *ctnr.Name == fsName {
+				_require.NotNil(ctnr.Properties)
+				_require.NotNil(ctnr.Properties.LastModified)
+				_require.NotNil(ctnr.Properties.ETag)
+				_require.Equal(*ctnr.Properties.LeaseStatus, lease.StatusTypeUnlocked)
+				_require.Equal(*ctnr.Properties.LeaseState, lease.StateTypeAvailable)
+				_require.Nil(ctnr.Properties.LeaseDuration)
+				_require.Nil(ctnr.Properties.PublicAccess)
+				_require.NotNil(ctnr.Metadata)
+
+				unwrappedMeta := map[string]*string{}
+				for k, v := range ctnr.Metadata {
+					if v != nil {
+						unwrappedMeta[k] = v
+					}
+				}
+
+				_require.EqualValues(unwrappedMeta, md)
+			}
+		}
+	}
+
+	// the filesystem created above matches the prefix, so at least one result must come back
+	_require.GreaterOrEqual(count, 1)
+}
+
+func (s *ServiceRecordedTestsSuite) TestListFilesystemsBasicUsingConnectionString() {
+	_require := require.New(s.T())
+	testName := s.T().Name()
+	svcClient, err := testcommon.GetServiceClientFromConnectionString(s.T(), testcommon.TestAccountDefault, nil)
+	_require.Nil(err)
+	md := map[string]*string{
+		"foo": to.Ptr("foovalue"),
+		"bar": to.Ptr("barvalue"),
+	}
+
+	fsName := testcommon.GenerateFilesystemName(testName)
+	fsClient := testcommon.ServiceGetFilesystemClient(fsName, svcClient)
+	_, err = fsClient.Create(context.Background(), &filesystem.CreateOptions{Metadata: md})
+	defer func(fsClient *filesystem.Client, ctx context.Context, options *filesystem.DeleteOptions) {
+		_, err := fsClient.Delete(ctx, options)
+		_require.Nil(err)
+	}(fsClient, context.Background(), nil)
+	_require.Nil(err)
+	prefix := testcommon.FilesystemPrefix
+	listOptions := service.ListFilesystemsOptions{Prefix: &prefix, Include: service.ListFilesystemsInclude{Metadata: true}}
+	pager := svcClient.NewListFilesystemsPager(&listOptions)
+
+	count := 0
+	for pager.More() {
+		resp, err := pager.NextPage(context.Background())
+		_require.Nil(err)
+
+		for _, ctnr := range resp.Filesystems {
+			_require.NotNil(ctnr.Name)
+			count++
+
+			if *ctnr.Name == fsName {
+				_require.NotNil(ctnr.Properties)
+				_require.NotNil(ctnr.Properties.LastModified)
+				_require.NotNil(ctnr.Properties.ETag)
+				_require.Equal(*ctnr.Properties.LeaseStatus, lease.StatusTypeUnlocked)
+				_require.Equal(*ctnr.Properties.LeaseState, lease.StateTypeAvailable)
+				_require.Nil(ctnr.Properties.LeaseDuration)
+				_require.Nil(ctnr.Properties.PublicAccess)
+				_require.NotNil(ctnr.Metadata)
+
+				unwrappedMeta := map[string]*string{}
+				for k, v := range ctnr.Metadata {
+					if v != nil {
+						unwrappedMeta[k] = v
+					}
+				}
+
+				_require.EqualValues(unwrappedMeta, md)
+			}
+		}
+	}
+
+	// the filesystem created above matches the prefix, so at least one result must come back
+	_require.GreaterOrEqual(count, 1)
+}
+
+func (s *ServiceRecordedTestsSuite) TestListFilesystemsPaged() {
+	_require := require.New(s.T())
+	testName := s.T().Name()
+	svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDefault, nil)
+	_require.Nil(err)
+	const numFilesystems = 6
+	maxResults := int32(2)
+	const pagedFilesystemsPrefix = "azfilesystempaged"
+
+	filesystems := make([]*filesystem.Client, numFilesystems)
+	expectedResults := make(map[string]bool)
+	for i := 0; i < numFilesystems; i++ {
+		fsName := pagedFilesystemsPrefix + testcommon.GenerateFilesystemName(testName) + fmt.Sprintf("%d", i)
+		fsClient := testcommon.CreateNewFilesystem(context.Background(), _require, fsName, svcClient)
+		filesystems[i] = fsClient
+		expectedResults[fsName] = false
+	}
+
+	defer func() {
+		for i := range filesystems {
+			testcommon.DeleteFilesystem(context.Background(), _require, filesystems[i])
+		}
+	}()
+
+	prefix := pagedFilesystemsPrefix + testcommon.FilesystemPrefix
+	listOptions := service.ListFilesystemsOptions{MaxResults: &maxResults, Prefix: &prefix, Include: service.ListFilesystemsInclude{Metadata: true}}
+	count := 0
+	results := make([]service.FilesystemItem, 0)
+	pager := svcClient.NewListFilesystemsPager(&listOptions)
+
+	for pager.More() {
+		resp, err := pager.NextPage(context.Background())
+		_require.Nil(err)
+		for _, ctnr := range resp.Filesystems {
+			_require.NotNil(ctnr.Name)
+			results = append(results, *ctnr)
+			count += 1
+		}
+	}
+
+	_require.Equal(count, numFilesystems)
+	_require.Equal(len(results), numFilesystems)
+
+	// make sure each fs we see is expected
+	for _, ctnr := range results {
+		_, ok := expectedResults[*ctnr.Name]
+		_require.Equal(ok, true)
+		expectedResults[*ctnr.Name] = true
+	}
+
+	// make sure every expected fs was seen
+	for _, seen := range expectedResults {
+		_require.Equal(seen, true)
+	}
+}
+
+func (s *ServiceRecordedTestsSuite) TestAccountListFilesystemsEmptyPrefix() {
+	_require := require.New(s.T())
+	testName := s.T().Name()
+	svcClient, err := testcommon.GetServiceClient(s.T(), testcommon.TestAccountDefault, nil)
+	_require.NoError(err)
+
+	fsClient1 := testcommon.CreateNewFilesystem(context.Background(), _require, testcommon.GenerateFilesystemName(testName)+"1", svcClient)
+	defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient1)
+	fsClient2 := testcommon.CreateNewFilesystem(context.Background(), _require, testcommon.GenerateFilesystemName(testName)+"2", svcClient)
+	defer testcommon.DeleteFilesystem(context.Background(), _require, fsClient2)
+
+	count := 0
+	pager := svcClient.NewListFilesystemsPager(nil)
+
+	for pager.More() {
+		resp, err := pager.NextPage(context.Background())
+		_require.Nil(err)
+
+		for _, container := range resp.Filesystems {
+			count++
+			_require.NotNil(container.Name)
+		}
+	}
+	_require.GreaterOrEqual(count, 2)
+}
diff --git a/sdk/storage/azdatalake/service/models.go b/sdk/storage/azdatalake/service/models.go
index bca2a179f73a..7f8fb9c9bb6b 100644
--- a/sdk/storage/azdatalake/service/models.go
+++ b/sdk/storage/azdatalake/service/models.go
@@ -7,10 +7,14 @@ package service
 import (
+	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/lease"
 	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
 	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem"
 	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/sas"
+	"time"
 )
+import blobSAS "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
 
 type CreateFilesystemOptions = filesystem.CreateOptions
 
@@ -22,6 +26,9 @@ type DeleteFilesystemOptions = filesystem.DeleteOptions
 // domain) to call APIs in another domain.
 type CORSRule = service.CORSRule
 
+// StorageServiceProperties - Storage Service Properties.
+type StorageServiceProperties = service.StorageServiceProperties
+
 // RetentionPolicy - the retention policy which determines how long the associated data should persist.
 type RetentionPolicy = service.RetentionPolicy
 
@@ -34,13 +41,31 @@ type Logging = service.Logging
 
 // StaticWebsite - The properties that enable an account to host a static website.
 type StaticWebsite = service.StaticWebsite
 
+// SharedKeyCredential contains an account's name and its primary or secondary key.
+type SharedKeyCredential = exported.SharedKeyCredential
+
+// GetUserDelegationCredentialOptions contains the optional parameters for the Client.GetUserDelegationCredential method.
+type GetUserDelegationCredentialOptions = service.GetUserDelegationCredentialOptions
+
+// KeyInfo contains the KeyInfo struct.
+type KeyInfo = service.KeyInfo
+
+// UserDelegationCredential contains an account's name and its user delegation key.
+type UserDelegationCredential = exported.UserDelegationCredential
+
+// UserDelegationKey contains a user delegation key.
+type UserDelegationKey = exported.UserDelegationKey
+
 // GetPropertiesOptions contains the optional parameters for the Client.GetProperties method.
 type GetPropertiesOptions struct {
 	// placeholder for future options
 }
 
 func (o *GetPropertiesOptions) format() *service.GetPropertiesOptions {
-	return nil
+	if o == nil {
+		return nil
+	}
+	return &service.GetPropertiesOptions{}
 }
 
 // SetPropertiesOptions provides set of options for Client.SetProperties
@@ -72,7 +97,18 @@ type SetPropertiesOptions struct {
 }
 
 func (o *SetPropertiesOptions) format() *service.SetPropertiesOptions {
-	return nil
+	if o == nil {
+		return nil
+	}
+	return &service.SetPropertiesOptions{
+		CORS:                  o.CORS,
+		DefaultServiceVersion: o.DefaultServiceVersion,
+		DeleteRetentionPolicy: o.DeleteRetentionPolicy,
+		HourMetrics:           o.HourMetrics,
+		Logging:               o.Logging,
+		MinuteMetrics:         o.MinuteMetrics,
+		StaticWebsite:         o.StaticWebsite,
+	}
 }
 
 // ListFilesystemsInclude indicates what additional information the service should return with each filesystem.
@@ -84,6 +120,16 @@ type ListFilesystemsInclude struct {
 	Deleted bool
 }
 
+func (o *ListFilesystemsInclude) format() service.ListContainersInclude {
+	if o == nil {
+		return service.ListContainersInclude{}
+	}
+	return service.ListContainersInclude{
+		Metadata: o.Metadata,
+		Deleted:  o.Deleted,
+	}
+}
+
 // ListFilesystemsOptions contains the optional parameters for the Client.List method.
 type ListFilesystemsOptions struct {
 	Include ListFilesystemsInclude
@@ -92,7 +138,55 @@ type ListFilesystemsOptions struct {
 	Prefix *string
 }
 
-// TODO: Design formatter to convert to blob
+func (o *ListFilesystemsOptions) format() *service.ListContainersOptions {
+	if o == nil {
+		return nil
+	}
+	return &service.ListContainersOptions{
+		Include:    o.Include.format(),
+		Marker:     o.Marker,
+		MaxResults: o.MaxResults,
+		Prefix:     o.Prefix,
+	}
+}
 
-// SharedKeyCredential contains an account's name and its primary or secondary key.
-type SharedKeyCredential = exported.SharedKeyCredential
+// GetSASURLOptions contains the optional parameters for the Client.GetSASURL method.
+type GetSASURLOptions struct {
+	StartTime *time.Time
+}
+
+func (o *GetSASURLOptions) format(resources sas.AccountResourceTypes, permissions sas.AccountPermissions) (blobSAS.AccountResourceTypes, blobSAS.AccountPermissions, *service.GetSASURLOptions) {
+	res := blobSAS.AccountResourceTypes{
+		Service:   resources.Service,
+		Container: resources.Container,
+		Object:    resources.Object,
+	}
+	perms := blobSAS.AccountPermissions{
+		Read:    permissions.Read,
+		Write:   permissions.Write,
+		Delete:  permissions.Delete,
+		List:    permissions.List,
+		Add:     permissions.Add,
+		Create:  permissions.Create,
+		Update:  permissions.Update,
+		Process: permissions.Process,
+	}
+	if o == nil {
+		return res, perms, nil
+	}
+
+	return res, perms, &service.GetSASURLOptions{
+		StartTime: o.StartTime,
+	}
+}
+
+// listing response models
+// TODO: find another way to import these
+
+type LeaseDurationType = lease.DurationType
+
+type LeaseStateType = lease.StateType
+
+type LeaseStatusType = lease.StatusType
+
+type PublicAccessType = filesystem.PublicAccessType
diff --git a/sdk/storage/azdatalake/service/responses.go b/sdk/storage/azdatalake/service/responses.go
index e9393cbdbee3..377532f3488f 100644
--- a/sdk/storage/azdatalake/service/responses.go
+++ b/sdk/storage/azdatalake/service/responses.go
@@ -10,8 +10,11 @@ package service
 
 import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
 	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
 	"github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem"
+	"time"
 )
 
 // CreateFilesystemResponse contains the response fields for the CreateFilesystem operation.
@@ -26,5 +29,96 @@ type SetPropertiesResponse = service.SetPropertiesResponse
 
 // GetPropertiesResponse contains the response fields for the GetProperties operation.
 type GetPropertiesResponse = service.GetPropertiesResponse
 
-// ListFilesystemsResponse contains the response fields for the ListFilesystems operation.
-type ListFilesystemsResponse = service.ListContainersResponse
+// ListFilesystemsResponse contains the response fields for the ListFilesystems operation.
+type ListFilesystemsResponse struct {
+	ListFilesystemsSegmentResponse
+	// ClientRequestID contains the information returned from the x-ms-client-request-id header response.
+	ClientRequestID *string
+
+	// RequestID contains the information returned from the x-ms-request-id header response.
+	RequestID *string
+
+	// Version contains the information returned from the x-ms-version header response.
+	Version *string
+
+	blobPager *runtime.Pager[service.ListContainersResponse]
+}
+
+// ListFilesystemsSegmentResponse - An enumeration of filesystems
+type ListFilesystemsSegmentResponse struct {
+	// REQUIRED
+	Filesystems []*FilesystemItem
+
+	// REQUIRED
+	ServiceEndpoint *string
+	Marker          *string
+	MaxResults      *int32
+	NextMarker      *string
+	Prefix          *string
+}
+
+// FilesystemItem - An Azure Storage filesystem
+type FilesystemItem struct {
+	// REQUIRED
+	Name *string
+
+	// REQUIRED; Properties of a filesystem
+	Properties *FilesystemProperties
+	Deleted    *bool
+
+	// Dictionary of metadata key/value pairs
+	Metadata map[string]*string
+	Version  *string
+}
+
+// FilesystemProperties - Properties of a filesystem
+type FilesystemProperties struct {
+	// REQUIRED
+	ETag *azcore.ETag
+
+	// REQUIRED
+	LastModified           *time.Time
+	DefaultEncryptionScope *string
+	DeletedTime            *time.Time
+	HasImmutabilityPolicy  *bool
+	HasLegalHold           *bool
+
+	// Indicates if version-level WORM is enabled on this filesystem.
+	IsImmutableStorageWithVersioningEnabled *bool
+	LeaseDuration                           *LeaseDurationType
+	LeaseState                              *LeaseStateType
+	LeaseStatus                             *LeaseStatusType
+	PreventEncryptionScopeOverride          *bool
+	PublicAccess                            *PublicAccessType
+	RemainingRetentionDays                  *int32
+}
+
+// convertContainerItemsToFSItems converts blob container items to filesystem items
+func convertContainerItemsToFSItems(items []*service.ContainerItem) []*FilesystemItem {
+	var filesystemItems []*FilesystemItem
+	for _, item := range items {
+		filesystemItems = append(filesystemItems, &FilesystemItem{
+			Name: item.Name,
+			Properties: &FilesystemProperties{
+				LastModified:           item.Properties.LastModified,
+				ETag:                   item.Properties.ETag,
+				DefaultEncryptionScope: item.Properties.DefaultEncryptionScope,
+				LeaseStatus:            item.Properties.LeaseStatus,
+				LeaseState:             item.Properties.LeaseState,
+				LeaseDuration:          item.Properties.LeaseDuration,
+				PublicAccess:           item.Properties.PublicAccess,
+				HasImmutabilityPolicy:  item.Properties.HasImmutabilityPolicy,
+				HasLegalHold:           item.Properties.HasLegalHold,
+				IsImmutableStorageWithVersioningEnabled: item.Properties.IsImmutableStorageWithVersioningEnabled,
+				PreventEncryptionScopeOverride:          item.Properties.PreventEncryptionScopeOverride,
+				RemainingRetentionDays:                  item.Properties.RemainingRetentionDays,
+				DeletedTime:                             item.Properties.DeletedTime,
+			},
+			Deleted:  item.Deleted,
+			Metadata: item.Metadata,
+			Version:  item.Version,
+		})
+	}
+	return filesystemItems
+}
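Note for reviewers: every options type in this patch (GetPropertiesOptions, SetPropertiesOptions, ListFilesystemsOptions, GetSASURLOptions) follows the same nil-safe `format()` convention, where a nil options pointer maps cleanly to the blob layer's nil or zero value. The pattern can be sketched in isolation like this; the `ListOptions`/`blobListOptions` names are illustrative stand-ins, not SDK types:

```go
package main

import "fmt"

// blobListOptions stands in for the azblob-side options type (illustrative only).
type blobListOptions struct {
	Prefix *string
}

// ListOptions stands in for the azdatalake-side options type (illustrative only).
type ListOptions struct {
	Prefix *string
}

// format converts datalake-flavored options into their blob equivalents.
// Calling a method on a nil pointer receiver is legal in Go, so a nil
// *ListOptions simply passes through as a nil *blobListOptions.
func (o *ListOptions) format() *blobListOptions {
	if o == nil {
		return nil
	}
	return &blobListOptions{Prefix: o.Prefix}
}

func main() {
	var opts *ListOptions
	fmt.Println(opts.format() == nil) // nil options stay nil

	p := "azfs"
	opts = &ListOptions{Prefix: &p}
	fmt.Println(*opts.format().Prefix)
}
```

This is why the public client methods can forward `nil` options straight into the blob client without a nil check at every call site.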