
Can't use S3 Sdk - SignatureDoesNotMatch #495

Closed · kallebysantos opened this issue May 31, 2024 · 10 comments
Labels
bug Something isn't working

Comments

@kallebysantos commented May 31, 2024

Bug report

  • I confirm this is a bug with Supabase, not with my own application.
  • I confirm I have searched the Docs, GitHub Discussions, and Discord.

Describe the bug

I'm trying to use the S3 SDK to perform operations on Supabase Storage, but every command I try simply doesn't work and results in a 403 SignatureDoesNotMatch error. If I call the Minio endpoint directly instead, it works.

To Reproduce

Run docker-compose.s3.yml with the following:

services:

  minio:
    image: minio/minio
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      MINIO_ROOT_USER: supa-storage
      MINIO_ROOT_PASSWORD: secret1234
    command: server --console-address ":9001" /data
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://minio:9000/minio/health/live" ]
      interval: 2s
      timeout: 10s
      retries: 5
    volumes:
      - ./volumes/storage:/data:z

  minio-createbucket:
    image: minio/mc
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc alias set supa-minio http://minio:9000 supa-storage secret1234;
      /usr/bin/mc mb supa-minio/stub;
      exit 0;
      "

  storage:
    image: supabase/storage-api:v1.3.1 #supabase/storage-api:v0.43.11
    depends_on:
      db:
        # Disable this if you are using an external Postgres database
        condition: service_healthy
      rest:
        condition: service_started
      imgproxy:
        condition: service_started
      minio:
        condition: service_healthy
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:5000/status"
        ]
      timeout: 5s
      interval: 5s
      retries: 3
    restart: unless-stopped
    environment:
      ANON_KEY: ${ANON_KEY}
      SERVICE_KEY: ${SERVICE_ROLE_KEY}
      POSTGREST_URL: http://rest:3000
      PGRST_JWT_SECRET: ${JWT_SECRET}
      DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
      FILE_SIZE_LIMIT: 52428800
      STORAGE_BACKEND: s3
      GLOBAL_S3_BUCKET: stub
      GLOBAL_S3_ENDPOINT: http://minio:9000
      GLOBAL_S3_PROTOCOL: http
      GLOBAL_S3_FORCE_PATH_STYLE: true
      S3_PROTOCOL_ACCESS_KEY_ID: supa-storage
      S3_PROTOCOL_ACCESS_KEY_SECRET: secret1234
      AWS_ACCESS_KEY_ID: supa-storage
      AWS_SECRET_ACCESS_KEY: secret1234
      AWS_DEFAULT_REGION: stub
      FILE_STORAGE_BACKEND_PATH: /var/lib/storage
      TUS_USE_FILE_VERSION_SEPARATOR: true
      TENANT_ID: stub
      IS_MULTITENANT: false
      # TODO: https://github.com/supabase/storage-api/issues/55
      REGION: stub
      STORAGE_S3_REGION: stub
      ENABLE_IMAGE_TRANSFORMATION: "true"
      IMGPROXY_URL: http://imgproxy:5001
    volumes:
      - ./supabase/volumes/storage:/var/lib/storage:z

  imgproxy:
    image: darthsim/imgproxy:v3.8.0
    healthcheck:
      test: [ "CMD", "imgproxy", "health" ]
      timeout: 5s
      interval: 5s
      retries: 3
    environment:
      IMGPROXY_BIND: ":5001"
      IMGPROXY_USE_ETAG: "true"
      IMGPROXY_ENABLE_WEBP_DETECTION: ${IMGPROXY_ENABLE_WEBP_DETECTION}

Then I run the following code, copied from the supabase storage tests:

  // Imports assumed for this repro (Deno, npm specifiers):
  import { randomUUID } from "node:crypto";
  import { S3Client, CreateBucketCommand } from "npm:@aws-sdk/client-s3";

  const s3Client = new S3Client({
    logger: console,
    forcePathStyle: true,
    region: "stub",
    endpoint: "http://localhost:5000/storage/v1/s3", // -> Doesn't work
    // endpoint: "http://localhost:9000", // -> It works
    credentials: {
      accessKeyId: "supa-storage",
      secretAccessKey: "secret1234",
    },
  });

  const createBucketRequest = new CreateBucketCommand({
    Bucket: randomUUID(),
    ACL: "public-read",
  });

  const { Location } = await s3Client.send(createBucketRequest);
  console.log("Location", Location);

Current behaviour

Application Logs:
endpoints Initial EndpointParams: {
  "ForcePathStyle": true,
  "UseArnRegion": false,
  "DisableMultiRegionAccessPoints": false,
  "Accelerate": false,
  "DisableS3ExpressSessionAuth": false,
  "UseGlobalEndpoint": false,
  "UseFIPS": false,
  "Endpoint": "http://localhost:5000/storage/v1/s3",
  "Region": "stub",
  "UseDualStack": false,
  "UseS3ExpressControlEndpoint": true,
  "DisableAccessPoints": true,
  "Bucket": "a25af482-0a92-4036-aef3-1a9125389070"
}
endpoints evaluateCondition: isSet($Region) = true
endpoints evaluateCondition: booleanEquals($Accelerate, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: booleanEquals($UseFIPS, true) = false
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: booleanEquals($Accelerate, true) = false
endpoints evaluateCondition: booleanEquals($UseFIPS, true) = false
endpoints evaluateCondition: isSet($Bucket) = true
endpoints evaluateCondition: substring($Bucket, 0, 6, true) = 389070
endpoints assign: bucketSuffix := 389070
endpoints evaluateCondition: stringEquals($bucketSuffix, --x-s3) = false
endpoints evaluateCondition: not(isSet($Bucket)) = false
endpoints evaluateCondition: isSet($Bucket) = true
endpoints evaluateCondition: substring($Bucket, 49, 50, true) = null
endpoints evaluateCondition: isSet($Bucket) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: not(isSet(parseURL($Endpoint))) = false
endpoints evaluateCondition: booleanEquals($ForcePathStyle, false) = false
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints evaluateCondition: stringEquals(getAttr($url, scheme), http) = true
endpoints evaluateCondition: aws.isVirtualHostableS3Bucket($Bucket, true) = true
endpoints evaluateCondition: booleanEquals($ForcePathStyle, false) = false
endpoints evaluateCondition: booleanEquals($ForcePathStyle, false) = false
endpoints evaluateCondition: substring($Bucket, 0, 4, false) = a25a
endpoints assign: arnPrefix := a25a
endpoints evaluateCondition: stringEquals($arnPrefix, arn:) = false
endpoints evaluateCondition: booleanEquals($ForcePathStyle, true) = true
endpoints evaluateCondition: aws.parseArn($Bucket) = null
endpoints evaluateCondition: uriEncode($Bucket) = a25af482-0a92-4036-aef3-1a9125389070
endpoints assign: uri_encoded_bucket := a25af482-0a92-4036-aef3-1a9125389070
endpoints evaluateCondition: aws.partition($Region) = {
  "dnsSuffix": "amazonaws.com",
  "dualStackDnsSuffix": "api.aws",
  "implicitGlobalRegion": "us-east-1",
  "name": "aws",
  "supportsDualStack": true,
  "supportsFIPS": true
}
endpoints assign: partitionResult := {
  "dnsSuffix": "amazonaws.com",
  "dualStackDnsSuffix": "api.aws",
  "implicitGlobalRegion": "us-east-1",
  "name": "aws",
  "supportsDualStack": true,
  "supportsFIPS": true
}
endpoints evaluateCondition: booleanEquals($Accelerate, false) = true
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: not(isSet($Endpoint)) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: not(isSet($Endpoint)) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: not(isSet($Endpoint)) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints evaluateCondition: booleanEquals($UseFIPS, false) = true
endpoints evaluateCondition: stringEquals($Region, aws-global) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints evaluateCondition: booleanEquals($UseFIPS, false) = true
endpoints evaluateCondition: not(stringEquals($Region, aws-global)) = true
endpoints evaluateCondition: booleanEquals($UseGlobalEndpoint, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:5000",
  "path": "/storage/v1/s3",
  "normalizedPath": "/storage/v1/s3/",
  "isIp": false
}
endpoints evaluateCondition: booleanEquals($UseFIPS, false) = true
endpoints evaluateCondition: not(stringEquals($Region, aws-global)) = true
endpoints evaluateCondition: booleanEquals($UseGlobalEndpoint, false) = true
endpoints Resolving endpoint from template: {
  "url": "{url#scheme}://{url#authority}{url#normalizedPath}{uri_encoded_bucket}",
  "properties": {
    "authSchemes": [
      {
        "disableDoubleEncoding": true,
        "name": "sigv4",
        "signingName": "s3",
        "signingRegion": "{Region}"
      }
    ]
  },
  "headers": {}
}
endpoints Resolved endpoint: {
  "headers": {},
  "properties": {
    "authSchemes": [
      {
        "disableDoubleEncoding": true,
        "name": "sigv4",
        "signingName": "s3",
        "signingRegion": "stub"
      }
    ]
  },
  "url": "http://localhost:5000/storage/v1/s3/a25af482-0a92-4036-aef3-1a9125389070"
}
{
  clientName: "S3Client",
  commandName: "CreateBucketCommand",
  input: {
    Bucket: "a25af482-0a92-4036-aef3-1a9125389070",
    ACL: "public-read"
  },
  error: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
    at throwDefaultError (file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/smithy-client/3.0.1/dist-cjs/index.js:838:20)
    at file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/smithy-client/3.0.1/dist-cjs/index.js:847:5
    at de_CommandError (file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@aws-sdk/client-s3/3.583.0_1/dist-cjs/index.js:4748:14)
    at Object.runMicrotasks (ext:core/01_core.js:642:26)
    at processTicksAndRejections (ext:deno_node/_next_tick.ts:39:10)
    at runNextTicks (ext:deno_node/_next_tick.ts:48:3)
    at eventLoopTick (ext:core/01_core.js:175:21)
    at async file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/middleware-serde/3.0.0/dist-cjs/index.js:35:20
    at async file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@aws-sdk/middleware-signing/3.577.0/dist-cjs/index.js:226:18
    at async file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/middleware-retry/3.0.1/dist-cjs/index.js:320:38 {
    name: "SignatureDoesNotMatch",
    "$fault": "client",
    "$metadata": {
      httpStatusCode: 403,
      requestId: undefined,
      extendedRequestId: undefined,
      cfId: undefined,
      attempts: 1,
      totalRetryDelay: 0
    },
    Resource: "a25af482-0a92-4036-aef3-1a9125389070",
    Code: "SignatureDoesNotMatch",
    message: "The request signature we calculated does not match the signature you provided. Check your key and si"... 13 more characters
  },
  metadata: {
    httpStatusCode: 403,
    requestId: undefined,
    extendedRequestId: undefined,
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  }
}
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
    at throwDefaultError (file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/smithy-client/3.0.1/dist-cjs/index.js:838:20)
    at file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/smithy-client/3.0.1/dist-cjs/index.js:847:5
    at de_CommandError (file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@aws-sdk/client-s3/3.583.0_1/dist-cjs/index.js:4748:14)
    at Object.runMicrotasks (ext:core/01_core.js:642:26)
    at processTicksAndRejections (ext:deno_node/_next_tick.ts:39:10)
    at runNextTicks (ext:deno_node/_next_tick.ts:48:3)
    at eventLoopTick (ext:core/01_core.js:175:21)
    at async file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/middleware-serde/3.0.0/dist-cjs/index.js:35:20
    at async file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@aws-sdk/middleware-signing/3.577.0/dist-cjs/index.js:226:18
    at async file:///home/kalleby/.cache/deno/npm/registry.npmjs.org/@smithy/middleware-retry/3.0.1/dist-cjs/index.js:320:38 {
  name: "SignatureDoesNotMatch",
  "$fault": "client",
  "$metadata": {
    httpStatusCode: 403,
    requestId: undefined,
    extendedRequestId: undefined,
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  },
  Resource: "a25af482-0a92-4036-aef3-1a9125389070",
  Code: "SignatureDoesNotMatch",
  message: "The request signature we calculated does not match the signature you provided. Check your key and si"... 13 more characters
}
Supabase Storage Logs:
{
  "level": 40,
  "time": "2024-05-31T09:22:39.510Z",
  "pid": 1,
  "hostname": "eb7efc6421e9",
  "reqId": "req-2",
  "tenantId": "stub",
  "project": "stub",
  "type": "request",
  "req": {
    "region": "stub",
    "traceId": "req-2",
    "method": "PUT",
    "url": "/s3/a25af482-0a92-4036-aef3-1a9125389070/",
    "headers": {
      "host": "storage:5000",
      "x_forwarded_proto": "http",
      "x_forwarded_host": "localhost",
      "x_forwarded_port": "8000",
      "x_real_ip": "172.18.0.1",
      "content_length": "186",
      "content_type": "application/xml",
      "user_agent": "aws-sdk-js/3.583.0 ua/2.0 os/linux#5.15.146.1-microsoft-standard-WSL2 lang/js md/nodejs#20.11.1 api/s3#3.583.0",
      "accept": "*/*"
    },
    "hostname": "storage:5000",
    "remoteAddress": "172.18.0.13",
    "remotePort": 55906
  },
  "res": {
    "statusCode": 403,
    "headers": {
      "content_type": "application/xml; charset=utf-8",
      "content_length": "293"
    }
  },
  "responseTime": 7.990687999874353,
  "error": {
    "raw": "{\"metadata\":{},\"code\":\"SignatureDoesNotMatch\",\"httpStatusCode\":403,\"userStatusCode\":400}",
    "name": "Error",
    "message": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
    "stack": "Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n    at Object.SignatureDoesNotMatch (/app/dist/storage/errors.js:107:41)\n    at Object.<anonymous> (/app/dist/http/plugins/signature-v4.js:36:36)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
  },
  "resources": [
    "/a25af482-0a92-4036-aef3-1a9125389070"
  ],
  "msg": "stub | PUT | 403 | 172.18.0.13 | req-2 | /s3/a25af482-0a92-4036-aef3-1a9125389070/ | aws-sdk-js/3.583.0 ua/2.0 os/linux#5.15.146.1-microsoft-standard-WSL2 lang/js md/nodejs#20.11.1 api/s3#3.583.0"
}

Expected behavior - Calling :9000 (Minio)

Application Logs:
endpoints Initial EndpointParams: {
  "ForcePathStyle": true,
  "UseArnRegion": false,
  "DisableMultiRegionAccessPoints": false,
  "Accelerate": false,
  "DisableS3ExpressSessionAuth": false,
  "UseGlobalEndpoint": false,
  "UseFIPS": false,
  "Endpoint": "http://localhost:9000/",
  "Region": "stub",
  "UseDualStack": false,
  "UseS3ExpressControlEndpoint": true,
  "DisableAccessPoints": true,
  "Bucket": "a9766c7a-e0ab-4a25-a5fe-ad7279813895"
}
endpoints evaluateCondition: isSet($Region) = true
endpoints evaluateCondition: booleanEquals($Accelerate, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: booleanEquals($UseFIPS, true) = false
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: booleanEquals($Accelerate, true) = false
endpoints evaluateCondition: booleanEquals($UseFIPS, true) = false
endpoints evaluateCondition: isSet($Bucket) = true
endpoints evaluateCondition: substring($Bucket, 0, 6, true) = 813895
endpoints assign: bucketSuffix := 813895
endpoints evaluateCondition: stringEquals($bucketSuffix, --x-s3) = false
endpoints evaluateCondition: not(isSet($Bucket)) = false
endpoints evaluateCondition: isSet($Bucket) = true
endpoints evaluateCondition: substring($Bucket, 49, 50, true) = null
endpoints evaluateCondition: isSet($Bucket) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: not(isSet(parseURL($Endpoint))) = false
endpoints evaluateCondition: booleanEquals($ForcePathStyle, false) = false
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints evaluateCondition: stringEquals(getAttr($url, scheme), http) = true
endpoints evaluateCondition: aws.isVirtualHostableS3Bucket($Bucket, true) = true
endpoints evaluateCondition: booleanEquals($ForcePathStyle, false) = false
endpoints evaluateCondition: booleanEquals($ForcePathStyle, false) = false
endpoints evaluateCondition: substring($Bucket, 0, 4, false) = a976
endpoints assign: arnPrefix := a976
endpoints evaluateCondition: stringEquals($arnPrefix, arn:) = false
endpoints evaluateCondition: booleanEquals($ForcePathStyle, true) = true
endpoints evaluateCondition: aws.parseArn($Bucket) = null
endpoints evaluateCondition: uriEncode($Bucket) = a9766c7a-e0ab-4a25-a5fe-ad7279813895
endpoints assign: uri_encoded_bucket := a9766c7a-e0ab-4a25-a5fe-ad7279813895
endpoints evaluateCondition: aws.partition($Region) = {
  "dnsSuffix": "amazonaws.com",
  "dualStackDnsSuffix": "api.aws",
  "implicitGlobalRegion": "us-east-1",
  "name": "aws",
  "supportsDualStack": true,
  "supportsFIPS": true
}
endpoints assign: partitionResult := {
  "dnsSuffix": "amazonaws.com",
  "dualStackDnsSuffix": "api.aws",
  "implicitGlobalRegion": "us-east-1",
  "name": "aws",
  "supportsDualStack": true,
  "supportsFIPS": true
}
endpoints evaluateCondition: booleanEquals($Accelerate, false) = true
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: not(isSet($Endpoint)) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: not(isSet($Endpoint)) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: not(isSet($Endpoint)) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints evaluateCondition: booleanEquals($UseFIPS, false) = true
endpoints evaluateCondition: stringEquals($Region, aws-global) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints evaluateCondition: booleanEquals($UseFIPS, false) = true
endpoints evaluateCondition: not(stringEquals($Region, aws-global)) = true
endpoints evaluateCondition: booleanEquals($UseGlobalEndpoint, true) = false
endpoints evaluateCondition: booleanEquals($UseDualStack, false) = true
endpoints evaluateCondition: isSet($Endpoint) = true
endpoints evaluateCondition: parseURL($Endpoint) = {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints assign: url := {
  "scheme": "http",
  "authority": "localhost:9000",
  "path": "/",
  "normalizedPath": "/",
  "isIp": false
}
endpoints evaluateCondition: booleanEquals($UseFIPS, false) = true
endpoints evaluateCondition: not(stringEquals($Region, aws-global)) = true
endpoints evaluateCondition: booleanEquals($UseGlobalEndpoint, false) = true
endpoints Resolving endpoint from template: {
  "url": "{url#scheme}://{url#authority}{url#normalizedPath}{uri_encoded_bucket}",
  "properties": {
    "authSchemes": [
      {
        "disableDoubleEncoding": true,
        "name": "sigv4",
        "signingName": "s3",
        "signingRegion": "{Region}"
      }
    ]
  },
  "headers": {}
}
endpoints Resolved endpoint: {
  "headers": {},
  "properties": {
    "authSchemes": [
      {
        "disableDoubleEncoding": true,
        "name": "sigv4",
        "signingName": "s3",
        "signingRegion": "stub"
      }
    ]
  },
  "url": "http://localhost:9000/a9766c7a-e0ab-4a25-a5fe-ad7279813895"
}
{
  clientName: "S3Client",
  commandName: "CreateBucketCommand",
  input: {
    Bucket: "a9766c7a-e0ab-4a25-a5fe-ad7279813895",
    ACL: "public-read"
  },
  output: { Location: "/a9766c7a-e0ab-4a25-a5fe-ad7279813895" },
  metadata: {
    httpStatusCode: 200,
    requestId: "17D4884674C7FD41",
    extendedRequestId: "dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8",
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  }
}
Location /a9766c7a-e0ab-4a25-a5fe-ad7279813895
Minio dashboard:

[screenshot]

System information

  • OS: Windows + WSL2

Additional context

In the example provided I use CreateBucketCommand, but the same occurs when I try to upload multipart documents, like the following snippet copied from the Supabase docs:

Uploading to Supabase Storage with S3Client
// Imports assumed for this snippet (Deno, npm specifiers):
import { S3Client } from "npm:@aws-sdk/client-s3";
import { Upload } from "npm:@aws-sdk/lib-storage";

export const handler: Deno.ServeHandler = async (req: Request, _info) => {
  const s3Client = new S3Client({
    logger: console,
    forcePathStyle: true,
    region: "stub",
    endpoint: "http://localhost:5000/storage/v1/s3",
    // endpoint: "http://localhost:9000",
    credentials: {
      accessKeyId: "supa-storage",
      secretAccessKey: "secret1234",
    },
  });

  const file = await req.blob();

  const uploader = new Upload({
    client: s3Client,
    params: {
      Bucket: "stub",
      Key: "mypdf.pdf",
      ContentType: "application/pdf",
      Body: file.stream(),
    },
  });

  uploader.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });

  await uploader.done();

  console.log("Uploaded");

  return new Response(null, { status: 201 });
};

I can upload from the Supabase dashboard or by changing the endpoint to Minio directly, but using the S3 SDK against supabase storage always returns the signature error. I also tried the older version supabase/storage-api:v0.43.11, but it gives me 404 errors.

@kallebysantos kallebysantos added the bug Something isn't working label May 31, 2024
@fenos (Contributor) commented Jun 3, 2024

Hello,
for a self-hosted Storage setup you'll need to pass the following env vars, which set the S3 credentials that you should use in your client:

https://github.com/supabase/storage/blob/master/docker-compose.yml#L46-L47
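
For reference, in the compose file above those credentials presumably correspond to these lines (example values from this repro, not real secrets):

S3_PROTOCOL_ACCESS_KEY_ID: supa-storage
S3_PROTOCOL_ACCESS_KEY_SECRET: secret1234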

Let me know if that works

@kallebysantos (Author) commented Jun 4, 2024

Hello, for a self-hosted Storage setup you'll need to pass the following env vars, which set the S3 credentials that you should use in your client:

https://github.com/supabase/storage/blob/master/docker-compose.yml#L46-L47

Let me know if that works

If you check the docker-compose that I provided above, it already has these vars. The problem is that when I call supabase-storage from the aws-sdk it throws an error.
Should I change my secrets to some specific value? The documentation is not clear about self-hosted projects.
How did you get these values? Do I need to create them from the Minio dashboard?

@fenos (Contributor) commented Jun 4, 2024

Are you running storage behind a reverse proxy?

If yes, you'd need to add the S3_PROTOCOL_PREFIX=/storage/v1 environment variable.
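
In the storage service environment that looks like this (a one-line sketch, matching the compose file in this issue):

S3_PROTOCOL_PREFIX: /storage/v1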

@fenos (Contributor) commented Jun 4, 2024

If you are not, don't set S3_PROTOCOL_PREFIX, and the URL you use to connect to Storage shouldn't contain /storage/v1; use http://localhost:5000/s3, for example:

const s3Client = new S3Client({
  logger: console,
  forcePathStyle: true,
  region: "stub",
  endpoint: "http://localhost:5000/s3",
  // endpoint: "http://localhost:9000",
  credentials: {
    accessKeyId: "supa-storage",
    secretAccessKey: "secret1234",
  },
});

@kallebysantos (Author) commented Jun 4, 2024

Are you running storage behind a reverse proxy?

If yes, you'd need to add the S3_PROTOCOL_PREFIX=/storage/v1 environment variable

I'm running the default self-host template from the supabase repo, which uses Kong as the API gateway. I just cloned the repo and tried to call it with @aws-sdk.

Following your instructions I added the following to my docker-compose.s3.yml file in the storage service:

# ...
S3_PROTOCOL_PREFIX: /storage/v1 # Added
# ...
S3_PROTOCOL_ACCESS_KEY_ID: supa-storage 
S3_PROTOCOL_ACCESS_KEY_SECRET: secret1234
AWS_ACCESS_KEY_ID: supa-storage
AWS_SECRET_ACCESS_KEY: secret1234

and then made the request with:

  const s3Client = new S3Client({
    logger: console,
    forcePathStyle: true,
    region: "stub",
    endpoint: "http://localhost:5000/storage/v1/s3",
    credentials: {
      accessKeyId: "supa-storage",
      secretAccessKey: "secret1234",
    },
  });

  const createBucketRequest = new CreateBucketCommand({
    Bucket: "test-bucket",
    ACL: "public-read",
  });

But I still got the same error.

Storage container logs
{
  "level": 40,
  "time": "2024-06-04T11:41:52.996Z",
  "pid": 1,
  "hostname": "6acfe7891859",
  "reqId": "req-1",
  "tenantId": "stub",
  "project": "stub",
  "type": "request",
  "req": {
    "region": "stub",
    "traceId": "req-1",
    "method": "PUT",
    "url": "/s3/test-bucket/",
    "headers": {
      "host": "storage:5000",
      "x_forwarded_proto": "http",
      "x_forwarded_host": "localhost",
      "x_forwarded_port": "8000",
      "x_real_ip": "172.20.0.1",
      "content_length": "186",
      "content_type": "application/xml",
      "user_agent": "aws-sdk-js/3.583.0 ua/2.0 os/linux#5.15.146.1-microsoft-standard-WSL2 lang/js md/nodejs#20.11.1 api/s3#3.583.0",
      "accept": "*/*"
    },
    "hostname": "storage:5000",
    "remoteAddress": "172.20.0.13",
    "remotePort": 39424
  },
  "res": {
    "statusCode": 403,
    "headers": {
      "content_type": "application/xml; charset=utf-8",
      "content_length": "268"
    }
  },
  "responseTime": 1094.2835690006614,
  "error": {
    "raw": "{\"metadata\":{},\"code\":\"SignatureDoesNotMatch\",\"httpStatusCode\":403,\"userStatusCode\":400}",
    "name": "Error",
    "message": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
    "stack": "Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n    at Object.SignatureDoesNotMatch (/app/dist/storage/errors.js:107:41)\n    at Object.<anonymous> (/app/dist/http/plugins/signature-v4.js:36:36)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
  },
  "resources": [
    "/test-bucket"
  ],
  "msg": "stub | PUT | 403 | 172.20.0.13 | req-1 | /s3/test-bucket/ | aws-sdk-js/3.583.0 ua/2.0 os/linux#5.15.146.1-microsoft-standard-WSL2 lang/js md/nodejs#20.11.1 api/s3#3.583.0"
}

@fenos (Contributor) commented Jun 4, 2024

Right! If you use Kong you'll need to add this snippet to the storage route:
https://github.com/supabase/cli/blob/develop/internal/start/templates/kong.yml#L111-L115
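
The relevant part is a request-transformer plugin that injects a Forwarded header carrying the /storage/v1 prefix; roughly this (a sketch, not the exact template):

plugins:
  - name: cors
  - name: request-transformer
    config:
      add:
        headers:
          - "Forwarded: host=$(headers.host)/storage/v1;proto=http"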

@fenos (Contributor) commented Jun 6, 2024

Just to be more specific, you'll need to update this line in the supabase repo you have linked: https://github.com/supabase/supabase/blob/master/docker/volumes/api/kong.yml#L64

The host will be the host you use to access Kong (in development that will be localhost), and the port will be the port you access Kong on, e.g. 5000.

@kallebysantos let me know if this works for you; I'm quite confident it will, since we use the same setup in the supabase cli.

@kallebysantos (Author) commented Jun 6, 2024

Just to be more specific, you'll need to update this line in the supabase repo you have linked: https://github.com/supabase/supabase/blob/master/docker/volumes/api/kong.yml#L64

The host will be the host you use to access Kong (in development that will be localhost), and the port will be the port you access Kong on, e.g. 5000.

@kallebysantos let me know if this works for you; I'm quite confident it will, since we use the same setup in the supabase cli.

So I need to update both the storage and auth sections, right?

I followed the other steps you provided but it didn't work; I'll try to redo everything again on a fresh installation of supabase.

Thank you, I'll let you know 🙏

I just don't have much time right now; a guy on my team decided not to move on with supabase for this project 😭😭 and I need to redo everything I spent a whole month on in another stack.

But this s3 feature will be very useful in other projects.

@fenos (Contributor) commented Jun 7, 2024

So I need to update both the storage and auth sections, right?

Only the storage one.

It will work, since we use the same setup in the supabase cli.

I'll be closing this issue for now; however, if you need any more help on this, please feel free to comment below and I can re-open it.

fenos closed this as completed Jun 7, 2024
@Obeyed commented Jul 12, 2024

Hey @fenos

I'm self-hosting and tried to follow the steps you wrote above, but I'm still seeing the SignatureDoesNotMatch error:

{
  "raw": "{\"metadata\":{},\"code\":\"SignatureDoesNotMatch\",\"httpStatusCode\":403,\"userStatusCode\":400}",
  "name": "Error",
  "message": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
  "stack": "Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n    at Object.SignatureDoesNotMatch (/app/dist/internal/errors/codes.js:140:39)\n    at Object.<anonymous> (/app/dist/http/plugins/signature-v4.js:72:34)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
}

Standard and Resumable uploads work.

At one point I was getting the following error:

Missing S3 Protocol Access Key ID or Secret Key Environment variables

But that seems to have been my mistake with not restarting the containers properly, because I'm not seeing it anymore. Now I get the SignatureDoesNotMatch error.

Any pointers on what I could be doing wrong?

Is there anything more I can share that could help?

Storage config

These are the environment variables for the storage container:

STORAGE_BACKEND: s3

# Removing this seems to have no effect
# S3_PROTOCOL_PREFIX: /storage/v1
S3_PROTOCOL_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
S3_PROTOCOL_ACCESS_KEY_SECRET: ${AWS_SECRET_ACCESS_KEY}

AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

AWS_DEFAULT_REGION: ${S3_REGION}
REGION: ${S3_REGION}

GLOBAL_S3_BUCKET: ${S3_GLOBAL_S3_BUCKET}

TENANT_ID: ${TENANT_ENVIRONMENT}
IS_MULTITENANT: "false"

TUS_URL_PATH: /upload/resumable
TUS_URL_EXPIRY_MS: 86400000
UPLOAD_SIGNED_URL_EXPIRATION_TIME: 86400000

Kong config

My kong config for the storage-v1 path is as follows:

Notice the /storage/v1 in "Forwarded: host=$(headers.host)/storage/v1;proto=http".

## Storage routes: the storage server manages its own auth
- name: storage-v1
  _comment: "Storage: /storage/v1/* -> http://storage:5000/*"
  connect_timeout: 60
  write_timeout: 3600
  read_timeout: 3600
  url: http://storage:5000/
  routes:
    - name: storage-v1-all
      strip_path: true
      paths:
        - /storage/v1/
      request_buffering: false
      response_buffering: false
  plugins:
    - name: cors
    - name: request-transformer
      config:
        add:
          headers:
            - "Forwarded: host=$(headers.host)/storage/v1;proto=http"

Python Boto3

Here's an example of how I'm doing the upload from a client application with Python and boto3. I couldn't find anything about using Python in the docs, so there was a lot of guessing and trying.

import os
import boto3
from botocore.config import Config

# kong gateway url
gateway_uri = os.getenv("GATEWAY_URI")

s3_client = boto3.client(
    "s3",
    region_name="eu-west-1",
    # assuming this requires the `/storage/v1` prefix, otherwise kong can't route it
    endpoint_url=f"{gateway_uri}/storage/v1/s3",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    # assuming this is how I'd set forcePathStyle as described in the docs
    config=Config(s3={"addressing_style": "path"}),
)

# then do an upload
s3_client.upload_file(
    Filename=file_path,
    Bucket=bucket_name,
    Key=remote_file_path,
)
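
A quick way to exercise the same SigV4 signing path without an upload (a minimal sketch using the same client as above):

# any signed call hits the server-side signature check first
response = s3_client.list_buckets()
print([b["Name"] for b in response.get("Buckets", [])])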
