Compactor is unable to delete data older than retention period #5058

Closed
sharathfeb12 opened this issue Jan 11, 2022 · 11 comments

sharathfeb12 commented Jan 11, 2022

I have been running the Thanos compactor for weeks. However, I do not see the overall size of the GCS bucket decreasing after retention was turned on. Looking further, I see that the blocks beyond the retention period do not have a deletion-mark.json added, which could be why they are not being cleaned up.

Version: v0.24.0
Here is the config:

  containers:
  - args:
    - compact
    - --wait
    - --log.level=debug
    - --log.format=logfmt
    - --objstore.config=$(OBJSTORE_CONFIG)
    - --data-dir=/var/thanos/compact
    - --retention.resolution-raw=60d
    - --retention.resolution-5m=60d
    - --retention.resolution-1h=60d
    - --delete-delay=48h
    - --compact.cleanup-interval=5m
    - --compact.progress-interval=5m
    - --wait-interval=5m
    - --deduplication.replica-label=prometheus_replica

The number of blocks pending compaction and downsampling is also going up.

(Screenshot of compactor metrics attached.)


yeya24 commented Jan 11, 2022

Are you using the new compactor progress metrics now?

The configured retention is 60d. Are you sure that your blocks are older than 60d and need to be deleted?

To catch up on compaction, you can try sharding your compactor so it does more work in parallel.
Another way is to stop your compactor first and then use the tools bucket retention + cleanup commands to delete all blocks that have exceeded their retention, as sketched below.
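As a rough sketch (using the 60d retentions from your config above; double-check the exact flags against thanos tools bucket --help for your version):

# Stop the compactor, then mark blocks beyond retention for deletion:
thanos tools bucket retention \
  --objstore.config=$(OBJSTORE_CONFIG) \
  --retention.resolution-raw=60d \
  --retention.resolution-5m=60d \
  --retention.resolution-1h=60d

# Then remove the blocks that were marked for deletion:
thanos tools bucket cleanup \
  --objstore.config=$(OBJSTORE_CONFIG) \
  --delete-delay=0s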

@zakiharis

I have the same issue. I was about to open a new issue, but I think it is better to put it here.

my compactor config:

version: v0.22.0

- "compact"
- "--wait"
- "--wait-interval=30s"
- "--consistency-delay=0s"
- "--objstore.config-file=/etc/thanos/minio-bucket.yaml"
- "--http-address=0.0.0.0:19095"
- "--retention.resolution-raw=30d"
- "--retention.resolution-5m=60d"
- "--retention.resolution-1h=183d"
- "--data-dir=./data"
- "--compact.concurrency=4"
/etc/thanos # thanos tools bucket inspect --objstore.config-file=minio-bucket.yaml
level=info ts=2022-01-14T07:21:44.573340527Z caller=factory.go:46 msg="loading bucket configuration"
level=info ts=2022-01-14T07:21:46.993680972Z caller=fetcher.go:476 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.419397183s cached=482 returned=482 partial=0
|            ULID            |        FROM         |        UNTIL        |     RANGE      |   UNTIL-DOWN    |  #SERIES   |    #SAMPLES    |   #CHUNKS   | COMP-LEVEL | COMP-FAILED |        LABELS         | RESOLUTION |  SOURCE   |
|----------------------------|---------------------|---------------------|----------------|-----------------|------------|----------------|-------------|------------|-------------|-----------------------|------------|-----------|
| 01EXNQ3FF9QXFH64Q5N95MREMB | 29-01-2021 04:16:36 | 04-02-2021 00:00:00 | 139h43m23.927s | 100h16m36.073s  | 1,874,375  | 281,525,771    | 3,614,285   | 4          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01EZ42P79GEDV72AD32B9NPR2E | 04-02-2021 00:00:00 | 18-02-2021 00:00:00 | 335h59m59.992s | -95h59m59.992s  | 7,887,792  | 2,910,089,245  | 35,078,250  | 4          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01EZ43BK2G2548TKNZNTVRBY7V | 04-02-2021 00:00:00 | 18-02-2021 00:00:00 | 335h59m59.992s | -               | 7,887,792  | 259,332,142    | 10,082,886  | 4          | false       | cluster=ntt,replica=0 | 1h0m0s     | compactor |
| 01F0DDQWP0Y5JQ3166NF4MPT42 | 18-02-2021 00:00:00 | 04-03-2021 00:00:00 | 335h59m59.965s | -295h59m59.965s | 8,984,380  | 45,405,308,236 | 390,957,910 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F0DJ8EZ4KEJ4R10T7R64HRWB | 18-02-2021 00:00:00 | 04-03-2021 00:00:00 | 335h59m59.965s | -95h59m59.965s  | 8,984,303  | 3,027,357,692  | 30,156,545  | 4          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F0DMCB8TX0WVWHJQ1XJZCN3B | 18-02-2021 00:00:00 | 04-03-2021 00:00:00 | 335h59m59.965s | -               | 8,984,303  | 269,781,212    | 11,178,363  | 4          | false       | cluster=ntt,replica=0 | 1h0m0s     | compactor |
| 01F11W2MWPD77BDEYEDB0KBDCP | 04-03-2021 00:00:00 | 18-03-2021 00:00:00 | 335h59m59.866s | -295h59m59.866s | 10,133,079 | 47,138,923,187 | 407,780,910 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F11XQVKEVHBWS2EXM12GJ0RB | 04-03-2021 00:00:00 | 18-03-2021 00:00:00 | 335h59m59.866s | -95h59m59.866s  | 10,133,002 | 3,120,472,601  | 31,702,373  | 4          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F11ZSTT0YFJKAPVVS7N9FQ3X | 04-03-2021 00:00:00 | 18-03-2021 00:00:00 | 335h59m59.866s | -               | 10,133,002 | 280,559,697    | 12,348,190  | 4          | false       | cluster=ntt,replica=0 | 1h0m0s     | compactor |
| 01F170MSKXRFH6YJQMJ9CSPNG4 | 18-03-2021 00:00:00 | 20-03-2021 00:00:00 | 47h59m59.956s  | 192h0m0.044s    | 2,980,572  | 449,726,564    | 6,040,959   | 3          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F73PW94Q4N6QSK8ZQ297PWF3 | 18-03-2021 00:00:00 | 01-04-2021 00:00:00 | 335h59m59.956s | -295h59m59.956s | 10,940,580 | 46,712,961,695 | 412,802,076 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F1C5BRV0TZ1KMYNHP9KBFQKJ | 20-03-2021 00:00:00 | 22-03-2021 00:00:00 | 47h59m59.956s  | 192h0m0.044s    | 2,628,250  | 447,459,930    | 5,696,342   | 3          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F1M0YXYDCZNAKXXCSP835DBZ | 22-03-2021 00:00:00 | 24-03-2021 00:00:00 | 47h59m59.956s  | 192h0m0.044s    | 3,340,216  | 450,213,273    | 6,383,401   | 3          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F1P80V0ZTT7AFZ22N3TN01A0 | 24-03-2021 00:00:00 | 26-03-2021 00:00:00 | 48h0m0s        | 192h0m0s        | 3,342,679  | 441,987,685    | 6,328,270   | 3          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F1VKR4S24XSSC8T25NZPQHSB | 26-03-2021 00:00:00 | 28-03-2021 00:00:00 | 47h59m59.989s  | 192h0m0.011s    | 2,968,010  | 447,988,087    | 6,025,143   | 3          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F20RGRJRKBXMTW44FCCCRJR5 | 28-03-2021 00:00:00 | 30-03-2021 00:00:00 | 47h59m59.989s  | 192h0m0.011s    | 2,899,928  | 438,061,552    | 5,885,029   | 3          | false       | cluster=ntt,replica=0 | 5m0s       | compactor |
| 01F73S02A55GPNECMH1PVKG6QJ | 01-04-2021 00:00:00 | 15-04-2021 00:00:00 | 335h59m59.989s | -295h59m59.989s | 10,133,190 | 49,951,934,756 | 442,709,704 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F74KDTSAXHF11PKMMN0HR7TW | 15-04-2021 00:00:00 | 29-04-2021 00:00:00 | 335h59m59.899s | -295h59m59.899s | 11,205,013 | 50,858,073,517 | 447,416,174 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73FF8GX6VQHN1PNX175PJXC | 29-04-2021 00:00:00 | 01-05-2021 00:00:00 | 48h0m0s        | -8h0m0s         | 3,012,812  | 7,825,758,710  | 68,438,941  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01FS9RGPH0135W468KCAASQF36 | 29-04-2021 00:00:00 | 13-05-2021 00:00:00 | 336h0m0s       | -296h0m0s       | 10,948,178 | 55,606,717,655 | 486,404,121 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73FWDFTAKMREYXZ3WMRNBQE | 01-05-2021 00:00:00 | 03-05-2021 00:00:00 | 48h0m0s        | -8h0m0s         | 2,758,266  | 7,842,703,781  | 68,298,260  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73G9ES48F26XZKFJGKNVZ20 | 03-05-2021 00:00:00 | 05-05-2021 00:00:00 | 48h0m0s        | -8h0m0s         | 3,413,699  | 7,892,795,273  | 69,214,932  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73GQBS7A22VES3K855XA55M | 05-05-2021 00:00:00 | 07-05-2021 00:00:00 | 47h59m59.899s  | -7h59m59.899s   | 3,290,991  | 7,974,969,039  | 69,855,932  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73H56R4NG6J0RV2V98W58N4 | 07-05-2021 00:00:00 | 09-05-2021 00:00:00 | 47h59m59.899s  | -7h59m59.899s   | 2,981,932  | 8,005,123,123  | 69,992,413  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73HJM7CHMJP0XXWZJ84WRBE | 09-05-2021 00:00:00 | 11-05-2021 00:00:00 | 47h59m59.899s  | -7h59m59.899s   | 3,185,908  | 8,015,823,005  | 70,075,697  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73J09WVDPFTPTMAQMZ4EYE8 | 11-05-2021 00:00:00 | 13-05-2021 00:00:00 | 47h59m59.899s  | -7h59m59.899s   | 3,220,234  | 8,049,544,724  | 70,527,946  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73JE18WMSH5CX0A4601JKJ4 | 13-05-2021 00:00:00 | 15-05-2021 00:00:00 | 48h0m0s        | -8h0m0s         | 2,815,530  | 8,014,394,090  | 70,040,773  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01FS9VCEP58CV6ZTNTT75DEABS | 13-05-2021 00:00:00 | 27-05-2021 00:00:00 | 336h0m0s       | -296h0m0s       | 11,590,819 | 56,201,875,191 | 491,146,188 | 4          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73JVBNKQKFRXSCH660ZXEED | 15-05-2021 00:00:00 | 17-05-2021 00:00:00 | 47h59m59.899s  | -7h59m59.899s   | 2,758,842  | 8,019,220,633  | 70,163,732  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73K8RXHG6JJ372HASKH894M | 17-05-2021 00:00:00 | 19-05-2021 00:00:00 | 48h0m0s        | -8h0m0s         | 3,412,661  | 8,077,460,736  | 71,174,293  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73KPRVG2NQ9SXN588Y3KQYC | 19-05-2021 00:00:00 | 21-05-2021 00:00:00 | 47h59m59.899s  | -7h59m59.899s   | 3,426,277  | 8,017,315,423  | 70,305,568  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |
| 01F73M8A5VF4E94RAEB7TM0HKG | 21-05-2021 00:00:00 | 23-05-2021 00:00:00 | 48h0m0s        | -8h0m0s         | 3,090,970  | 8,089,286,747  | 70,409,402  | 3          | false       | cluster=ntt,replica=0 | 0s         | compactor |

How am I supposed to delete the old data?


yeya24 commented Jan 14, 2022

@zakiharis Please see my comment above.
I will add a more detailed doc about ways to solve compactor issues soon.

@zakiharis

@yeya24

Unfortunately I tried your suggestion, but it still didn't delete the old blocks:

/etc/thanos # thanos tools bucket retention --objstore.config-file=minio-bucket.yaml
level=info ts=2022-01-14T07:45:30.691639227Z caller=factory.go:46 msg="loading bucket configuration"
level=info ts=2022-01-14T07:45:30.692081705Z caller=tools_bucket.go:1094 msg="syncing blocks metadata"
level=info ts=2022-01-14T07:45:33.433659053Z caller=fetcher.go:476 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=2.74153538s cached=482 returned=139 partial=0
level=info ts=2022-01-14T07:45:33.433743064Z caller=tools_bucket.go:1099 msg="synced blocks done"
level=warn ts=2022-01-14T07:45:33.433759407Z caller=tools_bucket.go:1101 msg="GLOBAL COMPACTOR SHOULD __NOT__ BE RUNNING ON THE SAME BUCKET"
level=info ts=2022-01-14T07:45:33.434885035Z caller=retention.go:31 msg="start optional retention"
level=info ts=2022-01-14T07:45:33.434941113Z caller=retention.go:46 msg="optional retention apply done"
level=info ts=2022-01-14T07:45:33.435153366Z caller=main.go:159 msg=exiting
/etc/thanos #
/etc/thanos # thanos tools bucket cleanup --objstore.config-file=minio-bucket.yaml
level=info ts=2022-01-14T07:52:22.191545458Z caller=factory.go:46 msg="loading bucket configuration"
level=info ts=2022-01-14T07:52:22.193539596Z caller=tools_bucket.go:609 msg="syncing blocks metadata"
level=info ts=2022-01-14T07:52:27.067121245Z caller=fetcher.go:476 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=4.87353279s cached=482 returned=139 partial=0
level=info ts=2022-01-14T07:52:27.067186438Z caller=tools_bucket.go:614 msg="synced blocks done"
level=info ts=2022-01-14T07:52:27.067197445Z caller=clean.go:33 msg="started cleaning of aborted partial uploads"
level=info ts=2022-01-14T07:52:27.067211369Z caller=clean.go:60 msg="cleaning of aborted partial uploads done"
level=info ts=2022-01-14T07:52:27.067217413Z caller=blocks_cleaner.go:43 msg="started cleaning of blocks marked for deletion"
level=info ts=2022-01-14T07:52:27.067287327Z caller=blocks_cleaner.go:57 msg="cleaning of blocks marked for deletion done"
level=info ts=2022-01-14T07:52:27.06729286Z caller=tools_bucket.go:621 msg="cleanup done"
level=info ts=2022-01-14T07:52:27.067524643Z caller=main.go:159 msg=exiting

I also tried the bucket mark and cleanup commands, with the same result:

/etc/thanos # thanos tools bucket mark --id=01EXNQ3FF9QXFH64Q5N95MREMB --details=DELETE --marker=deletion-mark.json --objstore.config-file=minio-bucket.yaml

After running the cleanup command, the block is still there:

[root@docker 01EXNQ3FF9QXFH64Q5N95MREMB]# ls
chunks  deletion-mark.json  index  meta.json


yeya24 commented Jan 14, 2022

(quoting @zakiharis's comment above)

You should set the retention period so blocks beyond it get marked for deletion, and then run the cleanup command to delete the old blocks.

If you want to use the cleanup command, you also need to configure the delete-delay flag: blocks are only deleted after the delete-delay period has passed since they were marked. You can set it to 0 to delete marked blocks immediately, for example:
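A minimal sketch, reusing the minio-bucket.yaml config from your commands above (check thanos tools bucket cleanup --help for the exact flag name on your version):

thanos tools bucket cleanup --objstore.config-file=minio-bucket.yaml --delete-delay=0s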

@zakiharis

@yeya24 thanks for the info on setting delete-delay to 0.

I will add a more detailed doc about ways to solve compactor issues soon.

Please do add this, thank you so much!


yeya24 commented Mar 5, 2022

Please take a look at this guide on how to troubleshoot this kind of issue: https://thanos.io/tip/operating/compactor-backlog.md/. I will close this for now; feel free to reopen it if it doesn't work for you.

yeya24 closed this as completed Mar 5, 2022

adit-cmd commented Aug 19, 2024

Hi @yeya24. I thought of opening a new issue, but I am facing the exact issue described here, so I followed your instructions at https://thanos.io/tip/operating/compactor-backlog.md/ to mark and clean up old blocks from the data bucket (S3 in my case).

I stopped the compactor pod (scaled the StatefulSet down to 0 replicas) and ran the following commands from the thanos-store pod.

Output of the retention command:

/ $ thanos tools bucket retention --objstore.config-file /tmp/obj_store.yaml

  • ts=2024-08-19T21:31:09.479116477Z caller=factory.go:53 level=info msg="loading bucket configuration"
  • ts=2024-08-19T21:31:09.479608392Z caller=tools_bucket.go:1423 level=info msg="syncing blocks metadata"
  • ts=2024-08-19T21:32:06.157314962Z caller=fetcher.go:626 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=56.677680277s duration_ms=56677 cached=17239 returned=17202 partial=0
  • ts=2024-08-19T21:32:06.157372346Z caller=tools_bucket.go:1428 level=info msg="synced blocks done"
  • ts=2024-08-19T21:32:06.157390024Z caller=tools_bucket.go:1430 level=warn msg="GLOBAL COMPACTOR SHOULD NOT BE RUNNING ON THE SAME BUCKET"
  • ts=2024-08-19T21:32:06.15936227Z caller=retention.go:32 level=info msg="start optional retention"
  • ts=2024-08-19T21:32:06.160657863Z caller=retention.go:47 level=info msg="optional retention apply done"
  • ts=2024-08-19T21:32:06.160872319Z caller=main.go:174 level=info msg=exiting

After this I was expecting blocks older than the retention period to be marked for deletion, i.e. a deletion-mark.json to be uploaded to each of those blocks.

However, I do not see the deletion marker in those blocks.

When I ran the bucket cleanup command with delete-delay 0, only the blocks that had recently been marked for deletion got deleted.

The configured retentions are as follows:

        - --retention.resolution-raw=30d
        - --retention.resolution-5m=120d
        - --retention.resolution-1h=1y
  1. However, as per my understanding, if we use delete-delay 0h, it will immediately delete the blocks irrespective of the retentions set.
  2. Also, I am not able to understand why no deletion marker was uploaded to those old blocks.

I am pretty new to thanos and would highly appreciate a response and any insight that you might have on this. Thanks in advance for all your help.


yeya24 commented Nov 12, 2024

Hey sorry for the late reply @adit-cmd,

However, as per my understanding, if we use delete-delay 0h, it will immediately delete the blocks irrespective of the retentions set.

Umm that's bizarre. The command should delete all blocks which are marked for deletion. If you don't have the deletion marker for a block then it won't be deleted.

Also, I am not able to understand why no deletion marker was uploaded to those old blocks.

I saw you tried the thanos tools bucket retention command. Did you specify any configuration for the retention period? If the retention period is specified correctly, it should add the markers as expected.
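For example (illustrative only, using the retentions from your compactor config and the bucket config path from your commands above):

thanos tools bucket retention \
  --objstore.config-file /tmp/obj_store.yaml \
  --retention.resolution-raw=30d \
  --retention.resolution-5m=120d \
  --retention.resolution-1h=1y

If those flags are left at their defaults, the retention step has nothing to apply, which would explain why no deletion markers were uploaded.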

@ashishvaishno

@yeya24 I also had the same issue, so I will close my question #7983.
How can I avoid this issue or this manual intervention?

@adit-cmd

Hi @yeya24. Apologies for the late response. Let me summarize the steps I took to resolve the issue.

We have been using Thanos since 2020. However, due to some issue (mostly on the Thanos end; I am still not sure of the root cause), the index files went missing from the ULID blocks for the years 2021 and 2022. Since the index file was missing, the compactor was unable to delete those older ULID blocks. Whenever the compactor encountered such a block, it failed with the following error:

(Screenshot of the compactor error attached.)

As the screenshot shows, the error is that ULID_BLOCK/index was not found (reason="error executing compaction..." The specified key does not exist).

The manual solution provided in the official Thanos documentation, which I tried above, only worked for recently created ULID blocks; it did not work for the ULID blocks created back in 2021/2022.

So in order to get the compactor to work as expected, I created an S3 lifecycle policy that deletes all the historic ULID blocks from S3, keeping only the ones created within the last year (roughly as sketched below). Once the 2021/2022 blocks were deleted, the compactor started working as expected.
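Roughly, the lifecycle rule looks like this (the bucket name and rule ID below are placeholders, not our real values):

# Expire every object older than ~1 year in the Thanos bucket (sketch only).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-thanos-blocks",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 365}
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-thanos-bucket \
  --lifecycle-configuration file://lifecycle.json

Note that this expires objects by upload age rather than by block time range, so it is a blunt workaround for the broken 2021/2022 blocks rather than a replacement for the compactor's own retention.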
