BUG: data too long issue rendering #923
Thanks for submitting this issue! There are two things that cause this bug:
We used tar + gz to create some of these resources, and that might be necessary here... unless even that is too big.
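For reference, a minimal Go sketch of the tar + gz approach mentioned above; the file names and contents are hypothetical placeholders, not the actual resources involved:

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"os"
)

// manifests maps hypothetical file names to contents; in practice these would be
// the rendered resources that are too large to ship uncompressed.
var manifests = map[string][]byte{
	"crds.yaml":      []byte("# ...large CRD manifests..."),
	"manifests.yaml": []byte("# ...other resources..."),
}

func main() {
	out, err := os.Create("resources.tar.gz")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	gw := gzip.NewWriter(out)
	tw := tar.NewWriter(gw)

	for name, data := range manifests {
		hdr := &tar.Header{Name: name, Mode: 0o644, Size: int64(len(data))}
		if err := tw.WriteHeader(hdr); err != nil {
			panic(err)
		}
		if _, err := tw.Write(data); err != nil {
			panic(err)
		}
	}

	// Close the tar writer first, then the gzip writer, so both footers are flushed.
	if err := tw.Close(); err != nil {
		panic(err)
	}
	if err := gw.Close(); err != nil {
		panic(err)
	}
	fmt.Println("wrote resources.tar.gz")
}
```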
We have a couple options to choose from (thanks @varshaprasad96 and @komish!):
For (2) we do that for the test registry:
It seems really strange to me that we are offloading CRD creation/management to Helm. Should we possibly look into a 4th option of managing the CRDs ourselves, or is that out of scope (effort/timeline)?
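To make that "4th option" concrete, here is a minimal sketch of managing a CRD ourselves with client-go rather than through the chart; the kubeconfig wiring, the crd.yaml file name, and the create-or-update logic are illustrative assumptions, not how the project actually does (or would do) this:

```go
package main

import (
	"context"
	"os"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/yaml"
)

// applyCRD creates the CRD if it does not exist and updates it otherwise,
// roughly what `kubectl apply -f crd.yaml` would do for this one resource.
func applyCRD(ctx context.Context, c apiextclient.Interface, crd *apiextv1.CustomResourceDefinition) error {
	existing, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, crd.Name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = c.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
		return err
	}
	if err != nil {
		return err
	}
	// Carry over the resource version so the update is accepted by the API server.
	crd.ResourceVersion = existing.ResourceVersion
	_, err = c.ApiextensionsV1().CustomResourceDefinitions().Update(ctx, crd, metav1.UpdateOptions{})
	return err
}

func main() {
	// Hypothetical wiring: kubeconfig from the default location, CRD from a local file.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)

	raw, err := os.ReadFile("crd.yaml")
	if err != nil {
		panic(err)
	}
	var crd apiextv1.CustomResourceDefinition
	if err := yaml.Unmarshal(raw, &crd); err != nil {
		panic(err)
	}
	if err := applyCRD(context.Background(), client, &crd); err != nil {
		panic(err)
	}
}
```

Server-side apply (or shelling out to kubectl apply) would be an equivalent way to do the same step.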
Adding some thoughts based on initial findings and discussion with @itroyano:

Option 3:

Option 2: The reason it works in the e2e, I think, is because Kaniko uncompresses the tarball from its context. Based on a quick glance, when we provide a tarball as the context for Kaniko, it extracts the contents of the tarball and then proceeds to build the Docker image using the extracted files. Which means the manifests that are passed for the chart's creation are still uncompressed. The other option here, as alluded to by @joelanford, was to reimplement the whole secret driver - the code available here: https://github.com/helm/helm/blob/1a500d5625419a524fdae4b33de351cc4f58ec35/pkg/storage/driver/secrets.go. Re-implementing it with additional compression, or even sharding, would probably take more effort for us, and maintaining it could be an additional problem.

Option 1: Helm by default does not manage the lifecycle of CRDs. If the CRDs are stored in a separate …
a. Handle CRDs on our own - with a kubectl apply
Implementing (a) and (b) are synonymous imo. The only thing is - there shouldn't be an edge case where the CRDs themselves exceed the allowable size for Helm. I'm not sure that would even be a best practice (with the other practical concerns such huge CRDs have on performance, caching, probably etcd limits, etc.). Both of these methods - which make us manage CRDs separately from the manifests - bring in two concerns:
Both options (2) and (3) (i.e. reimplementing the secret driver or separating out the CRDs into another chart) come with their own maintenance challenges. The decision is to choose the one that is easier for us to implement and manage.
I tinkered with this today and came up with a custom driver in helm-operator-plugins that:

In theory, it could handle up to …
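To illustrate the general idea behind such a driver (this is not the actual helm-operator-plugins implementation, and the chunk budget and naming scheme are made-up assumptions), here is a rough Go sketch of gzipping a serialized release and sharding it into pieces that each fit under the ~1 MiB Secret size limit:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// maxChunk is a made-up per-Secret budget, kept below the ~1 MiB limit that
// Kubernetes enforces on Secret data, leaving headroom for keys and metadata.
const maxChunk = 950 * 1024

// compressAndChunk gzips the serialized release and splits it into pieces that
// each fit into one Secret. A real driver would store chunk i in a Secret named
// something like "<release-key>-<i>" and reassemble the chunks on read.
func compressAndChunk(release []byte) ([][]byte, error) {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	if _, err := gw.Write(release); err != nil {
		return nil, err
	}
	if err := gw.Close(); err != nil {
		return nil, err
	}

	data := buf.Bytes()
	var chunks [][]byte
	for len(data) > 0 {
		n := maxChunk
		if len(data) < n {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks, nil
}

func main() {
	// Hypothetical oversized release payload (e.g. a chart full of very large CRDs).
	release := bytes.Repeat([]byte("apiVersion: apiextensions.k8s.io/v1\n"), 100_000)

	chunks, err := compressAndChunk(release)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes -> %d chunk Secret(s)\n", len(release), len(chunks))
}
```

Reads would do the reverse: list the chunk Secrets for a release key, concatenate them in order, and gunzip.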
@DrummyFloyd I have #1057 up as a possible solution to this issue if you are interested in checking it out!
│ Status: │
│ Conditions: │
│ Last Transition Time: 2024-07-17T21:28:50Z │
│ Message: resolved to "quay.io/operatorhubio/mariadb-operator@sha256:a96b0c89a9cfd307aee2e56b32ce51c2428e19b21e1dbe8f38386956ee73a618" │
│ Observed Generation: 1 │
│ Reason: Success │
│ Status: True │
│ Type: Resolved │
│ Last Transition Time: 2024-07-17T21:28:56Z │
│ Message: Instantiated bundle op-mariadb successfully │
│ Observed Generation: 1 │
│ Reason: Success │
│ Status: True │
│ Type: Installed │
│ Last Transition Time: 2024-07-17T21:28:39Z │
│ Message: │
│ Observed Generation: 1 │
│ Reason: Deprecated │
│ Status: False │
│ Type: Deprecated │
│ Last Transition Time: 2024-07-17T21:28:39Z │
│ Message: │
│ Observed Generation: 1 │
│ Reason: Deprecated │
│ Status: False │
│ Type: PackageDeprecated │
│ Last Transition Time: 2024-07-17T21:28:39Z │
│ Message: │
│ Observed Generation: 1 │
│ Reason: Deprecated │
│ Status: False │
│ Type: ChannelDeprecated │
│ Last Transition Time: 2024-07-17T21:28:39Z │
│ Message: │
│ Observed Generation: 1 │
│ Reason: Deprecated │
│ Status: False │
│ Type: BundleDeprecated │
│ Last Transition Time: 2024-07-17T21:28:54Z │
│ Message: unpack successful: │
│ Observed Generation: 1 │
│ Reason: UnpackSuccess │
│ Status: True │
│ Type: Unpacked │
│ Installed Bundle: │
│ Name: mariadb-operator.v0.29.0 │
│ Version: 0.29.0 │
│ Resolved Bundle: │
│ Name: mariadb-operator.v0.29.0 │
│ Version: 0.29.0 │
│ Events: <none> │
work =D
EDIT: dunno if this solution can also address errors from this kind of issue?
if yes, do not hesitate to ask for more context =)
@DrummyFloyd this issue should be fixed now. Do you have a reproducer for the other issue you ran into? If you're still having a problem there, could you open a new issue for that so we don't lose track?
thank you for the previous issue =)

apiVersion: olm.operatorframework.io/v1alpha1
kind: ClusterExtension
metadata:
  name: op-eso
spec:
  installNamespace: operators
  packageName: external-secrets-operator
  version: 0.9.20
  serviceAccount:
    name: default
---
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: ClusterCatalog
metadata:
  name: operatorhubio
spec:
  source:
    type: image
    image:
      ref: quay.io/operatorhubio/catalog:latest
      pollInterval: 24h
User story
Issue created due to a mention in Slack.
As a recurrent tester on this project (because I like it), I test some operators I want on my stack.
atm I have some issues with the following manifest with OLM 0.10.

List of issues
error message rendered.