Kubernetes Client fails to create a new Pod in native executable because Pod Overhead set without corresponding RuntimeClass defined Overhead.
#39934
Comments
cc @manusa |
Hmmm, not sure how this is possible. The only way |
I'm afraid I have no idea how to do any of those. |
@scholzj I think there's something else going on here. I quickly tried a Quarkus JVM application that invoked this method:
By the sound of the message, can you create a self-contained reproducer? Maybe one that doesn't require Kubernetes itself? |
I have the same problem here; a simple pod create will add
to the spec. Code:
{
},
}
|
Add The same with objects:
|
@galderz This should help: https://github.com/scholzj/quarkus-39934-reproducer (the instructions are in the README) |
Thanks @scholzj, I was eventually able to reproduce it. The issue appears to be a POJO-to-JSON serialization issue. Looking at HotSpot, the following JSON is generated:
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "nginx",
"namespace": "myproject"
},
"spec": {
"containers": [
{
"image": "nginx:1.14.2",
"name": "nginx",
"ports": [
{
"containerPort": 80
}
]
}
]
}
}
As shown by:
However, in native the following JSON is generated:
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {},
"finalizers": [],
"labels": {},
"managedFields": [],
"name": "nginx",
"namespace": "myproject",
"ownerReferences": []
},
"spec": {
"containers": [
{
"args": [],
"command": [],
"env": [],
"envFrom": [],
"image": "nginx:1.14.2",
"name": "nginx",
"ports": [
{
"containerPort": 80
}
],
"resizePolicy": [],
"volumeDevices": [],
"volumeMounts": []
}
],
"ephemeralContainers": [],
"hostAliases": [],
"imagePullSecrets": [],
"initContainers": [],
"nodeSelector": {},
"overhead": {},
"readinessGates": [],
"resourceClaims": [],
"schedulingGates": [],
"tolerations": [],
"topologySpreadConstraints": [],
"volumes": []
}
}
E.g.
Not only can we see the extra empty collections, the overhead field is also serialized as an empty object. This issue is present as far back as 3.7.4; in 3.6.9 it worked fine (versions in between could also be fine). The Fabric8 version in 3.7.4 was 6.10.0 and in 3.6.9 it was 6.9.2. I remember there were some significant issues with this update. Is there anything in the update that can explain this @iocanel @metacosm @manusa @shawkins? |
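For reference, a minimal sketch of how these dumps can be produced (the original snippet is not included above); it assumes the standard Fabric8 PodBuilder API and the Serialization utility from the Kubernetes Client:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.utils.Serialization;

public class DumpPod {
    public static void main(String[] args) {
        // Build the same minimal Pod used in the comparison above.
        Pod pod = new PodBuilder()
                .withNewMetadata().withName("nginx").withNamespace("myproject").endMetadata()
                .withNewSpec()
                    .addNewContainer()
                        .withName("nginx").withImage("nginx:1.14.2")
                        .addNewPort().withContainerPort(80).endPort()
                    .endContainer()
                .endSpec()
                .build();
        // Running this in JVM mode and as a native executable makes the extra
        // empty collections and the empty overhead object visible.
        System.out.println(Serialization.asJson(pod));
    }
}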
Nulling the overhead field yields:
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"annotations": {},
"finalizers": [],
"labels": {},
"managedFields": [],
"name": "nginx",
"namespace": "myproject",
"ownerReferences": []
},
"spec": {
"containers": [
{
"args": [],
"command": [],
"env": [],
"envFrom": [],
"image": "nginx:1.14.2",
"name": "nginx",
"ports": [
{
"containerPort": 80
}
],
"resizePolicy": [],
"volumeDevices": [],
"volumeMounts": []
}
],
"ephemeralContainers": [],
"hostAliases": [],
"imagePullSecrets": [],
"initContainers": [],
"nodeSelector": {},
"readinessGates": [],
"resourceClaims": [],
"schedulingGates": [],
"tolerations": [],
"topologySpreadConstraints": [],
"volumes": []
}
} |
The Fabric8-generated builders always initialize the List fields.
However, the generated types' Map fields include an annotation that is supposed to keep empty values out of the serialized output. This has been the standard Fabric8 Kubernetes Client behavior since 6.0.0 (almost 2 years now). It seems that the annotation is not respected in native mode. |
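As a rough illustration, this is the shape such a generated model field might take (annotation names and placement are illustrative, not the exact generated source):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;

import io.fabric8.kubernetes.api.model.Quantity;
import io.fabric8.kubernetes.api.model.Toleration;

// Illustrative only: roughly the shape of a generated Fabric8 spec type.
public class GeneratedSpecSketch {

    // List fields are initialized, so an unset list is an empty list rather than null.
    @JsonProperty("tolerations")
    @JsonInclude(JsonInclude.Include.NON_EMPTY)
    private List<Toleration> tolerations = new ArrayList<>();

    // Map fields carry an annotation meant to keep empty values out of the
    // serialized output; the report above suggests it is ignored in native mode.
    @JsonProperty("overhead")
    @JsonInclude(JsonInclude.Include.NON_EMPTY)
    private Map<String, Quantity> overhead = new LinkedHashMap<>();
}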
@manusa I'll try to git bisect it and see if I can get more clues. |
I did a bisect on this. The last version where this worked was 3.7.3 and it broke in 3.7.4. It looks like an unintentional side effect of #38683. The commit at issue is 943784b:
@zakkak What shall we do about this? |
@manusa Are there any Kubernetes serialization tests in Quarkus that can help detect issues like this and do not require starting up a Kubernetes cluster? If there are none, can you help craft one? |
This is surprising, I'm not sure how this might be affecting the way the objects are serialized.
I'm not sure, I'd need to check.
Sure, I would be happy to. |
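As a rough idea of what such a test could look like, here is a sketch of a cluster-free serialization check (names are illustrative). It asserts in JVM mode; catching the native regression would additionally require running an equivalent assertion against the native build, e.g. from an integration test:

import static org.junit.jupiter.api.Assertions.assertFalse;

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.client.utils.Serialization;
import org.junit.jupiter.api.Test;

class PodSerializationTest {

    @Test
    void unsetOptionalFieldsAreOmitted() {
        Pod pod = new PodBuilder()
                .withNewMetadata().withName("nginx").endMetadata()
                .withNewSpec()
                    .addNewContainer().withName("nginx").withImage("nginx:1.14.2").endContainer()
                .endSpec()
                .build();

        String json = Serialization.asJson(pod);

        // overhead and tolerations were never set, so they must not show up in the serialized Pod.
        assertFalse(json.contains("\"overhead\""));
        assertFalse(json.contains("\"tolerations\""));
    }
}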
Now that fabric8io/kubernetes-client#5759 is in (since 6.11.0, released 2024-03-25, which Quarkus updated to in b89706d), maybe we could try reverting the change, although I would be surprised if it changed anything (due to fabric8io/kubernetes-client#5759). |
Any update on this? |
Tell me more! 😉 |
LOL, it's right in the description:
|
Ouch, but that is specific to the PodBuilder, and as you are aware many other fields get serialized (#39934 (comment)), so this might affect other things too, not only a Pod. |
Sorry for the delay, I expect to be able to work on and complete this in a few days 😓. |
Hi, @galderz I created #41274 that should ensure this isn't happening either on JVM or Native. I ran the integration tests (IT) locally in both modes and I wasn't able to reproduce the issue, so maybe I missed something. |
I've made some progress with this, but it still needs further work. What I can see is that, for some yet unknown reason, the … Strangely, the other annotation on the field, e.g. … The final piece of the puzzle is how all of this relates to 943784b. |
One additional important detail that explains the differences in
The difference is that when JVM mode gets the
Maybe there's some union of annotations happening somewhere? In native mode, the annotated method is also located, but only the annotations present on the method itself are found. The kind of "union" of annotations that seems to happen in JVM mode is not working there. |
Or maybe this is just a bug in the kubernetes client that has gone unnoticed until now: Why would you have a field and its getter annotated differently? |
The reason this seems to work in JVM mode is that Jackson takes the field and the getter method and merges their annotations into a single annotation map. I don't know yet why this doesn't work in native, but it seems sensible to me to have fields and getters carry the same annotations, or to remove the annotations from either the field or the getter, so that there is no duplication. |
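A small, self-contained illustration of that merging behaviour with plain Jackson (class and property names are illustrative): the field carries the include rule, the getter carries the property name, and in JVM mode both apply to the same property.

import java.util.LinkedHashMap;
import java.util.Map;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class AnnotationMergeDemo {

    static class Spec {
        // Field annotation: drop the property when the map is empty.
        @JsonInclude(JsonInclude.Include.NON_EMPTY)
        private Map<String, String> overhead = new LinkedHashMap<>();

        // Getter annotation: the property name. Jackson merges the field and
        // getter annotations into one annotation map for the "overhead" property.
        @JsonProperty("overhead")
        public Map<String, String> getOverhead() {
            return overhead;
        }
    }

    public static void main(String[] args) throws Exception {
        // Prints {} because the field's NON_EMPTY rule is honoured even though
        // the property value is read through the getter.
        System.out.println(new ObjectMapper().writeValueAsString(new Spec()));
    }
}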
@galderz I don't think that is intentional. The generation logic has been updated incrementally over the years - as long as things worked in jvm mode it wouldn't have been noticed that the property and the getter have different annotations. Were you able to test this scenario with the pod and related classes modified to confirm this is the problem? @manusa if this is the cause, I can take a pass over the generation logic and see what can be minimized. Not sure if the java generator is also affected. |
I'm still puzzled because the test I implemented does work on Native and JVM. So I'm not sure if this union is non-deterministic or if there's something else going on.
I don't think it is either. The model generation has undergone multiple iterations over the last couple of (now probably 8-9) years, and redundant annotations such as these might have gone unnoticed since this is generated code and tests continue to pass.
Thanks @shawkins, that would be great. |
I created fabric8io/kubernetes-client#6086 which should address the duplicate annotation issue. You can check the first commit fabric8io/kubernetes-client@f5c1aa8 to see the relevant changes. |
After a few test failures, it looks like annotating only fields won't work. Regarding @galderz's question
This is common in Jackson: you might want to serialize a field twice using different names/properties. Or you might want to be able to only deserialize a field (e.g. a password in a POJO), so you just annotate the setter. You can see how this is more or less generalized by checking the original Jackson2Annotator, which provides the annotations for both the field and the getter/setter. I can try to work more on our annotator to create the same annotations for the method and the field, but that's as far as we can go (I think). However, I still think that the generated native image has some flaw compared to its behavior in JVM mode. Or maybe I'm missing something else. |
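For context, a minimal sketch of the deserialize-only-via-setter pattern mentioned above (class and field names are illustrative):

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class WriteOnlyDemo {

    static class Credentials {
        public String user;          // picked up for both directions
        private String password;     // no getter, field not visible to Jackson

        // Only the setter is annotated: Jackson can populate the field when
        // deserializing, but has no accessor to read it back when serializing.
        @JsonProperty("password")
        public void setPassword(String password) {
            this.password = password;
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Credentials c = mapper.readValue(
                "{\"user\":\"jdoe\",\"password\":\"s3cret\"}", Credentials.class);
        // Prints {"user":"jdoe"} -- the password was read in but is never written out.
        System.out.println(mapper.writeValueAsString(c));
    }
}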
There is something else going on; this is the minimal reproducer I could get:
//usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS io.quarkus.platform:quarkus-bom:3.12.0@pom
//DEPS io.quarkus:quarkus-kubernetes-client
//JAVAC_OPTIONS -parameters
//JAVA_OPTIONS -Djava.util.logging.manager=org.jboss.logmanager.LogManager
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import io.fabric8.kubernetes.api.model.ObjectMeta;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.api.model.PodSpec;
import io.fabric8.kubernetes.api.model.PodStatus;
import io.fabric8.kubernetes.client.utils.KubernetesSerialization;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import jakarta.inject.Inject;
@QuarkusMain
public class Command implements QuarkusApplication {
@Inject
KubernetesSerialization kubeser;
@Override
public int run(String... args) throws Exception {
Pod pod = new PodBuilder()
.withNewMetadata()
.withName("nginx")
.withLabels(Map.of("app", "nginx"))
.withNamespace("default")
.endMetadata()
.build();
System.out.println(kubeser.asYaml(pod));
return 0;
}
// @JsonDeserialize(using = com.fasterxml.jackson.databind.JsonDeserializer.None.class)
@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonPropertyOrder({
"apiVersion",
"kind",
"metadata",
"spec",
"status"
})
private static final class Demo {
String apiVersion = "v1";
String kind = "Demo";
ObjectMeta metadata;
PodSpec spec;
PodStatus status;
}
}
Note the commented-out @JsonDeserialize annotation. The native-image analysis:
vs
So, it remains a mystery. |
Yeah it does. So far I only see side effects, but I don't have the exact cause yet. More to come |
No. I think we need to understand the exact link between #38886 and this issue before thinking about the possibility of reverting it. Currently my theory is that, due to reflection registration being missing, things have got out of shape, but bringing reflection registration back in full brings back the issues that #38886 was trying to solve. So you really need to understand all the pieces before reverting or bringing anything back. |
This might be fixed with fabric8io/kubernetes-client#6101 but that would require a new release of Kubernetes Client. |
That would be fixing a symptom, but as hinted above, there is an underlying issue here that goes beyond this. The link between #38886 and this particular issue is that it causes field reflection for some (or all?) Fabric8 types to stop working. Jackson's databind annotated field/method logic relies on reflection to discover annotated fields and methods (NOTE: I don't think this should really be needed; Jandex has all the annotated field/method information without having to rely on reflection, @dmlloyd et al?). E.g.
In the above snippet, the declared fields list is empty from 3.7.4 onwards, and this causes the issue here. Since no field annotations are found, only what is annotated on methods is discovered, which is why … This class reproduces the issue by looking up the declared fields and methods of an FQN passed in as a parameter. If you pass in
If you do the same with 3.7.4:
If you set the same annotations on methods as on fields you will get around this particular issue, but there could be other annotation functionality that only checks fields, in which case it would still be broken. The proper fix IMO would be to stop using reflection for discovering annotations. This would reduce the size of the native executables, make native image builds faster, and avoid issues like this, because the Jandex index has everything. This concludes my investigation as far as this issue is concerned. More investigation could be carried out to find out why #38886 causes field reflection to stop working. The reproducer I pointed to above could be used for such investigation, but that goes beyond the scope of this issue. /cc @zakkak |
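A sketch of the kind of probe described above (the original reproducer class is not included in this thread): it prints what runtime reflection can see for a fully qualified class name passed as a parameter.

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.Arrays;

public class ReflectionProbe {

    public static void main(String[] args) throws Exception {
        // e.g. io.fabric8.kubernetes.api.model.PodSpec
        Class<?> clazz = Class.forName(args[0]);

        // In the affected native builds this prints nothing for Fabric8 model types,
        // so Jackson never sees the field-level annotations.
        System.out.println("Declared fields:");
        Arrays.stream(clazz.getDeclaredFields()).map(Field::getName).forEach(System.out::println);

        System.out.println("Declared methods:");
        Arrays.stream(clazz.getDeclaredMethods()).map(Method::getName).forEach(System.out::println);
    }
}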
Awesome work and investigation, Galder 🙌 Just for the record, the ValidationSchema and KubeSchema classes will be gone in Kubernetes Client 7.0.0. Before #38886, every Kubernetes Client model type was registered for reflection because both KubeSchema and ValidationSchema were inherently linking every single model class. I think a new issue should be opened to track (and fix) why the AnnotatedFieldCollector is not working as expected. Probably the traversal process is broken at some point. |
Ok, based on the analysis of declared fields, the issue could lie here: Lines 183 to 187 in 265b721
For some reason (intentional or not), the model fields are not being registered, so maybe the logic in … I have opened a draft PR #42028 with the fix so it can be evaluated. BTW, with this change, there seems to be no performance penalty nor size increase in the final native image. |
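In Quarkus extension terms, the fix amounts to something like the following build-step sketch (class and variable names are illustrative; the actual change is in the draft PR):

import io.quarkus.deployment.annotations.BuildProducer;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.builditem.nativeimage.ReflectiveClassBuildItem;

class KubernetesClientModelProcessor {

    @BuildStep
    void registerModelForReflection(BuildProducer<ReflectiveClassBuildItem> reflectiveClasses) {
        // Stand-in for however the extension collects the model class names (e.g. from Jandex).
        String[] modelClasses = { "io.fabric8.kubernetes.api.model.PodSpec" };

        reflectiveClasses.produce(
                ReflectiveClassBuildItem.builder(modelClasses)
                        .methods() // keep getters/setters registered
                        .fields()  // also register fields so getDeclaredFields() and field annotations work
                        .build());
    }
}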
Describe the bug
I have an application that creates a Pod in a Kubernetes cluster. The Pod is very simple: it contains only a volume with a mounted ConfigMap and a single container. It works fine when I run it with mvn quarkus:dev, but it fails after native compilation with the following error:

This seems to be caused by the native executable setting the .spec.overhead field to an empty object ({}) instead of keeping it null. When I explicitly set it to null, it seems to work fine.
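The following is a sketch of that workaround (not the reporter's exact code, which is not included above), assuming the generated PodSpec builder exposes withOverhead:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

public class OverheadWorkaround {

    static Pod buildPod(String namespace) {
        return new PodBuilder()
                .withNewMetadata().withName("my-pod").withNamespace(namespace).endMetadata()
                .withNewSpec()
                    .addNewContainer().withName("app").withImage("nginx:1.14.2").endContainer()
                    .withOverhead(null) // keep .spec.overhead out of the serialized Pod
                .endSpec()
                .build();
    }
}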
Expected behavior
The .spec.overhead field is set to null even without it being done in the PodBuilder, and the application is able to create the new Pod even when compiled to a native executable.

Actual behavior
A native executable fails with the error described above.
How to Reproduce?
client.pods().inNamespace(namespace).resource(new PodBuilder()...build()).create();
The code where I run into this error can be found here: https://github.com/scholzj/kekspose/blob/main/src/main/java/cz/scholz/kekspose/Proxy.java#L81
Output of uname -a or ver
Darwin Jakubs-MBP 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000 arm64
Output of java -version
openjdk version "21.0.2" 2024-01-16
OpenJDK Runtime Environment GraalVM CE 21.0.2+13.1 (build 21.0.2+13-jvmci-23.1-b30)
OpenJDK 64-Bit Server VM GraalVM CE 21.0.2+13.1 (build 21.0.2+13-jvmci-23.1-b30, mixed mode, sharing)
Mandrel or GraalVM version (if different from Java)
No response
Quarkus version or git rev
3.9.2
Build tool (i.e. output of mvnw --version or gradlew --version)
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Additional information
No response