Not null invariant incorrectly fails on non-nullable field inside a nullable struct #860
Comments
Hi @Kimahriman, thanks for bringing this to our attention. We will look into this.
@brkyvz do you remember why we picked up this behavior?
I believe this is the behavior that most databases follow. The idea was that if …
Spark seems to allow for that behavior of `foo` being nullable but `foo.bar` not.
@Kimahriman This is done by Delta. I think the current behavior makes sense in the SQL world. Even if …
Yeah I'm just saying this is valid Spark behavior, so you can hit an exception for valid Spark code. Simplest reproduction:
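A hedged sketch of such a reproduction (not the original snippet, which was not preserved here), assuming a local SparkSession and a hypothetical output path `/tmp/repro`:

```scala
// Hedged sketch: a nullable struct column `top` with a non-nullable nested field `key`.
// The row where `top` itself is null is valid Spark data, but writing it to a Delta
// table trips the not-null invariant on `top.key`.
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[*]").getOrCreate()

val schema = StructType(Seq(
  StructField("top", StructType(Seq(
    StructField("key", StringType, nullable = false)
  )), nullable = true)
))

val df = spark.createDataFrame(
  spark.sparkContext.parallelize(Seq(Row(Row("a")), Row(null))), // second row: top is null
  schema
)

df.write.format("delta").save("/tmp/repro") // hypothetical path; fails the invariant check
```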
And Spark correctly updates the nullability when it needs to:
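The original example is missing here; one way to see the behavior (my illustration, reusing the hypothetical `df` from the sketch above) is that Spark widens the nullability of a field once it is extracted from a nullable struct:

```scala
// Hedged illustration: `key` is declared non-nullable inside `top`, but once it is
// selected out of the nullable `top` struct, Spark reports the result as nullable.
df.printSchema()
// root
//  |-- top: struct (nullable = true)
//  |    |-- key: string (nullable = false)

df.select("top.key").printSchema()
// root
//  |-- key: string (nullable = true)   <- widened because `top` itself can be null
```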
@Kimahriman Sorry. I missed your latest update. Thanks for the example. Yep, Spark will update the nullability automatically. However, for Delta, if users set a not-null constraint, they will likely expect never to see a null value when writing code (such as writing a UDF to handle a nested column value). Hence, we would like to keep the existing behavior. I'm going to close this as this is the intended behavior. Feel free to reopen this if you have more feedback 😄
That's the main issue: not-null constraints are automatically applied with no way to disable them. I have been slowly playing around with ways to update the not-null constraint checking to operate in line with how Spark works, though I haven't made much progress on it. At the very least it should be an opt-in thing to get around this. I still consider it a bug 😅
Hah, triggered a pipeline failure by commenting on a closed issue I guess
We prefer to avoid creating non-standard behaviors. Could you clarify what use cases may require this non-standard behavior?
I would argue that the current behavior is non-standard and this ticket is to make it (Spark) standard 😛 Basically what actually triggers this for us is a Scala UDF that takes a nullable string and returns a struct. One of the fields in the struct is non-nullable, because it always has a value iff the string input is not null. The end result in Spark is a nullable struct with a non-nullable struct field.
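A hedged sketch of that UDF pattern (the names `Parsed`, `parse`, `someDf`, and the `raw` column are hypothetical, not from the original comment):

```scala
// Hypothetical sketch: null string in -> null struct out; otherwise the `length`
// field always has a value, so Spark infers a nullable struct column containing a
// non-nullable integer field.
import org.apache.spark.sql.functions.{col, udf}

case class Parsed(length: Int, upper: String)

val parse = udf { (s: String) =>
  if (s == null) null else Parsed(s.length, s.toUpperCase)
}

val withParsed = someDf.withColumn("parsed", parse(col("raw"))) // someDf/raw are hypothetical
// parsed: struct (nullable = true)
//  |-- length: integer (nullable = false)  <- non-nullable field inside a nullable struct
//  |-- upper: string (nullable = true)
```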
Can you set the nested field nullable and add a check constraint such as …
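The suggested constraint is cut off above; presumably it is along these lines (a hedged sketch using Delta's `ALTER TABLE ... ADD CONSTRAINT ... CHECK` syntax, with hypothetical table and column names):

```scala
// Hedged sketch of the workaround: leave the nested field nullable in the schema and
// enforce the conditional rule with a Delta CHECK constraint instead.
spark.sql(
  """ALTER TABLE events
    |ADD CONSTRAINT key_present_when_top_present
    |CHECK (top IS NULL OR top.key IS NOT NULL)""".stripMargin)
```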
Yeah there are workarounds, just less than ideal. You lose the 0.01% speedup you would get by avoiding null checks on the field hah. I'm less concerned about enforcing the right not-null checks and more interested in being able to opt in to skipping the check altogether, because I know the nullability is correct.
Adding a new behavior, even if it's opt-in, would also add extra work when we extend the not-null constraint behaviors to other engines. And I feel it's not worth it right now. I will keep this open to hear more voices.
I was bitten by this today. I'm using Scala and have this model:

case class MyRow(id: Int, value: String)
case class Changes(before: Option[MyRow], after: MyRow)

modelling changes between … I can work around it but it feels like incorrect behaviour.
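As a hedged sketch of how that model hits the error (assuming a SparkSession named `spark`, the case classes above, and a hypothetical output path):

```scala
// Hedged sketch: `before` maps to a nullable struct whose `id` field is a
// non-nullable Int, so a row with before = None trips the not-null invariant
// when written to a Delta table.
import spark.implicits._

val ds = Seq(Changes(before = None, after = MyRow(1, "new"))).toDS()
ds.write.format("delta").save("/tmp/changes") // hypothetical path; fails on before.id
```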
Do we know how other engines like Presto/Trino handle nullability of structs? Because Delta essentially doesn't support struct nullability. One option would be, when creating a table or adding a new field to the schema, to force everything under a nullable struct to be nullable as well. This seems like the behavior Delta expects.
Yep. It would be great if anyone familiar with Presto/Trino could provide feedback.
My team continues to get bitten by this repeatedly, and we have to find workarounds to force certain fields to be nullable even when they don't have to be, and I see more people bringing this up (someone just did on Slack). Any reconsideration on respecting struct nullability? I still don't see how there's any valid way to conclude that a non-nullable field inside a nullable struct has to always be not null. That's an invalid interpretation of the struct being nullable. And it isn't even consistent with how Parquet files treat nullable structs. According to the user on Slack, at least Kafka interprets this correctly.
You'd be surprised. Typically, code will check a single pointer for null, and that pointer might have been assigned to something that lives in the CPU's cache (i.e., access to it will be super fast). Checking for null with big data generally pulls large amounts from RAM (relatively slow) and pushes other things out of the CPU cache. If you do this for a large amount of data, it starts to show up in terms of performance.
This is a problem for my team as well and I think this example demonstrates it well. We are using Delta to transport data in Debezium CDC payloads. In that format, the top-level `before` and `after` fields can each be null. However, both those fields follow the same schema specification and, more importantly, the schema of the …
Ironically that's exactly how the Delta transaction log is stored as well. Based on the way Delta handles nullability currently, there is no way to actually express the equivalent of "if the action is …"
Also FWIW this isn't an issue in delta-rs (they don't do any implicit null checking AFAIK). So you can write this "valid" data in other ways and then read it in Spark; you just can't write it in Spark right now.
I forgot to mention before, but using …
An existing test actually tests for what I think is incorrect behavior: https://github.com/delta-io/delta/blob/master/core/src/test/scala/org/apache/spark/sql/delta/schema/InvariantEnforcementSuite.scala#L186

Here the `top` struct is nullable but the `key` field is not, and the current invariant checks only care about the fact that `key` is non-nullable, therefore selecting that value (through a series of `GetStructField`s) will always not be null. However, it is valid for `top` to be null, and it's more accurate to say that `key` is never null when `top` is not null, I think.

So in this test case, the first one is a valid test case: `top` is not null but `key` is. However, in the second test case, `top` is null, which should be valid behavior and not throw an exception, I believe.

After looking through the code I can see a few ways to make it basically skip checking `key` in this case, but it might be more ideal, though more complicated, to have it check `top` first, and only if that's not null, then check `key` and fail only in that case.
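A hedged sketch of that parent-aware check, expressed as a Spark Column expression (my illustration, not code from the Delta repo):

```scala
// Hedged sketch: only require `top.key` to be non-null when `top` itself is non-null.
import org.apache.spark.sql.functions.col

val parentAwareInvariant = col("top").isNull || col("top").getField("key").isNotNull
// Passes when `top` is null; fails only when `top` is present but `top.key` is null.
```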