[SPARK-48495][SQL][DOCS] Describe shredding scheme for Variant #46831

4 changes: 4 additions & 0 deletions common/variant/README.md
@@ -375,3 +375,7 @@ Field names are case-sensitive. Field names are required to be unique for each o
# Versions and extensions

An implementation is not expected to parse a Variant value whose metadata version is higher than the version supported by the implementation. However, new types may be added to the specification without incrementing the version ID. In such a situation, an implementation should be able to read the rest of the Variant value if desired.

# Shredding

For columnar storage formats, a single Variant object may have poor read performance when only a small number of fields are needed. A better approach is to create separate columns for individual fields, referred to as shredding or subcolumnarization. [shredding.md](shredding.md) describes an approach to shredding Variant columns in Parquet and similar columnar formats.
244 changes: 244 additions & 0 deletions common/variant/shredding.md
@@ -0,0 +1,244 @@
# Shredding Overview

The Spark Variant type is designed to store and process semi-structured data efficiently, even with heterogeneous values. Query engines encode each variant value in a self-describing format, and store it as a group containing **value** and **metadata** binary fields in Parquet. Since data is often partially homogeneous, it can be beneficial to extract certain fields into separate Parquet columns to further improve performance. We refer to this process as "shredding". Each Parquet file remains fully self-describing, with no additional metadata required to read or fully reconstruct the Variant data from the file. Combining shredding with a binary residual provides the flexibility to represent complex, evolving data with an unbounded number of unique fields while limiting the size of file schemas, and retaining the performance benefits of a columnar format.

This document focuses on the shredding semantics, the Parquet representation, the implications for readers and writers, and the Variant reconstruction process. For now, it does not discuss which fields to shred, user-facing API changes, or any engine-specific considerations like how to use shredded columns. The approach builds on top of the generic Spark Variant representation, and leverages the existing Parquet specification for maximum compatibility with the open-source ecosystem.

At a high level, we replace the **value** and **metadata** of the Variant Parquet group with one or more fields called **object**, **array**, **typed_value** and **untyped_value**. These represent a fixed schema suitable for constructing the full Variant value for each row.

Shredding lets Spark (or any other query engine) reap the full benefits of Parquet's columnar representation, such as more compact data encoding, min/max statistics for data skipping, and I/O and CPU savings from pruning unnecessary fields not accessed by a query (including the non-shredded Variant binary data).
**Member:** nit: "and I/O and CPU savings". If this was in the comma-separated section earlier that would fit better.

Without shredding, any query that accesses a Variant column must fetch all bytes of the full binary buffer. With shredding, we can get nearly equivalent performance to a relational (scalar) data model.

For example, `select variant_get(variant_col, '$.field1.inner_field2', 'string') from tbl` only needs to access `inner_field2`, and the file scan could avoid fetching the rest of the Variant value if this field was shredded into a separate column in the Parquet schema. Similarly, for the query `select * from tbl where variant_get(variant_col, '$.id', 'integer') = 123`, the scan could first decode the shredded `id` column, and only fetch/decode the full Variant value for rows that pass the filter.

# Parquet Example

Consider the following Parquet schema together with how Variant values might be mapped to it. Notice that we represent each shredded field in **object** as a group of two fields, **typed_value** and **untyped_value**. We extract all homogeneous data items of a certain path into **typed_value**, and set aside incompatible data items in **untyped_value**. Intuitively, incompatibilities within the same path may occur because we store the shredding schema per Parquet file, and each file can contain several row groups. Selecting a type for each field that is acceptable for all rows would be impractical because it would require buffering the contents of an entire file before writing.

Typically, the expectation is that **untyped_value** exists at every level as an option, along with one of **object**, **array** or **typed_value**. If the actual Variant value contains a type that does not match the provided schema, it is stored in **untyped_value**. An **untyped_value** may also be populated if an object can be partially represented: any fields that are present in the schema must be written to those fields, and any missing fields are written to **untyped_value**.
**Member:** Typo untyped_valud


```
optional group variant_col {
  optional binary untyped_value;
  optional group object {
    optional group a {
      optional binary untyped_value;
      optional int64 typed_value;
    }
    optional group b {
      optional binary untyped_value;
      optional group object {
        optional group c {
          optional binary untyped_value;
          optional binary typed_value (STRING);
        }
      }
    }
  }
}
```

| Variant Value | Top-level untyped_value | b.untyped_value | Non-null in a | Non-null in b.c |
|---------------|-------------------------|-----------------|---------------|-----------------|
| {a: 123, b: {c: "hello"}} | null | null | typed_value | typed_value |
| {a: 1.23, b: {c: "123"}} | null | null | untyped_value | typed_value |
| {a: [1,2,3], b: {c: null}} | null | null | untyped_value | untyped_value |
| {a: 123, c: 456} | {c: 456} | null | typed_value | null |
| {a: 123, b: {c: "hello", d: 456}} | null | {d: 456} | typed_value | typed_value |
| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null |

**Member:** Unbalanced table - #47407

# Parquet Layout

The **array** and **object** fields represent Variant array and object types, respectively. Arrays must use the three-level list structure described in https://github.com/apache/parquet-format/blob/master/LogicalTypes.md.

An **object** field must be a group. Each field name of this inner group corresponds to the Variant value's object field name. Each inner field's type is a recursively shredded variant value: that is, the fields of each object field must be one or more of **object**, **array**, **typed_value** or **untyped_value**.

Similarly, each element of an **array** must be a group containing one or more of **object**, **array**, **typed_value** or **untyped_value**.

Each leaf in the schema can store an arbitrary Variant value. It contains an **untyped_value** binary field and a **typed_value** field. If non-null, **untyped_value** represents the value stored as a Variant binary; the metadata and value of a normal Variant are concatenated. The **typed_value** field may be any type that has a corresponding Variant type. For each value in the data, at most one of the **typed_value** and **untyped_value** may be non-null. A writer may omit either field, which is equivalent to all rows being null.

| typed_value | untyped_value | Meaning |
|-------------|----------------|---------|
| null | null | Field is missing in the reconstructed Variant. |
| null | non-null | Field may be any type in the reconstructed Variant. |
| non-null | null | Field has this column’s type in the reconstructed Variant. |
| non-null | non-null | Invalid |
**Contributor:** What should implementations do when they encounter this case?

**Contributor Author:** Failing seems like the right choice.

An alternative might be to prefer untyped_value over the others, which could allow an engine to write some redundant data into the typed column if it knew how to interpret it later. E.g. storing "123" in a string column in addition to keeping it as an integer in the untyped_value column. This seems like added confusion for marginal benefit, though.


The **typed_value** may be absent from the Parquet schema for any field, which is equivalent to its value always being null (in which case the shredded field is always stored as a Variant binary). By the same token, **untyped_value** may be absent, which is equivalent to its value always being null (in which case the field will always be missing or have the type of the **typed_value** column).

The full metadata and value can be reconstructed from **untyped_value** by treating the leading bytes as metadata, and using the header, dictionary size and final dictionary offset to determine the start of the Variant value section. (See the metadata description in the common/variant/README.md for more detail on how to interpret it.) For example, in the binary below, there is a one-element dictionary, and the final offset (`offset[1]`) indicates that the last dictionary entry ends at the second byte. Therefore the full metadata size is six bytes, and the rest is the value section of the Variant.
**Contributor:** This is confusing to me. I thought that each variant value (top-level) had to share metadata and it was not allowed to have embedded metadata. Can individual variant values within a variant have a new metadata?

**Contributor Author:** The metadata is mainly the dictionary of all object keys. Once we shred, the keys that were associated with shredded fields are no longer needed (since they're stored as field names in the Parquet schema). The keys that are still needed (e.g. for an object that wasn't fully shredded) are stored as needed in the metadata of objects that use those keys. When reconstructing a full Variant value, the metadata will need to be rebuilt as the full value is built up from the shredded components.

I'm open to other ideas, but the main motivation for not leaving a single metadata at the top level is that it would need to be fetched any time we fetch an untyped_value column. In a situation where some portion of the original value has very irregular keys from row to row, this could be a large penalty.

The downsides I can see are:

  1. Needing to inspect and possibly rebuild each untyped_value when reconstructing the full Variant value. I think this is the biggest problem with this approach.
  2. Storing a small metadata next to every untyped_value adds overhead. Experimentally, this seems to be pretty small (~1%) after compression (I tested with snappy and zstd) if the metadata is regular.
  3. If the same key is used in different parts of the schema, they'll be duplicated. This doesn't seem likely to be a major issue, since the point of the metadata is to capture duplication from row to row, or in multiple array elements.

**Contributor:** This seems to add complexity to the format and cost. When writing, this must produce a filtered dictionary for each value. Those dictionaries may be overlapping and this could increase storage cost. And when reading, the dictionaries would need to be merged. I think I prefer the simpler approach of keeping a single metadata/dictionary at the top level and reusing it to avoid all of the logic.

**Contributor Author:** I think the downsides are relatively small, while the benefit on the read side can be significant in cases where only some fields are filtered or projected. It's not uncommon to have irregular metadata, making the metadata column quite expensive to fetch. If we need to fetch and decode the entire metadata to decode any untyped_value, it would create a very large penalty to read any scalar field that does not perfectly shred to its target type.

```
  hdr    sz    offset[0] offset[1] bytes[0] bytes[1]  value
--------------------------------------------------------------------
|      |      |         |         |        |        |
| 0x01 | 0x01 |  0x00   |  0x02   |  'h'   |  'i'   | . . . . . . . .
|______|______|_________|_________|________|________|________________
```
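
As an illustration (not part of the specification), here is a minimal Python sketch of locating that boundary, assuming the metadata layout from common/variant/README.md with the offset size encoded in the header's upper two bits:

```python
def split_untyped_value(buf: bytes) -> tuple[bytes, bytes]:
    """Split a shredded untyped_value binary into (metadata, value).

    Assumes the Variant metadata layout: a 1-byte header whose upper two bits
    store offset_size - 1, a dictionary size, dict_size + 1 offsets, and then
    the dictionary bytes. Sketch only; a real reader should validate the input.
    """
    header = buf[0]
    offset_size = ((header >> 6) & 0b11) + 1   # bytes per size/offset field
    dict_size = int.from_bytes(buf[1:1 + offset_size], "little")
    # The final offset marks where the dictionary bytes end.
    last_offset_pos = 1 + offset_size + dict_size * offset_size
    last_offset = int.from_bytes(buf[last_offset_pos:last_offset_pos + offset_size], "little")
    metadata_size = 1 + offset_size + (dict_size + 1) * offset_size + last_offset
    return buf[:metadata_size], buf[metadata_size:]

# On the example above (header=0x01, dict_size=1, offsets=[0x00, 0x02], bytes='h','i'),
# metadata_size is 6, so the first six bytes are metadata and the rest is the value.
```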

# Unshredded values

If all values can be represented at a given level by whichever of **object**, **array** or **typed_value** is present, **untyped_value** is set to null.

If a value cannot be represented by whichever of **object**, **array** or **typed_value** is present in the schema, then it is stored in **untyped_value**, and the other fields are set to null. In the Parquet example above, if field **a** was an object or array, or a non-integer scalar, it would be stored in **untyped_value**.

If a value is an object, and the **object** field is present but does not contain all of the fields in the value, then any remaining fields are stored in an object in **untyped_value**. In the Parquet example above, if field **b** was an object of the form **{"c": 1, "d": 2}**, then the object **{"d": 2}** would be stored in **untyped_value**, and the **c** field would be shredded recursively under **object.c**.
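
A minimal writer-side sketch of that split (Python; `shred_object` is a hypothetical helper, and a real writer would additionally encode the residual object as a Variant binary):

```python
def shred_object(value: dict, shredded_fields: set[str]):
    """Split an object between the shredded `object` fields and the residual
    stored in untyped_value. Returns (shredded, residual); residual is None
    when every field of `value` has a corresponding shredded column."""
    shredded = {k: v for k, v in value.items() if k in shredded_fields}
    residual = {k: v for k, v in value.items() if k not in shredded_fields}
    return shredded, (residual or None)

# Example from the text: shred_object({"c": 1, "d": 2}, {"c"}) returns
# ({"c": 1}, {"d": 2}); the {"d": 2} residual goes to b.untyped_value and
# the "c" value is shredded recursively under object.c.
```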

Note that an array is always fully shredded if there is an **array** field, so the above consideration for **object** is not relevant for arrays: only one of **array** or **untyped_value** may be non-null at a given level.

# Using untyped_value vs. typed_value

In general, it is desirable to store values in the **typed_value** field rather than the **untyped_value** whenever possible. This will typically improve encoding efficiency, and allow the use of Parquet statistics to filter at the row group or page level. In the best case, the **untyped_value** fields are all null and the engine does not need to read them (or it can omit them from the schema on write entirely). There are two main motivations for including the **untyped_value** column:

1) In a case where there are rare type mismatches (for example, a numeric field with rare strings like “n/a”), we allow the field to be shredded, which could still be a significant performance benefit compared to fetching and decoding the full value/metadata binary.
2) Since there is a single schema per file, there would be no easy way to recover from a type mismatch encountered late in a file write. Parquet files can be large, and buffering all file data before starting to write could be expensive. Including an untyped column for every field guarantees we can adhere to the requested shredding schema.

The **untyped_value** is stored in a single binary column, rather than storing the value and metadata separately as is done in the unshredded binary format. The motivation for storing them separately for unshredded data is that this lets the engine encode and compress the metadata more efficiently when the fields are consistent across rows. We chose to combine them in the shredded fields: we expect the encoding/compression benefit to be lower, since in the case of uniform data, the values should be stored in typed columns. Using a single binary reduces the extra metadata required in the Parquet schema, which can be significant in some cases.
**Contributor:** "The motivation for storing them separately for unshredded data is that this lets the engine encode and compress the metadata more efficiently when the fields are consistent across rows."

Hi @cashmand Is there any performance comparison for this part? We are considering introducing variants into Apache Paimon as a new type. If variant is a single binary column, it will be easier to integrate. If it is a group containing value and metadata, the integration cost will be higher, such as: a dedicated UnsafeRow reader/writer, a dedicated ColumnVector for reader/writer, etc.

**Contributor Author:** Hi @Zouxxyy, sorry for the delayed response. I don't have specific numbers to share, but we did measure a moderate file size benefit from keeping them separate. I think the two main benefits are:

  1. For data with a uniform schema, the metadata can be dictionary-encoded.
  2. Even if the metadata is less uniform, compression tends to be more effective when the value and metadata binaries are separate, since they don't tend to have much commonality.

The results are likely to be very data-dependent, though. It might also be sensitive to the compression codec used, which we didn't look at.


# Data Skipping

Shredded columns are expected to store statistics in the same format as a normal Parquet column. In general, the engine can only skip a row group or page if all rows in the **untyped_value** field are null, since it is possible for a `variant_get` expression to successfully cast a value from the **untyped_value** to the target type. For example, if **typed_value** is of type `int64`, then the string "123" might be contained in **untyped_value**, which would not be reflected in statistics, but could be retained by a filter like `where variant_get(col, "$.field", "long") = 123`. If **untyped_value** is all-null, then the engine can prune pages or row groups based on **typed_value**. This specification is not strict about what values may be stored in **untyped_value** rather than **typed_value**, so it is not safe to skip rows based on **typed_value** unless the corresponding **untyped_value** column is all-null, or the engine has specific knowledge of the behavior of the writer that produced the shredded data.
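
A hedged sketch of this skipping rule (Python; `can_skip_row_group` and the statistics objects are hypothetical stand-ins for whatever the engine exposes for Parquet column statistics):

```python
def can_skip_row_group(untyped_stats, typed_stats, literal) -> bool:
    """Return True if a row group can be skipped for a predicate like
    variant_get(col, "$.field", "long") = literal.

    Skipping based on typed_value statistics is only safe when every
    untyped_value in the row group is null (or the column is absent),
    because a matching value could otherwise be hidden in untyped_value.
    """
    untyped_all_null = (untyped_stats is None
                        or untyped_stats.null_count == untyped_stats.num_values)
    if not untyped_all_null:
        return False
    if typed_stats is None or typed_stats.min is None or typed_stats.max is None:
        return False
    # Prune only when the literal falls outside the typed_value min/max range.
    return literal < typed_stats.min or literal > typed_stats.max
```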

# Shredding Semantics

Variant defines a number of integer and decimal types of varying widths. When writing, it would be quite limiting to strictly enforce the mapping between Variant types and Parquet/Spark types. For example, if we chose to shred a field as `int64`, and encountered the value 123 encoded as `int32`, it seems preferable to write this to the **typed_value** column, even though it technically loses information about the type in the original Variant object, and would be reconstructed as an `int64`.

On the other hand, storing arbitrarily casted values in the **typed_value** column could create inconsistent behavior before and after shredding, and could leak behavior from the writing engine to the reading engine. For example, double-to-string casts can produce different results in different engines. Performing such a cast while shredding (even if we somehow retained the knowledge that the original value was a `double`) could result in confusing behavior changes if shredding took place using a different tool from the query engine that wrote it.

Our approach is a pragmatic compromise that allows the use of **typed_value** in cases where the type can be losslessly widened without resulting in a significant difference in the reconstructed Variant:
**Contributor (@emkornfield, Jul 24, 2024):** It seems an alternative here could be to have multiple typed values ("typed_value_int32", "typed_value_int64"), if necessary. This carries additional overhead; was that overhead deemed unacceptable or too complex? From an implementation standpoint, this seems the simplest conceptually, with a lower potential of having details of the variant be lost (e.g. not being careful with a variant cast in one place, or requiring deducing which fields can be placed in the numeric column).

**Contributor Author:** A case we were concerned about is ingesting JSON data, which is expected to be a very common use of Variant. In that situation, there can easily be a variety of decimal precisions and integer sizes for a given field. Adding a separate column for each type has two problems that I can see:

  1. It may add many sparse columns to the parquet schema, which can have performance implications.
  2. Any flow that tries to infer an appropriate parquet schema without exhaustively sampling the entire file contents could easily miss a particular type, which would then need to be written to the untyped_value, making stats unusable for that column.

I think multiple typed values could be reasonable in some scenarios, but it would be good to clarify the use cases, and be confident that the benefit justifies the extra complexity.

**Contributor (@emkornfield, Jul 24, 2024):** The main thing I was considering was preserving numeric types unambiguously but still having them shredded. After sleeping on it, a better solution could be having an "original type column" which stores the type ID of the original numeric type that was shredded. This could be optional if all numeric types were of the same type as the parquet schema.

**Contributor Author:** Agreed, I think this is my preferred solution to resolving the concerns that you and @rdblue raised.

1) All integer and decimal types in Variant are conceptually a single "number" type. The engine may shred any number into the **typed_value** of any other number, provided that no information about the value is lost; a sketch of this check follows the list below. For example, the integer 123 may be shredded as Decimal<9, 2>, but 1.23 may not be shredded as any integer type.

2) To ensure that behavior remains unchanged before and after shredding, we will aim to have all Spark expressions that operate on Variant be agnostic to the specific numeric type. For example, `cast(val as string)` should produce the string "123" if `val` is any integer or decimal type that is exactly equal to 123. Note that this is unlike the normal Spark behavior for `decimal` types, which would produce "123.00" for `Decimal<9,2>`.
**Contributor:** I don't think that this is something you can guarantee across engines. I don't think the shredding can have an effect on how engines implement casting.


3) One exception to the above is `schema_of_variant`, which will still report the underlying physical type. This means that `schema_of_variant` may report different numeric types before and after shredding.

4) Other than integer and decimal, we will not allow casting between types. For example, we will not write the string "123" to an integer **typed_value** column, even though `variant_get("123", "$", "integer")` would produce the integer 123. Similarly, double and float types are considered distinct from other numeric types, and we would not write them to a numeric **typed_value** column.
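
As a rough sketch of the lossless "number" check from point 1 above (Python; `can_shred_as` is a hypothetical helper that uses `decimal.Decimal` as a stand-in for Variant decimal values, with an integer target treated as scale 0):

```python
from decimal import Decimal

def can_shred_as(value, target_precision: int, target_scale: int) -> bool:
    """Illustrative check that an integer or decimal Variant value can be
    written to a decimal typed_value of the given precision and scale
    without losing any information about the value."""
    v = Decimal(value)
    scaled = v.scaleb(target_scale)
    # Rescaling to the target scale must not drop any fractional digits...
    if scaled != scaled.to_integral_value():
        return False
    # ...and the rescaled value must fit within the target precision.
    return abs(scaled) < Decimal(10) ** target_precision

# can_shred_as(123, 9, 2) is True: the integer 123 can be stored as Decimal<9, 2>.
# can_shred_as(Decimal("1.23"), 18, 0) is False: 1.23 cannot be shredded as an integer.
```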

# Reconstructing a Variant

It is possible to recover a full Variant value using a recursive algorithm, where the initial call is to `ConstructVariant` with the top-level fields, which are assumed to be null if they are not present in the schema.

```
# Constructs a Variant from `untyped_value`, `object`, `array` and `typed_value`.
# Only one of object, array and typed_value may be non-null.
def ConstructVariant(untyped_value, object, array, typed_value):
    if object is null and array is null and typed_value is null:
        return untyped_value
    elif object is not null:
        return ConstructObject(untyped_value, object)
    elif array is not null:
        return ConstructArray(array)
    else:
        # Leaf in the tree.
        assert(untyped_value is null or untyped_value is VariantNull)
        return coalesce(untyped_value, cast(typed_value as Variant))

# Constructs an object from an `object` group and a (possibly null) Variant untyped_value.
def ConstructObject(untyped_value, object):
    # If untyped_value is present and is not an Object, then the result is ambiguous.
    assert(untyped_value is null or is_object(untyped_value))
    all_keys = Union(untyped_value.keys, object.fields)
    return VariantObject(all_keys.map { key ->
        # Keys without a shredded value for this row come from the untyped_value
        # residual; shredded keys are reconstructed recursively from their group.
        if object[key] is null: (key, untyped_value[key])
        else: (key, ConstructVariant(object[key].untyped_value, object[key].object,
                                     object[key].array, object[key].typed_value))
    })

# Constructs an array by reconstructing each shredded element.
def ConstructArray(array):
    newVariantArray = VariantArray()
    for i in range(array.size):
        # Any of these may be missing from the schema, in which case they are null.
        newVariantArray.append(ConstructVariant(array[i].untyped_value, array[i].object,
                                                array[i].array, array[i].typed_value))
    return newVariantArray
```

# Nested Parquet Example

This section describes a more deeply nested example, using a top-level array as the shredding type.

Below is a sample of JSON that would be fully shredded in this example. It contains an array of objects, containing an "a" field shredded as an array, and a "b" field shredded as an integer.

```
[
  {
    "a": [1, 2, 3],
    "b": 100
  },
  {
    "a": [4, 5, 6],
    "b": 200
  }
]
```


The corresponding Parquet schema with "a" and "b" as leaf types is:

```
optional group variant_col {
  optional binary untyped_value;
  optional group array (LIST) {
    repeated group list {
      optional group element {
        optional binary untyped_value;
        optional group object {
          optional group a {
            optional binary untyped_value;
            optional group array (LIST) {
              repeated group list {
                optional group element {
                  optional int64 typed_value;
                  optional binary untyped_value;
                }
              }
            }
          }
          optional group b {
            optional int64 typed_value;
            optional binary untyped_value;
          }
        }
      }
    }
  }
}
```

In the above example schema, if "a" is an array containing a mix of integer and non-integer values, the engine will shred individual elements appropriately into either **typed_value** or **untyped_value**.
If the top-level Variant is not an array (for example, an object), the engine cannot shred the value and it will store it in the top-level **untyped_value**.
Similarly, if "a" is not an array, it will be stored in the **untyped_value** under "a".

Consider the following example:

```
[
  {
    "a": [1, 2, 3],
    "b": 100,
    "c": "unexpected"
  },
  {
    "a": [4, 5, 6],
    "b": 200
  },
  "not an object"
]
```

The second array element can be fully shredded, but the first and third cannot be. The contents of `variant_col.array[*].untyped_value` would be as follows:

```
[
  { "c": "unexpected" },
  NULL,
  "not an object"
]
```

# Backward and forward compatibility

Shredding is an optional feature of Variant, and readers must continue to be able to read a group containing only a `value` and `metadata` column.

We will follow the convention defined in https://github.com/delta-io/delta/blob/master/protocol_rfcs/variant-type.md#variant-data-in-parquet, and ignore any fields in the same group as typed_value/untyped_value that start with `_` (underscore).
This is intended to allow future backwards-compatible extensions. In particular, the field names `_metadata_key_paths` and any name starting with `_spark` are reserved, and should not be used by other implementations.
Any extra field names that do not start with an underscore should be assumed to be backwards incompatible, and readers should fail when reading such a schema.

Engines without shredding support are not expected to be able to read Parquet files that use shredding. Since different files may contain conflicting schemas (e.g. a `typed_value` column with incompatible types in two files), it may not be possible to infer or specify a single schema that would allow all Parquet files for a table to be read.