emkornfield commented on code in PR #461:
URL: https://github.com/apache/parquet-format/pull/461#discussion_r1881108489


##########
VariantShredding.md:
##########
@@ -25,290 +25,320 @@
 The Variant type is designed to store and process semi-structured data 
efficiently, even with heterogeneous values.
 Query engines encode each Variant value in a self-describing format, and store 
it as a group containing `value` and `metadata` binary fields in Parquet.
 Since data is often partially homogeneous, it can be beneficial to extract certain fields into separate Parquet columns to further improve performance.
-We refer to this process as **shredding**.
-Each Parquet file remains fully self-describing, with no additional metadata 
required to read or fully reconstruct the Variant data from the file.
-Combining shredding with a binary residual provides the flexibility to 
represent complex, evolving data with an unbounded number of unique fields 
while limiting the size of file schemas, and retaining the performance benefits 
of a columnar format.
+This process is **shredding**.
 
-This document focuses on the shredding semantics, Parquet representation, 
implications for readers and writers, as well as the Variant reconstruction.
-For now, it does not discuss which fields to shred, user-facing API changes, 
or any engine-specific considerations like how to use shredded columns.
-The approach builds upon the [Variant Binary Encoding](VariantEncoding.md), 
and leverages the existing Parquet specification.
+Shredding enables the use of Parquet's columnar representation for more 
compact data encoding, column statistics for data skipping, and partial 
projections.
 
-At a high level, we replace the `value` field of the Variant Parquet group 
with one or more fields called `object`, `array`, `typed_value`, and 
`variant_value`.
-These represent a fixed schema suitable for constructing the full Variant 
value for each row.
+For example, the query `SELECT variant_get(event, '$.event_ts', 'timestamp') 
FROM tbl` only needs to load field `event_ts`, and if that column is shredded, 
it can be read by columnar projection without reading or deserializing the rest 
of the `event` Variant.
+Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event, 
'$.event_type', 'string') = 'signup'`, the `event_type` shredded column 
metadata can be used for skipping and to lazily load the rest of the Variant.
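
For illustration, here is a sketch of how such an `event` column might be shredded (illustrative only, assuming the object shredding layout described later in this document; the `TIMESTAMP` annotation follows [LogicalTypes.md](LogicalTypes.md)):

```
optional group event (VARIANT) {
  required binary metadata;
  optional binary value;
  optional group typed_value {
    required group event_ts {
      optional binary value;
      optional int64 typed_value (TIMESTAMP(true, MICROS));
    }
    required group event_type {
      optional binary value;
      optional binary typed_value (STRING);
    }
  }
}
```

With a schema like this, `variant_get(event, '$.event_ts', 'timestamp')` can be answered by reading only the `typed_value.event_ts` columns, and the column statistics of `typed_value.event_type.typed_value` can drive skipping for the `= 'signup'` predicate.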
 
-Shredding allows a query engine to reap the full benefits of Parquet's 
columnar representation, such as more compact data encoding, min/max statistics 
for data skipping, and I/O and CPU savings from pruning unnecessary fields not 
accessed by a query (including the non-shredded Variant binary data).
-Without shredding, any query that accesses a Variant column must fetch all 
bytes of the full binary buffer.
-With shredding, we can get nearly equivalent performance as in a relational 
(scalar) data model.
+## Variant Metadata
 
-For example, `select variant_get(variant_col, ‘$.field1.inner_field2’, 
‘string’) from tbl` only needs to access `inner_field2`, and the file scan 
could avoid fetching the rest of the Variant value if this field was shredded 
into a separate column in the Parquet schema.
-Similarly, for the query `select * from tbl where variant_get(variant_col, 
‘$.id’, ‘integer’) = 123`, the scan could first decode the shredded `id` 
column, and only fetch/decode the full Variant value for rows that pass the 
filter.
+Variant metadata is stored in the top-level Variant group in a binary 
`metadata` column regardless of whether the Variant value is shredded.
 
-# Parquet Example
+All `value` columns within the Variant must use the same `metadata`.
+All field names of a Variant, whether shredded or not, must be present in the 
metadata.
 
-Consider the following Parquet schema together with how Variant values might 
be mapped to it.
-Notice that we represent each shredded field in `object` as a group of two 
fields, `typed_value` and `variant_value`.
-We extract all homogeneous data items of a certain path into `typed_value`, and set aside incompatible data items in `variant_value`.
-Intuitively, incompatibilities within the same path may occur because we store 
the shredding schema per Parquet file, and each file can contain several row 
groups.
-Selecting a type for each field that is acceptable for all rows would be 
impractical because it would require buffering the contents of an entire file 
before writing.
+## Value Shredding
 
-Typically, the expectation is that `variant_value` exists at every level as an 
option, along with one of `object`, `array` or `typed_value`.
-If the actual Variant value contains a type that does not match the provided 
schema, it is stored in `variant_value`.
-A `variant_value` may also be populated if an object can be partially represented: any fields that are present in the schema must be written to those fields, and any missing fields are written to `variant_value`.
-
-The `metadata` column is unchanged from its unshredded representation, and may 
be referenced in `variant_value` fields in the shredded data.
+Variant values are stored in Parquet fields named `value`.
+Each `value` field may have an associated shredded field named `typed_value` 
that stores the value when it matches a specific type.
+When `typed_value` is present, readers **must** reconstruct shredded values 
according to this specification.
 
+For example, a Variant field, `measurement`, may be shredded as long values by adding `typed_value` with type `int64`:
 ```
-optional group variant_col {
- required binary metadata;
- optional binary variant_value;
- optional group object {
-  optional group a {
-   optional binary variant_value;
-   optional int64 typed_value;
-  }
-  optional group b {
-   optional binary variant_value;
-   optional group object {
-    optional group c {
-      optional binary variant_value;
-      optional binary typed_value (STRING);
-    }
-   }
-  }
- }
+required group measurement (VARIANT) {
+  required binary metadata;
+  optional binary value;
+  optional int64 typed_value;
 }
 ```
 
-| Variant Value | Top-level variant_value | b.variant_value | a.typed_value | a.variant_value | b.object.c.typed_value | b.object.c.variant_value | Notes |
-|---------------|-------------------------|-----------------|---------------|-----------------|------------------------|--------------------------|-------|
-| {a: 123, b: {c: “hello”}} | null | null | 123 | null | hello | null | All values shredded |
-| {a: 1.23, b: {c: “123”}} | null | null | null | 1.23 | 123 | null | a is not an integer |
-| {a: 123, b: {c: null}} | null | null | null | 123 | null | null | b.object.c set to non-null to indicate VariantNull |
-| {a: 123, b: {}} | null | null | null | 123 | null | null | b.object.c set to null, to indicate that c is missing |
-| {a: 123, d: 456} | {d: 456} | null | 123 | null | null | null | Extra field d is stored as variant_value |
-| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null | null | null | Not an object |
-
-# Parquet Layout
+The Parquet columns used to store variant metadata and values must be accessed 
by name, not by position.
 
-The `array` and `object` fields represent Variant array and object types, 
respectively.
-Arrays must use the three-level list structure described in 
[LogicalTypes.md](LogicalTypes.md).
+The series of measurements `34, null, "n/a", 100` would be stored as:
 
-An `object` field must be a group.
-Each field name of this inner group corresponds to the Variant value's object 
field name.
-Each inner field's type is a recursively shredded variant value: that is, the 
fields of each object field must be one or more of `object`, `array`, 
`typed_value` or `variant_value`.
+| Value   | `metadata`       | `value`               | `typed_value` |
+|---------|------------------|-----------------------|---------------|
+| 34      | `01 00` v1/empty | null                  | `34`          |
+| null    | `01 00` v1/empty | `00` (null)           | null          |
+| "n/a"   | `01 00` v1/empty | `13 6E 2F 61` (`n/a`) | null          |
+| 100     | `01 00` v1/empty | null                  | `100`         |
 
-Similarly the elements of an `array` must be a group containing one or more of 
`object`, `array`, `typed_value` or `variant_value`.
+Both `value` and `typed_value` are optional fields used together to encode a 
single value.
+Values in the two fields must be interpreted according to the following table:
 
-Each leaf in the schema can store an arbitrary Variant value.
-It contains a `variant_value` binary field and a `typed_value` field.
-If non-null, `variant_value` represents the value stored as a Variant binary.
-The `typed_value` field may be any type that has a corresponding Variant type.
-For each value in the data, at most one of the `typed_value` and 
`variant_value` may be non-null.
-A writer may omit either field, which is equivalent to all rows being null.
+| `value`  | `typed_value` | Meaning                                                      |
+|----------|---------------|--------------------------------------------------------------|
+| null     | null          | The value is missing; only valid for shredded object fields  |
+| non-null | null          | The value is present and may be any type, including null     |
+| null     | non-null      | The value is present and is the shredded type                |
+| non-null | non-null      | The value is present and is a partially shredded object      |
 
-Dictionary IDs in a `variant_value` field refer to entries in the top-level 
`metadata` field.
+An object is _partially shredded_ when the `value` is an object and the 
`typed_value` is a shredded object.
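
For illustration (a hypothetical schema; the object `{a: 123, d: 456}` is borrowed from the examples in this document), consider shredding only field `a`:

```
optional group event (VARIANT) {
  required binary metadata;
  optional binary value;
  optional group typed_value {
    required group a {
      optional binary value;
      optional int64 typed_value;
    }
  }
}
```

The shredded field would be stored as `typed_value.a.typed_value = 123`, while the unshredded remainder is encoded as the Variant object `{d: 456}` in the top-level `value`; both `value` and `typed_value` are non-null, making this a partially shredded object.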
 
-For an `object`, a null field means that the field does not exist in the 
reconstructed Variant object.
-All elements of an `array` must be non-null, since array elements cannot be missing.
+If both fields are non-null and either is not an object, the value is invalid. 
Readers must either fail or return the `typed_value`.

Review Comment:
   > @rdblue and I also talked about this for a long time and I think I favor 
the current text. I feel like the additional text adds a bit of confusion 
around this.
   
   We also talked about it in the sync and didn't come to a conclusion.  IIUC @rdblue wanted the error handling language to eliminate the possibility of one specific reader trying to define an alternative behavior down the road. I don't think we should be mandating this in the spec, but I do agree we should be clarifying that this won't be relitigated.
   
   > A shredded reading of a field is always correct since a shredded reader will not be able to check an unshredded value for inconsistency; a reader using an unshredded value when the shredded value is present is always incorrect.
   
   I think semantics are important here. I tried to cover this by saying it is "consistent", i.e. it would always provide consistent results to the end user, which is a nice property to have, and I agree most implementations should use it if they aren't going to error out.
   
   "correct", I think, is a property of actually returning the the variant data 
without modification.  Without understanding the bugs that introduced 
inconsistent shredding, I don't think it is possible say for sure the shredded 
values are correct.  
   
   The reason why I think it is important to say it is not "correct" is that in other instances (e.g. requiring no overlap between shredded/non-shredded fields) writers could produce out-of-spec files that would not face any issues. As an alternative, I think it would be OK to have overlapping values as long as they are consistent; the real question then becomes what happens if the values are not actually consistent, and in that case I think it would be very hard to understand what the correct results are.
   
   As an analogy from another part of Parquet (and a real-world example I've encountered): assume a schema like `list<struct<required a int, required b int>>` (see the sketch below). We do not require readers to check that repetition levels and definition levels are consistent for columns a and b (and many don't). If they are not equal it is a bug; we can't really say which one, or whether either, is "correct" without understanding the bug that introduced the inconsistency. A "consistent" result would be to always use the left-most column's repetition and definition levels.
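
   For reference, a minimal sketch of that example schema using the standard three-level list layout from [LogicalTypes.md](LogicalTypes.md) (the outer field name `my_list` is hypothetical):

   ```
   optional group my_list (LIST) {
     repeated group list {
       required group element {
         required int32 a;
         required int32 b;
       }
     }
   }
   ```

   Since `a` and `b` are both required leaves under the same repeated group, their repetition and definition levels are fully determined by the list structure and should always be identical; any mismatch is a writer bug.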
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
