[I] AvroParquetWriter caches old AWS config even after closing, cleaning up, and creating a new writer [parquet-java]

2024-12-20 Thread via GitHub


CandiceSu opened a new issue, #3106:
URL: https://github.com/apache/parquet-java/issues/3106

   ### Describe the bug, including details regarding any error messages, version, and platform.
   
   Hi, we created an AvroParquetWriter to write to AWS S3.
   This is how we create the writer:
   
   ```java
   ParquetWriter writer = AvroParquetWriter.builder(path)
       .withConf(conf)
       .withWriteMode(ParquetFileWriter.Mode.OVERWRITE)
       .withSchema(avroSchema)
       .build();
   ```
   
   And this is how we create the conf:
   ```java
   AwsCredentialsProvider credentialsProvider = DefaultCredentialsProvider.builder().build();
   var credentials = credentialsProvider.resolveCredentials();
   AwsSessionCredentials sessionCredentials = (AwsSessionCredentials) credentials;

   Configuration conf = new Configuration();
   conf.set("fs.s3a.access.key", sessionCredentials.accessKeyId());
   conf.set("fs.s3a.secret.key", sessionCredentials.secretAccessKey());
   conf.set("fs.s3a.session.token", sessionCredentials.sessionToken());
   conf.set("fs.s3a.endpoint", "s3.my_region.amazonaws.com");
   ```
   
   And we are running a streaming application in ECS; each time a task comes in, we get a new conf and create a new writer.
   We can see that the token and everything else are correct and refreshed each time, but the writer fails to write to S3 and throws 'software.amazon.awssdk.services.s3.model.S3Exception: The provided token has expired' after the application has been running for a few hours.
   
   So it looks like the writer, once initialized the first time, keeps using that first configuration, even if we close the writer, clear the cache, and create a new writer with a new conf. After a few hours you naturally hit the token-expiry issue, once that first token expires.
   
   We even did a small test: create the writer a first time with a correct conf, and writing to S3 works. Close the first writer, create a writer a second time with an invalid conf, and writing to S3 still works!
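
   This symptom matches Hadoop's `FileSystem` cache rather than the writer itself: `FileSystem.get()` keys cached instances on URI scheme, authority, and user, not on the `Configuration` contents, so a second writer built with fresh credentials can silently reuse the first cached S3A instance. A minimal sketch of the commonly suggested workaround, assuming the Hadoop S3A connector and its standard per-scheme cache switch:

   ```java
   // Sketch, not a confirmed fix: disable Hadoop's FileSystem cache for the
   // s3a scheme so each new Configuration (and its session credentials)
   // produces a fresh S3A filesystem instead of reusing the cached one.
   Configuration conf = new Configuration();
   conf.setBoolean("fs.s3a.impl.disable.cache", true);
   conf.set("fs.s3a.access.key", sessionCredentials.accessKeyId());
   conf.set("fs.s3a.secret.key", sessionCredentials.secretAccessKey());
   conf.set("fs.s3a.session.token", sessionCredentials.sessionToken());
   ```

   An alternative that keeps the cache enabled is to evict stale instances explicitly with `FileSystem.closeAll()` before building the next writer.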
   
   Is there anything wrong with the way we initialize the writer or the conf?
   Thanks!
   
   
   
   
   ### Component(s)
   
   Avro




Re: [I] How to disable statistics in version 1.13.1? [parquet-java]

2024-12-20 Thread via GitHub


felipepessoto commented on issue #3103:
URL: https://github.com/apache/parquet-java/issues/3103#issuecomment-2557895492

   Thanks. I've set `parquet.statistics.truncate.length` to 1. I wasn't sure how `parquet.columnindex.truncate.length` works, but using `parquet.statistics.truncate.length` alone also worked for byte arrays.
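
   For reference, a minimal sketch of wiring both truncation properties into a writer's `Configuration`; the writer setup, `path`, and `avroSchema` here are illustrative, not from this thread:

   ```java
   // Sketch: truncate stored min/max statistics and column-index bounds
   // for binary columns to 1 byte, using the properties discussed above.
   Configuration conf = new Configuration();
   conf.setInt("parquet.statistics.truncate.length", 1);
   conf.setInt("parquet.columnindex.truncate.length", 1);

   ParquetWriter<GenericRecord> writer = AvroParquetWriter
       .<GenericRecord>builder(path)
       .withConf(conf)
       .withSchema(avroSchema)
       .build();
   ```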




Re: [PR] PARQUET-34: implement Size() filter for repeated columns [parquet-java]

2024-12-20 Thread via GitHub


emkornfield commented on PR #3098:
URL: https://github.com/apache/parquet-java/pull/3098#issuecomment-2557642415

   I can try to look in more detail, but stats can certainly be used here. I imagine they are most useful for repeated fields when trying to discriminate between repeated fields that mostly have 0 or 1 elements, and trying to filter out cases with > 0 or 1 elements. E.g. if all fields have 0 observed rep_levels of 1, then one knows for sure all lists are of length 0 or 1 (whether there are any lists of length 0 or 1 can be determined by inspecting the def-level histogram). For larger-cardinality lists the filtering power diminishes significantly (it's hard to distinguish, based on the histograms, many very small lists from one very large one).
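
   As a sketch of that reasoning (the histogram array here is an assumed input, not an existing parquet-java API):

   ```java
   // Sketch of the rep-level argument above, assuming a column whose maximum
   // repetition level is 1. repLevelHistogram[r] counts the values observed
   // with repetition level r.
   static boolean allListsHaveAtMostOneElement(long[] repLevelHistogram) {
     // Repetition level 1 marks a value that continues an already-started
     // list, so zero observations at level 1 means no list has a 2nd element.
     return repLevelHistogram.length < 2 || repLevelHistogram[1] == 0;
   }
   ```

   With that bound, a `size(col) > 1` predicate could drop the page outright; separating length 0 from length 1 would still require the def-level histogram.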




Re: [PR] Simplify Variant shredding and refactor for clarity [parquet-format]

2024-12-20 Thread via GitHub


emkornfield commented on code in PR #461:
URL: https://github.com/apache/parquet-format/pull/461#discussion_r1894292367


##
VariantShredding.md:
##
@@ -25,290 +25,320 @@
 The Variant type is designed to store and process semi-structured data efficiently, even with heterogeneous values.
 Query engines encode each Variant value in a self-describing format, and store it as a group containing `value` and `metadata` binary fields in Parquet.
 Since data is often partially homogenous, it can be beneficial to extract certain fields into separate Parquet columns to further improve performance.
-We refer to this process as **shredding**.
-Each Parquet file remains fully self-describing, with no additional metadata required to read or fully reconstruct the Variant data from the file.
-Combining shredding with a binary residual provides the flexibility to represent complex, evolving data with an unbounded number of unique fields while limiting the size of file schemas, and retaining the performance benefits of a columnar format.
+This process is **shredding**.

-This document focuses on the shredding semantics, Parquet representation, implications for readers and writers, as well as the Variant reconstruction.
-For now, it does not discuss which fields to shred, user-facing API changes, or any engine-specific considerations like how to use shredded columns.
-The approach builds upon the [Variant Binary Encoding](VariantEncoding.md), and leverages the existing Parquet specification.
+Shredding enables the use of Parquet's columnar representation for more compact data encoding, column statistics for data skipping, and partial projections.

-At a high level, we replace the `value` field of the Variant Parquet group with one or more fields called `object`, `array`, `typed_value`, and `variant_value`.
-These represent a fixed schema suitable for constructing the full Variant value for each row.
+For example, the query `SELECT variant_get(event, '$.event_ts', 'timestamp') FROM tbl` only needs to load field `event_ts`, and if that column is shredded, it can be read by columnar projection without reading or deserializing the rest of the `event` Variant.
+Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event, '$.event_type', 'string') = 'signup'`, the `event_type` shredded column metadata can be used for skipping and to lazily load the rest of the Variant.

-Shredding allows a query engine to reap the full benefits of Parquet's columnar representation, such as more compact data encoding, min/max statistics for data skipping, and I/O and CPU savings from pruning unnecessary fields not accessed by a query (including the non-shredded Variant binary data).
-Without shredding, any query that accesses a Variant column must fetch all bytes of the full binary buffer.
-With shredding, we can get nearly equivalent performance as in a relational (scalar) data model.
+## Variant Metadata

-For example, `select variant_get(variant_col, ‘$.field1.inner_field2’, ‘string’) from tbl` only needs to access `inner_field2`, and the file scan could avoid fetching the rest of the Variant value if this field was shredded into a separate column in the Parquet schema.
-Similarly, for the query `select * from tbl where variant_get(variant_col, ‘$.id’, ‘integer’) = 123`, the scan could first decode the shredded `id` column, and only fetch/decode the full Variant value for rows that pass the filter.
+Variant metadata is stored in the top-level Variant group in a binary `metadata` column regardless of whether the Variant value is shredded.

-# Parquet Example
+All `value` columns within the Variant must use the same `metadata`.
+All field names of a Variant, whether shredded or not, must be present in the metadata.

-Consider the following Parquet schema together with how Variant values might be mapped to it.
-Notice that we represent each shredded field in `object` as a group of two fields, `typed_value` and `variant_value`.
-We extract all homogenous data items of a certain path into `typed_value`, and set aside incompatible data items in `variant_value`.
-Intuitively, incompatibilities within the same path may occur because we store the shredding schema per Parquet file, and each file can contain several row groups.
-Selecting a type for each field that is acceptable for all rows would be impractical because it would require buffering the contents of an entire file before writing.
+## Value Shredding

-Typically, the expectation is that `variant_value` exists at every level as an option, along with one of `object`, `array` or `typed_value`.
-If the actual Variant value contains a type that does not match the provided schema, it is stored in `variant_value`.
-An `variant_value` may also be populated if an object can be partially represented: any fields that are present in the schema must be written to those fields, and any missing fields are written to `variant_value`.
-
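
For orientation, a hypothetical sketch of the pre-refactor layout that the removed lines above describe, expressed with parquet-java's schema parser; the `field1` name and `int64` type are illustrative, not from the spec:

```java
// Sketch of a shredded Variant schema per the removed text above: each
// shredded object field is a group of typed_value and variant_value, with
// a top-level variant_value holding the residual. Illustrative only.
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

MessageType schema = MessageTypeParser.parseMessageType(
    "message tbl {\n"
    + "  optional group variant_col {\n"
    + "    required binary metadata;\n"
    + "    optional binary variant_value;\n"
    + "    optional group object {\n"
    + "      required group field1 {\n"
    + "        optional int64 typed_value;\n"
    + "        optional binary variant_value;\n"
    + "      }\n"
    + "    }\n"
    + "  }\n"
    + "}");
```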

Re: [PR] GH-3080: HadoopStreams to support ByteBufferPositionedReadable [parquet-java]

2024-12-20 Thread via GitHub


steveloughran commented on PR #3096:
URL: https://github.com/apache/parquet-java/pull/3096#issuecomment-2557662530

   I'm away until 2025; will reply to comments then. Thanks for the review.

