corwinjoy commented on code in PR #16351:
URL: https://github.com/apache/datafusion/pull/16351#discussion_r2165246656


##########
docs/source/user-guide/configs.md:
##########
@@ -81,6 +81,8 @@ Environment variables are read during `SessionConfig` initialisation so they mus
 | datafusion.execution.parquet.allow_single_file_parallelism | true | (writing) Controls whether DataFusion will attempt to speed up writing parquet files by serializing them in parallel. Each column in each row group in each output file are serialized in parallel leveraging a maximum possible core count of n_files*n_row_groups*n_columns. |
 | datafusion.execution.parquet.maximum_parallel_row_group_writers | 1 | (writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame. |
 | datafusion.execution.parquet.maximum_buffered_record_batches_per_stream | 2 | (writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame. |
+| datafusion.execution.parquet.file_decryption_properties | NULL | Optional file decryption properties |
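
A minimal sketch of how these writer-parallelism settings could be adjusted from datafusion-cli, assuming the usual `SET` statement syntax for `datafusion.*` session options (the values shown are purely illustrative):

```
-- Illustrative values; assumes SET works for these keys the same way it does
-- for other datafusion.* session options.
SET datafusion.execution.parquet.maximum_parallel_row_group_writers = 4;
SET datafusion.execution.parquet.maximum_buffered_record_batches_per_stream = 32;
```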

Review Comment:
   Yes, this is a good suggestion. We actually need to update the 
`TableParquetOptions` docs and remove this entry, since this option got moved. 
@alamb, one question: can you suggest where to put a CLI usage example? I guess 
I could add something under `datafusion-cli/tests/sql`. The options will look 
like what we have for KMS, but I want to set up a running example. For example, 
for the KMS we have:
   ```
       let ddl = format!(
           "CREATE EXTERNAL TABLE encrypted_parquet_table_2 \
           STORED AS PARQUET LOCATION '{file_path}' OPTIONS (\
           'format.crypto.factory_id' '{ENCRYPTION_FACTORY_ID}' \
           )"
       );
   ```
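   
   For the new `file_decryption_properties` option, a CLI-oriented sketch could look roughly like the following. The `format.crypto.*` key and the hex key value here are placeholders I have not verified against the final `TableParquetOptions` naming, so treat them as hypothetical until the exact keys are settled:
   ```
   -- Hypothetical datafusion-cli usage; the crypto option key below is a
   -- placeholder, not a verified name.
   CREATE EXTERNAL TABLE encrypted_parquet_table
   STORED AS PARQUET LOCATION 'data/encrypted.parquet'
   OPTIONS (
       'format.crypto.file_decryption.footer_key_as_hex' '30313233343536373839303132333435'
   );

   SELECT * FROM encrypted_parquet_table LIMIT 10;
   ```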
   


