reviews
Messages by Thread
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56315][SQL] Pre-aggregate before `Expand` to reduce data amplification for multiple `COUNT(DISTINCT)` [spark]
via GitHub
Re: [PR] [SPARK-56092][SS] Fix NPE in StreamingQueryException.toString() when cause is null [spark]
via GitHub
Re: [PR] [SPARK-56092][SS] Fix NPE in StreamingQueryException.toString() when cause is null [spark]
via GitHub
Re: [PR] [SPARK-56092][SS] Fix NPE in StreamingQueryException.toString() when cause is null [spark]
via GitHub
Re: [PR] [SPARK-56092][SS] Fix NPE in StreamingQueryException.toString() when cause is null [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
Re: [PR] [SPARK-55568][SQL] Separate schema construction from field stats collection [spark]
via GitHub
[PR] [SPARK-56178][SQL] MSCK REPAIR TABLE for V2 file tables [spark]
via GitHub
Re: [PR] [SPARK-56178][SQL] MSCK REPAIR TABLE for V2 file tables [spark]
via GitHub
[PR] [SPARK-56177][SQL] V2 file bucketing write support [spark]
via GitHub
Re: [PR] [SPARK-56177][SQL] V2 file bucketing write support [spark]
via GitHub
[PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
Re: [PR] [SPARK-43752][SQL] Support column DEFAULT values in V2 write commands [spark]
via GitHub
[PR] [SPARK-56314][SQL][TESTS] Avoid unnecessary RDD->DataFrame conversion in `SQLTestData` [spark]
via GitHub
Re: [PR] [SPARK-56314][SQL][TESTS] Avoid unnecessary RDD->DataFrame conversion in `SQLTestData` [spark]
via GitHub
Re: [PR] [SPARK-56314][SQL][TESTS] Avoid unnecessary RDD->DataFrame conversion in `SQLTestData` [spark]
via GitHub
Re: [PR] [SPARK-56034][SQL] Push down Join through Union when the right side is broadcastable [spark]
via GitHub
Re: [PR] [SPARK-56034][SQL] Push down Join through Union when the right side is broadcastable [spark]
via GitHub
[PR] [SPARK-54938][TEST][FOLLOW-UP] Fix `test_pyarrow_array_type_inference` for pandas >= 3 [spark]
via GitHub
Re: [PR] [SPARK-54938][TEST][FOLLOW-UP] Fix `test_pyarrow_array_type_inference` for pandas >= 3 [spark]
via GitHub
Re: [PR] [SPARK-54938][PYTHON][TEST][FOLLOW-UP] Fix `test_pyarrow_array_type_inference` for pandas >= 3 [spark]
via GitHub
Re: [PR] [SPARK-54938][PYTHON][TEST][FOLLOW-UP] Fix `test_pyarrow_array_type_inference` for pandas >= 3 [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
[PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
Re: [PR] [SPARK-56125][SQL] Simplify schema calculation for Merge Into Schema Evolution [spark]
via GitHub
[PR] [SPARK-56189][PYTHON] Refactor SQL_WINDOW_AGG_ARROW_UDF to use ArrowStreamSerializer [spark]
via GitHub
Re: [PR] [SPARK-56189][PYTHON] Refactor SQL_WINDOW_AGG_ARROW_UDF [spark]
via GitHub
Re: [PR] [SPARK-56189][PYTHON] Refactor SQL_WINDOW_AGG_ARROW_UDF [spark]
via GitHub
Re: [PR] [SPARK-56189][PYTHON] Refactor SQL_WINDOW_AGG_ARROW_UDF [spark]
via GitHub
Re: [PR] [SPARK-56189][PYTHON] Refactor SQL_WINDOW_AGG_ARROW_UDF [spark]
via GitHub
Re: [PR] [SPARK-56189][PYTHON] Refactor SQL_WINDOW_AGG_ARROW_UDF [spark]
via GitHub
[PR] [SPARK-56313][PYTHON] Add type hint for rddsampler.py [spark]
via GitHub
Re: [PR] [SPARK-56313][PYTHON] Add type hint for rddsampler.py [spark]
via GitHub
Re: [PR] [SPARK-56313][PYTHON] Add type hint for rddsampler.py [spark]
via GitHub
Re: [PR] [SPARK-56313][PYTHON] Add type hint for rddsampler.py [spark]
via GitHub
Re: [PR] [SPARK-56313][PYTHON] Add type hint for rddsampler.py [spark]
via GitHub
[PR] [MINOR] Update Java badge to 21 [spark-kubernetes-operator]
via GitHub
Re: [PR] [MINOR] Update Java badge to 21 [spark-kubernetes-operator]
via GitHub
Re: [PR] [MINOR] Update Java badge to 21 [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-56219][PS][FOLLOW-UP] Fix groupby idxmax and idxmin skipna=False for pandas 2.2 [spark]
via GitHub
Re: [PR] [SPARK-56219][PS][FOLLOW-UP] Fix groupby idxmax and idxmin skipna=False for pandas 2.2 [spark]
via GitHub
Re: [PR] [SPARK-56219][PS][FOLLOW-UP] Keep legacy groupby idxmax and idxmin skipna=False behavior for pandas 2 [spark]
via GitHub
Re: [PR] [SPARK-56219][PS][FOLLOW-UP] Keep legacy groupby idxmax and idxmin skipna=False behavior for pandas 2 [spark]
via GitHub
[PR] [SPARK-xxxxx][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-xxxxx][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-xxxxx][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-xxxxx][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
Re: [PR] [SPARK-56350][SQL] Skip ColumnarToRow for Arrow-backed input to Python UDFs [spark]
via GitHub
[PR] [SPARK-56311][PYTHON] Add type hints for daemon.py [spark]
via GitHub
Re: [PR] [SPARK-56311][PYTHON] Add type hints for daemon.py [spark]
via GitHub
Re: [PR] [SPARK-56311][PYTHON] Add type hints for daemon.py [spark]
via GitHub
[PR] [SPARK-56309] Upgrade `log4j` to 2.25.4 [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56309] Upgrade `log4j` to 2.25.4 [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56309] Upgrade `log4j` to 2.25.4 [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56309] Upgrade `log4j` to 2.25.4 [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56309] Upgrade `log4j` to 2.25.4 [spark-kubernetes-operator]
via GitHub
Re: [I] Is this work still progressing? [spark-connect-rust]
via GitHub
[PR] [SPARK-56310][PYTHON] Handle pandas 3 dtype in DataFrame.toPandas [spark]
via GitHub
Re: [PR] [SPARK-56310][PYTHON] Handle pandas 3 dtype in DataFrame.toPandas [spark]
via GitHub
Re: [PR] [SPARK-56310][PYTHON] Handle pandas 3 dtype in DataFrame.toPandas [spark]
via GitHub
Re: [PR] [SPARK-56310][PYTHON] Handle pandas 3 dtype in DataFrame.toPandas [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
Re: [PR] [SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins [spark]
via GitHub
[PR] [SPARK-56123][PYTHON][FOLLOWUP] Avoid using concat_batches for old version of pyarrow [spark]
via GitHub
Re: [PR] [SPARK-56123][PYTHON][FOLLOWUP] Avoid using concat_batches for old version of pyarrow [spark]
via GitHub
Re: [PR] [SPARK-56123][PYTHON][FOLLOWUP] Avoid using concat_batches for old version of pyarrow [spark]
via GitHub
Re: [PR] [SPARK-56123][PYTHON][FOLLOWUP] Avoid using concat_batches for old version of pyarrow [spark]
via GitHub
[PR] [WIP][SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [WIP][SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-55715][SQL] Keep `outputOrdering` when `GroupPartitionsExec` coalesces partitions [spark]
via GitHub
Re: [PR] [SPARK-56274] Simplify `SparkClusterSubmissionWorker.getResourceSpec` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56274] Simplify `SparkClusterSubmissionWorker.getResourceSpec` [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-55109][SQL] Enhance RaiseError to support valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to support valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to generate valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to generate valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to generate valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to generate valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to generate valid SQL [spark]
via GitHub
Re: [PR] [SPARK-55109][SQL] Enhance RaiseError to generate valid SQL [spark]
via GitHub
Re: [PR] [SPARK-46036][SQL] Removing error-class from raise_error function [spark]
via GitHub
[PR] [SPARK-56308] Remove invalid `log4j2.contextSelector` from test `log4j2.properties` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56308] Remove invalid `log4j2.contextSelector` from test `log4j2.properties` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56308] Remove invalid `log4j2.contextSelector` from test `log4j2.properties` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56308] Remove invalid `log4j2.contextSelector` from test `log4j2.properties` [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-56307] Upgrade `log4j` to 2.25.4 [spark]
via GitHub
Re: [PR] [SPARK-56307][BUILD] Upgrade `log4j` to 2.25.4 [spark]
via GitHub
Re: [PR] [SPARK-56307][BUILD] Upgrade `log4j` to 2.25.4 [spark]
via GitHub
Re: [PR] [SPARK-56307][BUILD] Upgrade `log4j` to 2.25.4 [spark]
via GitHub
Re: [PR] [SPARK-56307][BUILD] Upgrade `log4j` to 2.25.4 [spark]
via GitHub
[I] Spark Declarative Pipelines and Unity Catalog [spark]
via GitHub
Re: [I] Spark Declarative Pipelines and Unity Catalog [spark]
via GitHub
Re: [PR] Change volumeMounts and volumes types to array [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56235][CORE] Add reverse index in TaskSetManager to avoid O(N) scans in executorLost [spark]
via GitHub
Re: [PR] [SPARK-56235][CORE] Add reverse index in TaskSetManager to avoid O(N) scans in executorLost [spark]
via GitHub
Re: [PR] [SPARK-56074][INFRA] Improve AGENTS.md with inline build/test commands, PR workflow, and dev notes [spark]
via GitHub
Re: [PR] [SPARK-56074][INFRA] Improve AGENTS.md with inline build/test commands, PR workflow, and dev notes [spark]
via GitHub
Re: [PR] [SPARK-43413][SQL] Support QUALIFY clause [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark]
via GitHub
[I] Cannot use deploy mode: cluster from python code. [spark-kubernetes-operator]
via GitHub
Re: [I] Cannot use deploy mode: cluster from python code. [spark-kubernetes-operator]
via GitHub
Re: [I] Cannot use deploy mode: cluster from python code. [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-56190][SQL] Support nested partition columns for DSV2 PartitionPredicate [spark]
via GitHub
Re: [PR] [SPARK-56190][SQL] Support nested partition columns for DSV2 PartitionPredicate [spark]
via GitHub
Re: [PR] [SPARK-56190][SQL] Support nested partition columns for DSV2 PartitionPredicate [spark]
via GitHub
Re: [PR] [SPARK-56190][SQL] Support nested partition columns for DSV2 PartitionPredicate [spark]
via GitHub
Re: [I] [DOCS] Document return types for aggregate functions (stddev, variance, etc.) [spark]
via GitHub