reviews
Messages by Thread
Re: [PR] [SPARK-55651] Improve `create_spark_jira.py` to support the version parameter [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55651] Improve `create_spark_jira.py` to support the version parameter [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-55650] Improve `create_spark_jira.py` to support the version parameter [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55650] Improve `create_spark_jira.py` to support the version parameter [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55650] Improve `create_spark_jira.py` to support the version parameter [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55650] Improve `create_spark_jira.py` to support the version parameter [spark-connect-swift]
via GitHub
[PR] [SPARK-55649][K8S] Promote `Kubernetes(Driver|Executor)?FeatureConfigStep` to `Stable` [spark]
via GitHub
Re: [PR] [SPARK-55649][K8S] Promote `Kubernetes(Driver|Executor)?FeatureConfigStep` traits to `Stable` [spark]
via GitHub
Re: [PR] [SPARK-55649][K8S] Promote `Kubernetes(Driver|Executor)?FeatureConfigStep` traits to `Stable` [spark]
via GitHub
Re: [PR] [SPARK-55649][K8S] Promote `Kubernetes(Driver|Executor)?FeatureConfigStep` traits to `Stable` [spark]
via GitHub
[I] Add `Information for new contributors` in Issues [spark]
via GitHub
Re: [I] Add `Information for new contributors` in Issues [spark]
via GitHub
[PR] [SPARK-55296][PS][FOLLOW-UP] Disconnect the anchor for more cases to mimic the CoW mode behavior [spark]
via GitHub
Re: [PR] [SPARK-55296][PS][FOLLOW-UP] Disconnect the anchor for more cases to mimic the CoW mode behavior [spark]
via GitHub
Re: [PR] [SPARK-55296][PS][FOLLOW-UP] Disconnect the anchor for more cases to mimic the CoW mode behavior [spark]
via GitHub
Re: [PR] [SPARK-55296][PS][FOLLOW-UP] Disconnect the anchor for more cases to mimic the CoW mode behavior [spark]
via GitHub
[PR] [SPARK-55648][PS] Handle an unexpected keyword argument error `groupby(axis)` with pandas 3 [spark]
via GitHub
Re: [PR] [SPARK-55648][PS] Handle an unexpected keyword argument error `groupby(axis)` with pandas 3 [spark]
via GitHub
Re: [PR] [SPARK-55648][PS] Handle an unexpected keyword argument error `groupby(axis)` with pandas 3 [spark]
via GitHub
Re: [PR] [SPARK-55648][PS] Handle an unexpected keyword argument error `groupby(axis)` with pandas 3 [spark]
via GitHub
[PR] [SPARK-55647][SQL] Fix ConstantPropagation incorrectly replacing attributes with non-binary-stable collations [spark]
via GitHub
Re: [PR] [SPARK-55647][SQL] Fix `ConstantPropagation` incorrectly replacing attributes with non-binary-stable collations [spark]
via GitHub
Re: [PR] [SPARK-55647][SQL] Fix `ConstantPropagation` incorrectly replacing attributes with non-binary-stable collations [spark]
via GitHub
[PR] [SPARK-55646] Refactored SQLExecution.withThreadLocalCaptured to separate thread-local capture from execution [spark]
via GitHub
Re: [PR] [SPARK-55646][SQL] Refactored SQLExecution.withThreadLocalCaptured to separate thread-local capture from execution [spark]
via GitHub
Re: [PR] [SPARK-55646][SQL] Refactored SQLExecution.withThreadLocalCaptured to separate thread-local capture from execution [spark]
via GitHub
Re: [PR] [SPARK-55646][SQL] Refactored SQLExecution.withThreadLocalCaptured to separate thread-local capture from execution [spark]
via GitHub
Re: [PR] [SPARK-55646][SQL] Refactored SQLExecution.withThreadLocalCaptured to separate thread-local capture from execution [spark]
via GitHub
[PR] [SPARK-55644] Add `InstanceConfig` example [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55644] Add `instanceConfig` `SparkApplication` example [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55644] Add `instanceConfig` `SparkApplication` example [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-55643][INFRA] Add connection timeout to JIRA client to prevent hanging and enable retries [spark]
via GitHub
Re: [PR] [SPARK-55643][INFRA] Add connection timeout to JIRA client to prevent hanging and enable retries [spark]
via GitHub
Re: [PR] [SPARK-55643][INFRA] Add connection timeout to JIRA client to prevent hanging and enable retries [spark]
via GitHub
Re: [PR] [SPARK-55643][INFRA] Add connection timeout to JIRA client to prevent hanging and enable retries [spark]
via GitHub
Re: [PR] [SPARK-55643][INFRA] Add connection timeout to JIRA client to prevent hanging and enable retries [spark]
via GitHub
[PR] [SPARK-55642] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55642] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55642] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55642] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-connect-swift]
via GitHub
[PR] [SPARK-55641] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55641] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55641] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55641] Add connection timeout to JIRA client to prevent hanging and enable retries [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-54325][TESTS] Use custom matcher in ExecutorSuite testThrowable [spark]
via GitHub
Re: [PR] [SPARK-54325][TESTS] Use custom matcher in ExecutorSuite testThrowable [spark]
via GitHub
Re: [PR] [SPARK-54325][TESTS] Use custom matcher in ExecutorSuite testThrowable [spark]
via GitHub
Re: [PR] [SPARK-46168][PS] Implementation of idxmin Axis argument [spark]
via GitHub
Re: [PR] [SPARK-46168][PS] Implementation of idxmin Axis argument [spark]
via GitHub
Re: [PR] [SPARK-46168][PS] Implementation of idxmin Axis argument [spark]
via GitHub
Re: [PR] [SPARK-53753][BUILD][SQL] shade antlr4-runtime [spark]
via GitHub
Re: [PR] [SPARK-53753][BUILD][SQL] shade antlr4-runtime [spark]
via GitHub
Re: [PR] [SPARK-53504][SQL] Type framework [spark]
via GitHub
[PR] [SPARK-54868][PYTHON][INFRA][FOLLOWUP] Add PYSPARK_TEST_TIMEOUT to hosted runner test action [spark]
via GitHub
Re: [PR] [SPARK-54868][PYTHON][INFRA][FOLLOWUP] Add PYSPARK_TEST_TIMEOUT to hosted runner test action [spark]
via GitHub
Re: [PR] [SPARK-54868][PYTHON][INFRA][FOLLOWUP] Add PYSPARK_TEST_TIMEOUT to hosted runner test action [spark]
via GitHub
[PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS][FOLLOW-UP] Fix `_ignore_arrow_dtypes` util for DataFrame [spark]
via GitHub
[PR] [Minor] Update SparkSessionBuilder.create to use existing SparkContext if it already exists [spark]
via GitHub
Re: [PR] [Minor] Update SparkSessionBuilder.create to use existing SparkContext if it already exists [spark]
via GitHub
Re: [PR] [Minor] Update SparkSessionBuilder.create to use existing SparkContext if it already exists [spark]
via GitHub
Re: [PR] [SPARK-55658] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658][PYTHON] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658][PYTHON] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658][PYTHON] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] [SPARK-55658][PYTHON] SparkSessionBuilder.create in PySpark classic should mirror getOrCreate path as much as possible [spark]
via GitHub
Re: [PR] Upgrade Spark 4.0 [spark]
via GitHub
Re: [PR] Upgrade Spark 4.0 [spark]
via GitHub
[PR] [WIP][DOCS] Clarify DataFrames in quickstart [spark]
via GitHub
Re: [PR] [WIP][DOCS] Clarify DataFrames in quickstart [spark]
via GitHub
Re: [PR] [WIP][DOCS] Clarify DataFrames in quickstart [spark]
via GitHub
[PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
Re: [PR] [SPARK-55631][SQL] ALTER TABLE must invalidate cache for DSv2 tables [spark]
via GitHub
[PR] fix(tests): add missing main block to test_session.py [spark]
via GitHub
Re: [PR] fix(tests): add missing main block to test_session.py [spark]
via GitHub
Re: [PR] fix(tests): add missing main block to test_session.py [spark]
via GitHub
[PR] [DRAFT][Geo][SQL] Propagate WKB parsing errors for Geometry and Geography [spark]
via GitHub
Re: [PR] [SPARK-55640][Geo][SQL] Propagate WKB parsing errors for Geometry and Geography [spark]
via GitHub
Re: [PR] [SPARK-55640][Geo][SQL] Propagate WKB parsing errors for Geometry and Geography [spark]
via GitHub
[PR] [SPARK-55637][SQL][TESTS] Generalize `postgres-krb-setup.sh` to find config files [spark]
via GitHub
Re: [PR] [SPARK-55637][SQL][TESTS] Generalize `postgres-krb-setup.sh` to find config files [spark]
via GitHub
Re: [PR] [SPARK-55637][SQL][TESTS] Generalize `postgres-krb-setup.sh` to find config files [spark]
via GitHub
Re: [PR] [SPARK-55637][SQL][TESTS] Generalize `postgres-krb-setup.sh` to find config files [spark]
via GitHub
Re: [PR] [SPARK-55637][SQL][TESTS] Generalize `postgres-krb-setup.sh` to find config files [spark]
via GitHub
[PR] [to-do] Add detailed errors in case of deduplication of invalid columns [spark]
via GitHub
Re: [PR] [SPARK-55636][CONNECT] Add detailed errors in case of deduplication of invalid columns [spark]
via GitHub
Re: [PR] [SPARK-55636][CONNECT] Add detailed errors in case of deduplication of invalid columns [spark]
via GitHub
Re: [PR] [SPARK-55636][CONNECT] Add detailed errors in case of deduplication of invalid columns [spark]
via GitHub
Re: [PR] [SPARK-55636][CONNECT] Add detailed errors in case of deduplication of invalid columns [spark]
via GitHub
Re: [PR] [SPARK-55636][CONNECT] Add detailed errors in case of deduplication of invalid columns [spark]
via GitHub
[PR] [SPARK-55635] Improve `create_spark_jira.py` to support TYPE parameter `-t` [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55635] Improve `create_spark_jira.py` to support TYPE parameter `-t` [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55635] Improve `create_spark_jira.py` to support TYPE parameter `-t` [spark-connect-swift]
via GitHub
[PR] [DO-NOT-MERGE] Rewrite spark.catalog in SQL DDL formatted text [spark]
via GitHub
Re: [PR] [DO-NOT-MERGE] Rewrite spark.catalog in SQL DDL formatted text [spark]
via GitHub
[PR] [SPARK-55634] Use `Mermaid Diagram` for `Application State Transition` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55634] Use `Mermaid` for `(Application|Cluster) State Transition` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55634] Use `Mermaid` for `(Application|Cluster) State Transition` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55634] Use `Mermaid` for `(Application|Cluster) State Transition` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-54324] Add support for client-user-context-extensions [spark]
via GitHub
Re: [PR] [SPARK-54324] Add support for client-user-context-extensions [spark]
via GitHub
Re: [PR] Jessie.luo date/testing check [spark]
via GitHub
Re: [PR] Jessie.luo date/testing check [spark]
via GitHub
[PR] [SPARK-55633] Improve `create_spark_jira.py` to support TYPE parameter `-p` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55633] Improve `create_spark_jira.py` to support TYPE parameter `-p` [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55633] Improve `create_spark_jira.py` to support TYPE parameter `-t` [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-55632] Upgrade `gRPC Swift NIO Transport` to 2.4.2 [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55632] Upgrade `gRPC Swift NIO Transport` to 2.4.2 [spark-connect-swift]
via GitHub
Re: [PR] [SPARK-55632] Upgrade `gRPC Swift NIO Transport` to 2.4.2 [spark-connect-swift]
via GitHub
[PR] [WIP] Fixing Data Source API docs bug [spark]
via GitHub
Re: [PR] [WIP] Fixing Data Source API docs bug [spark]
via GitHub
[I] Catalyst optimizer non-convergence with iterative withColumn rewrite + filter pushdown in Spark [spark]
via GitHub
[I] [PYTHON] Support path-based table reference in `DataFrame.mergeInto` [spark]
via GitHub
Re: [I] [PYTHON] Support path-based table reference in `DataFrame.mergeInto` [spark]
via GitHub
Re: [I] [PYTHON] Support path-based table reference in `DataFrame.mergeInto` [spark]
via GitHub
[PR] [SPARK-55627][PYTHON][TESTS] Put CustomChannelBuilder inside should_test_connect [spark]
via GitHub
Re: [PR] [SPARK-55627][PYTHON][TESTS] Put CustomChannelBuilder inside should_test_connect [spark]
via GitHub
Re: [PR] [SPARK-55627][PYTHON][TESTS] Put CustomChannelBuilder inside should_test_connect [spark]
via GitHub
[PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
[PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-55626][SQL][4.1] Don't load metadata columns on Table unless needed in V2TableUtil [spark]
via GitHub
Re: [PR] [SPARK-52621][SQL] Cast TIME to/from VARIANT [spark]
via GitHub
Re: [PR] [SPARK-52621][SQL] Cast TIME to/from VARIANT [spark]
via GitHub
Re: [PR] [SPARK-54003][SQL] Use the staging directory as the output path then move to final path. [spark]
via GitHub
Re: [PR] [SPARK-54003][SQL] Use the staging directory as the output path then move to final path. [spark]
via GitHub
Re: [PR] [SPARK-54165][CONNECT] Add BatchExecutePlan RPC and reattach support [spark]
via GitHub
Re: [PR] [SPARK-54165][CONNECT] Add BatchExecutePlan RPC and reattach support [spark]
via GitHub
[I] Suggestion: reference WFGY Problem Map (RAG / LLM debugging checklist) for Spark + LLM workloads [spark]
via GitHub
[PR] [DRAFT][SPARK][Geo][SQL] Refactor WKT serialization in GeometryModel [spark]
via GitHub
Re: [PR] [SPARK-55638][SPARK][Geo][SQL] Refactor WKT serialization in GeometryModel [spark]
via GitHub
Re: [PR] [SPARK-55638][SPARK][Geo][SQL] Refactor WKT serialization in GeometryModel [spark]
via GitHub
Re: [PR] [SPARK-54449][CORE] Storage Memory off heap size should be considered only when off heap is enabled [spark]
via GitHub
Re: [PR] [SPARK-54449][CORE] Storage Memory off heap size should be considered only when off heap is enabled [spark]
via GitHub
Re: [PR] [SPARK-54449][CORE] Storage Memory off heap size should be considered only when off heap is enabled [spark]
via GitHub
Re: [PR] [SPARK-54449][CORE] Storage Memory off heap size should be considered only when off heap is enabled [spark]
via GitHub
Re: [PR] [SPARK-54449][CORE] Storage Memory off heap size should be considered only when off heap is enabled [spark]
via GitHub
[PR] [SPARK-55625][PS] Fix StringOps to make `str` dtype work properly [spark]
via GitHub
Re: [PR] [SPARK-55625][PS] Fix StringOps to make `str` dtype work properly [spark]
via GitHub
Re: [PR] [SPARK-55625][PS] Fix StringOps to make `str` dtype work properly [spark]
via GitHub
Re: [PR] [SPARK-55625][PS] Fix StringOps to make `str` dtype work properly [spark]
via GitHub
[PR] [SPARK-55624][PS][TESTS] Ignore ArrowDtype in tests with pandas 3 [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS] Ignore ArrowDtype in tests with pandas 3 [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS] Ignore ArrowDtype in tests with pandas 3 [spark]
via GitHub
Re: [PR] [SPARK-55624][PS][TESTS] Ignore ArrowDtype in tests with pandas 3 [spark]
via GitHub
[PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
Re: [PR] [SPARK-55623] Add granular restart control with consecutive failure tracking [spark-kubernetes-operator]
via GitHub
[PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub
Re: [PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub
Re: [PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub
Re: [PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub
Re: [PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub
Re: [PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub
Re: [PR] [SPARK-55622][SQL][TESTS] Add test for DSV2 Metadata Tables on SessionCatalog [spark]
via GitHub