HyukjinKwon closed pull request #48419:
[SPARK-49927][SS][PYTHON][TESTS][FOLLOW-UP] Fixes `q.lastProgress.batchId` to
`q.lastProgress.progress.batchId`
URL: https://github.com/apache/spark/pull/48419
HyukjinKwon commented on PR #48419:
URL: https://github.com/apache/spark/pull/48419#issuecomment-2406663394
I am going to merge this to fix up the build.
Merged to master.
HyukjinKwon opened a new pull request, #48419:
URL: https://github.com/apache/spark/pull/48419
### What changes were proposed in this pull request?
This PR is a follow-up of https://github.com/apache/spark/pull/48414 that
fixes `q.lastProgress.batchId` -> `q.lastProgress.progress.batchId`.
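A minimal sketch of the attribute path the follow-up switches to; the helper name is made up here, and only the `lastProgress.progress.batchId` path comes from the PR title:
```python
def completed_batch_id(q):
    # Read the batch id through the nested `progress` field, i.e.
    # `q.lastProgress.progress.batchId` rather than `q.lastProgress.batchId`
    # (the attribute path this follow-up corrects). Returns None if the
    # query has not reported any progress yet.
    last = q.lastProgress
    return None if last is None else last.progress.batchId
```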
zml1206 commented on code in PR #48300:
URL: https://github.com/apache/spark/pull/48300#discussion_r1796485180
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/PropagateEmptyRelation.scala:
##
@@ -111,7 +111,8 @@ abstract class PropagateEmptyRelationBase ex
zml1206 commented on code in PR #48300:
URL: https://github.com/apache/spark/pull/48300#discussion_r1796483876
##
sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala:
##
@@ -2829,6 +2829,38 @@ class AdaptiveQueryExecSuite
assert(fi
cloud-fan commented on code in PR #48300:
URL: https://github.com/apache/spark/pull/48300#discussion_r1796469594
##
sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala:
##
@@ -2829,6 +2829,38 @@ class AdaptiveQueryExecSuite
assert(
cloud-fan commented on code in PR #48300:
URL: https://github.com/apache/spark/pull/48300#discussion_r1796468910
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/PropagateEmptyRelation.scala:
##
@@ -111,7 +111,8 @@ abstract class PropagateEmptyRelationBase
LuciferYang commented on PR #48403:
URL: https://github.com/apache/spark/pull/48403#issuecomment-2406607712
> FWIW, I think it's gonna fix
https://github.com/apache/spark/actions/runs/11259624487/job/31309026637 too
https://github.com/user-attachments/assets/1c6dcc25-ad23-43e9-b90b-7c
pan3793 commented on PR #47381:
URL: https://github.com/apache/spark/pull/47381#issuecomment-2406604186
Is the fix proposed here partially covered by SPARK-49352? Also cc @viirya
yaooqinn commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796446276
##
sql/core/src/test/resources/sql-tests/analyzer-results/null-handling.sql.out:
##
@@ -69,6 +69,24 @@ Project [a#x, (b#x + c#x) AS (b + c)#x]
+- Relation spark_ca
yaooqinn commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796444211
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ReorderAssociativeOperatorSuite.scala:
##
@@ -74,4 +74,33 @@ class ReorderAssociativeOperatorSui
yaooqinn commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796443103
##
sql/core/src/test/resources/sql-tests/analyzer-results/null-handling.sql.out:
##
@@ -69,6 +69,24 @@ Project [a#x, (b#x + c#x) AS (b + c)#x]
+- Relation spark_ca
cloud-fan commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796429306
##
sql/core/src/test/resources/sql-tests/analyzer-results/null-handling.sql.out:
##
@@ -69,6 +69,24 @@ Project [a#x, (b#x + c#x) AS (b + c)#x]
+- Relation spark_c
yaooqinn commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796428537
##
sql/core/src/test/resources/sql-tests/analyzer-results/null-handling.sql.out:
##
@@ -69,6 +69,24 @@ Project [a#x, (b#x + c#x) AS (b + c)#x]
+- Relation spark_ca
cloud-fan commented on code in PR #48398:
URL: https://github.com/apache/spark/pull/48398#discussion_r1796426992
##
mllib/src/main/scala/org/apache/spark/ml/util/SchemaUtils.scala:
##
@@ -213,11 +216,17 @@ private[spark] object SchemaUtils {
*/
def getSchemaField(schema:
cloud-fan commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796426171
##
sql/core/src/test/resources/sql-tests/analyzer-results/null-handling.sql.out:
##
@@ -69,6 +69,24 @@ Project [a#x, (b#x + c#x) AS (b + c)#x]
+- Relation spark_c
cloud-fan commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796425812
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ReorderAssociativeOperatorSuite.scala:
##
@@ -74,4 +74,33 @@ class ReorderAssociativeOperatorSu
neilramaswamy commented on code in PR #48297:
URL: https://github.com/apache/spark/pull/48297#discussion_r1796403387
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinExec.scala:
##
@@ -668,13 +668,38 @@ case class StreamingSymmetricHa
anishshri-db commented on PR #48418:
URL: https://github.com/apache/spark/pull/48418#issuecomment-2406548634
cc - @HeartSaVioR @WweiL - PTAL, thx !
anishshri-db opened a new pull request, #48418:
URL: https://github.com/apache/spark/pull/48418
### What changes were proposed in this pull request?
Ensure that socket updates are flushed on exception from the python worker
### Why are the changes needed?
Without this, update
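The description is truncated above; as a rough, generic illustration of the stated goal (flush whatever was written to the socket even when the worker raises), not Spark's actual worker protocol:
```python
import socket

def run_worker_loop(sock: socket.socket, produce_updates):
    # Hypothetical names: `produce_updates` yields already-encoded status
    # bytes; the real framing used by the Python worker is not shown here.
    outfile = sock.makefile("wb")
    try:
        for update in produce_updates():
            outfile.write(update)
    finally:
        # Flush buffered updates even if produce_updates() raised, so the
        # JVM side still receives whatever was written before the failure.
        outfile.flush()
```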
xinrong-meng commented on PR #48180:
URL: https://github.com/apache/spark/pull/48180#issuecomment-2406544218
```
[info] - interrupt all - background queries, foreground interrupt *** FAILED *** (20 seconds, 50 milliseconds)
[info] The code passed to eventually never returned normally
xinrong-meng commented on PR #48415:
URL: https://github.com/apache/spark/pull/48415#issuecomment-2406542424
We may later port those expected_fig_data dictionaries to a separate JSON
file for easier auditing if the number of tests increases
xinrong-meng commented on PR #48415:
URL: https://github.com/apache/spark/pull/48415#issuecomment-2406540695
cc @zhengruifeng @HyukjinKwon would you please review thanks!
xinrong-meng commented on code in PR #48415:
URL: https://github.com/apache/spark/pull/48415#discussion_r1796413066
##
python/pyspark/sql/tests/plot/test_frame_plot_plotly.py:
##
@@ -48,79 +48,174 @@ def sdf3(self):
columns = ["sales", "signups", "visits", "date"]
xinrong-meng commented on PR #48415:
URL: https://github.com/apache/spark/pull/48415#issuecomment-2406535987
Irrelevant tests failed, retriggering:
```
ERROR [3.661s]: test_listener_events
(pyspark.sql.tests.streaming.test_streaming_listener.StreamingListenerTests.test_listener_events)
yaooqinn commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796387845
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:
##
@@ -260,19 +260,32 @@ object ReorderAssociativeOperator extends
Rule[Logi
HyukjinKwon commented on PR #48403:
URL: https://github.com/apache/spark/pull/48403#issuecomment-2406490863
Yeah, I think it's gonna fix
https://github.com/apache/spark/actions/runs/11259624487/job/31309026637 too
yaooqinn commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796381306
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:
##
@@ -260,19 +260,32 @@ object ReorderAssociativeOperator extends
Rule[Logi
HyukjinKwon closed pull request #48417: [SPARK-48567][PYTHON][TESTS][FOLLOW-UP]
Make the query scope higher so `finally` can access it
URL: https://github.com/apache/spark/pull/48417
HyukjinKwon commented on PR #48417:
URL: https://github.com/apache/spark/pull/48417#issuecomment-2406489014
Merged to master.
HyukjinKwon commented on PR #48417:
URL: https://github.com/apache/spark/pull/48417#issuecomment-2406488944
Will merge this to fix up the build. Otherwise, I will revert this and
https://github.com/apache/spark/commit/2af653688c20dde87eebaa6bd4dc21123fab74cc
if it still fails.
panbingkun commented on PR #48416:
URL: https://github.com/apache/spark/pull/48416#issuecomment-2406488740
cc @MaxGekk
panbingkun commented on PR #48416:
URL: https://github.com/apache/spark/pull/48416#issuecomment-2406488645
I'm not sure whether there will be similar uses in the future, because our
code does not forbid using `getErrorClass`.
panbingkun commented on code in PR #48416:
URL: https://github.com/apache/spark/pull/48416#discussion_r1796379400
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala:
##
@@ -1003,7 +1003,7 @@ class QueryCompilationErrorsSuite
val exc
LuciferYang commented on PR #48403:
URL: https://github.com/apache/spark/pull/48403#issuecomment-2406473170
After merging this PR, the Maven daily test has been restored:
- Java 17: https://github.com/apache/spark/actions/runs/11274643792
https://github.com/user-attachments/assets/5
cloud-fan commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796369501
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:
##
@@ -260,19 +260,32 @@ object ReorderAssociativeOperator extends
Rule[Log
cloud-fan commented on code in PR #48395:
URL: https://github.com/apache/spark/pull/48395#discussion_r1796369501
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:
##
@@ -260,19 +260,32 @@ object ReorderAssociativeOperator extends
Rule[Log
panbingkun opened a new pull request, #48416:
URL: https://github.com/apache/spark/pull/48416
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
No.
### How was t
fusheng-rd commented on PR #47418:
URL: https://github.com/apache/spark/pull/47418#issuecomment-2406448334
Please help review it when you have free time, thanks! @ulysses-you
cc @cloud-fan
yaooqinn commented on PR #48395:
URL: https://github.com/apache/spark/pull/48395#issuecomment-2406409291
cc @cloud-fan @dongjoon-hyun thanks
zml1206 commented on PR #48300:
URL: https://github.com/apache/spark/pull/48300#issuecomment-2406402629
cc @cloud-fan Can you help take a look, thanks.
cloud-fan commented on code in PR #48413:
URL: https://github.com/apache/spark/pull/48413#discussion_r1796325607
##
sql/core/src/test/resources/sql-tests/results/pipe-operators.sql.out:
##
@@ -1673,6 +1691,279 @@ org.apache.spark.sql.catalyst.ExtendedAnalysisException
}
+--
cloud-fan commented on code in PR #48413:
URL: https://github.com/apache/spark/pull/48413#discussion_r1796325607
##
sql/core/src/test/resources/sql-tests/results/pipe-operators.sql.out:
##
@@ -1673,6 +1691,279 @@ org.apache.spark.sql.catalyst.ExtendedAnalysisException
}
+--
cloud-fan commented on code in PR #48413:
URL: https://github.com/apache/spark/pull/48413#discussion_r1796324933
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -1010,21 +1018,40 @@ class AstBuilder extends DataTypeAstBuilder
//
cloud-fan commented on code in PR #48413:
URL: https://github.com/apache/spark/pull/48413#discussion_r1796324638
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -1010,21 +1018,40 @@ class AstBuilder extends DataTypeAstBuilder
//
cloud-fan commented on code in PR #48413:
URL: https://github.com/apache/spark/pull/48413#discussion_r1796323209
##
sql/core/src/test/resources/sql-tests/results/pipe-operators.sql.out:
##
@@ -1673,6 +1691,279 @@ org.apache.spark.sql.catalyst.ExtendedAnalysisException
}
+--
cloud-fan commented on code in PR #48413:
URL: https://github.com/apache/spark/pull/48413#discussion_r1796322574
##
sql/core/src/test/resources/sql-tests/inputs/pipe-operators.sql:
##
@@ -571,6 +583,95 @@ table t
table t
|> union all table st;
+-- Sorting and repartitioning
xinrong-meng opened a new pull request, #48415:
URL: https://github.com/apache/spark/pull/48415
### What changes were proposed in this pull request?
Refactor plot-related unit tests.
### Why are the changes needed?
Different plots have different key attributes of the resulting fi
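The description is cut off above; a hypothetical sketch of the kind of assertion such a refactor centralizes, comparing only the key figure attributes each plot type cares about (plain plotly objects are used here, not PySpark's plotting API):
```python
import plotly.graph_objects as go

def check_fig_data(trace, expected_fig_data):
    # Compare only the attributes listed in expected_fig_data; plotly stores
    # sequences as tuples, so normalize before comparing.
    for attr, expected in expected_fig_data.items():
        actual = getattr(trace, attr)
        if isinstance(actual, tuple):
            actual = list(actual)
        assert actual == expected, f"{attr}: {actual!r} != {expected!r}"

fig = go.Figure(go.Scatter(x=[1, 2, 3], y=[4, 5, 6], mode="lines", name="sales"))
check_fig_data(fig.data[0], {"mode": "lines", "name": "sales", "x": [1, 2, 3], "y": [4, 5, 6]})
```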
HyukjinKwon closed pull request #48414: [SPARK-49927][SS]
pyspark.sql.tests.streaming.test_streaming_listener to wait longer
URL: https://github.com/apache/spark/pull/48414
HyukjinKwon commented on PR #48414:
URL: https://github.com/apache/spark/pull/48414#issuecomment-2406380975
Merged to master.
HyukjinKwon commented on code in PR #48414:
URL: https://github.com/apache/spark/pull/48414#discussion_r1796294336
##
python/pyspark/sql/tests/streaming/test_streaming_listener.py:
##
@@ -381,7 +381,8 @@ def verify(test_listener):
.start()
)
HyukjinKwon commented on code in PR #48414:
URL: https://github.com/apache/spark/pull/48414#discussion_r1796294336
##
python/pyspark/sql/tests/streaming/test_streaming_listener.py:
##
@@ -381,7 +381,8 @@ def verify(test_listener):
.start()
)
cloud-fan commented on PR #48412:
URL: https://github.com/apache/spark/pull/48412#issuecomment-2406319893
Can you re-trigger the GitHub Actions jobs?
xinrong-meng commented on PR #48180:
URL: https://github.com/apache/spark/pull/48180#issuecomment-2406309866
Retriggered irrelevant tests
cloud-fan commented on code in PR #48410:
URL: https://github.com/apache/spark/pull/48410#discussion_r1796273894
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizerSuite.scala:
##
@@ -313,4 +313,23 @@ class OptimizerSuite extends PlanTest {
as
panbingkun commented on code in PR #48224:
URL: https://github.com/apache/spark/pull/48224#discussion_r1796272621
##
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/json/JsonExpressionUtils.java:
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Fo
zhengruifeng commented on code in PR #48391:
URL: https://github.com/apache/spark/pull/48391#discussion_r1796265683
##
core/src/main/scala/org/apache/spark/util/Lazy.scala:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contrib
github-actions[bot] closed pull request #47067: [SPARK-48694][CORE]Manage
memory used by external cache
URL: https://github.com/apache/spark/pull/47067
github-actions[bot] commented on PR #47174:
URL: https://github.com/apache/spark/pull/47174#issuecomment-2406280309
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #47182:
URL: https://github.com/apache/spark/pull/47182#issuecomment-2406280284
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #47036: [SPARK-48667][PYTHON] Arrow
python UDFS didn't support UDT as outputType
URL: https://github.com/apache/spark/pull/47036
github-actions[bot] closed pull request #47039: [SPARK-48669][K8S] K8s resource
name prefix follows `DNS Subdomain Names` rule
URL: https://github.com/apache/spark/pull/47039
github-actions[bot] closed pull request #47078: [SPARK-48696][SQL][CONNECT]
Also truncate the schema row for show function
URL: https://github.com/apache/spark/pull/47078
github-actions[bot] closed pull request #47127: [SPARK-48739][SQL] Disable
writing collated data to file formats that don't support them in non managed
tables
URL: https://github.com/apache/spark/pull/47127
github-actions[bot] commented on PR #47158:
URL: https://github.com/apache/spark/pull/47158#issuecomment-2406280340
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
siying opened a new pull request, #48414:
URL: https://github.com/apache/spark/pull/48414
### What changes were proposed in this pull request?
In the test pyspark.sql.tests.streaming.test_streaming_listener, instead of
waiting a fixed 10 seconds, we wait for progress to be made.
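A minimal sketch of the polling pattern the description refers to; the helper name, timeout, and poll interval are illustrative, not the PR's actual test code:
```python
import time

def wait_for_progress(query, timeout=60.0, poll_interval=0.5):
    # Wait until the streaming query reports any progress instead of sleeping
    # a fixed 10 seconds; fail loudly if nothing happens within `timeout`.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if query.lastProgress is not None:
            return query.lastProgress
        time.sleep(poll_interval)
    raise AssertionError(f"no streaming progress within {timeout:.0f}s")
```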
JoshRosen commented on code in PR #48391:
URL: https://github.com/apache/spark/pull/48391#discussion_r1796194610
##
core/src/main/scala/org/apache/spark/util/Lazy.scala:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributo
JoshRosen commented on code in PR #48391:
URL: https://github.com/apache/spark/pull/48391#discussion_r1796190664
##
core/src/main/scala/org/apache/spark/util/Lazy.scala:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributo
ahshahid commented on PR #17186:
URL: https://github.com/apache/spark/pull/17186#issuecomment-2406168502
@xyxiaoyou : for your reference: take a look at
[https://issues.apache.org/jira/browse/SPARK-33152](https://issues.apache.org/jira/browse/SPARK-33152)
and corresponding PR ( though it
harshmotw-db commented on code in PR #48379:
URL: https://github.com/apache/spark/pull/48379#discussion_r1796173788
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala:
##
@@ -700,7 +701,14 @@ case class RegExpReplace(subject: Express
magpierre opened a new pull request, #80:
URL: https://github.com/apache/spark-connect-go/pull/80
Added the skeleton for dataframe.PrintSchema() and schema.TreeString() that
can be extended with functionality pertaining to nested dataTypes once they
become available.
The feature has
gene-db commented on code in PR #48172:
URL: https://github.com/apache/spark/pull/48172#discussion_r1796151027
##
common/variant/src/main/java/org/apache/spark/types/variant/VariantBuilder.java:
##
@@ -53,17 +53,21 @@ public VariantBuilder(boolean allowDuplicateKeys) {
public
harshmotw-db commented on code in PR #48172:
URL: https://github.com/apache/spark/pull/48172#discussion_r1796071250
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala:
##
@@ -72,9 +74,30 @@ case class PartitionedFile(
}
}
+/**
+ * Class
Kimahriman commented on PR #48038:
URL: https://github.com/apache/spark/pull/48038#issuecomment-2405924462
Gentle ping @zhengruifeng @HyukjinKwon
09306677806 commented on PR #47895:
URL: https://github.com/apache/spark/pull/47895#issuecomment-2405884753
bc1qqeysv50ayq0az93s5frfvtf2fe6rt5tfkdfx2y
#*Shahrzadmahro#
On Thursday, October 10, 2024, at 22:04, Burak Yavuz ***@***.***>
wrote:
> ***@***. requested changes on
brkyvz commented on code in PR #47895:
URL: https://github.com/apache/spark/pull/47895#discussion_r1795913662
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStoreRDD.scala:
##
@@ -126,6 +129,7 @@ class StateStoreRDD[T: ClassTag, U: ClassTag](
dtenedor commented on PR #48413:
URL: https://github.com/apache/spark/pull/48413#issuecomment-2405602311
cc @cloud-fan @gengliangwang this is the PR to support LIMIT/OFFSET +
sorting. There are a few more changes in the `AstBuilder` for this one but
still contained only in the parser.
dtenedor opened a new pull request, #48413:
URL: https://github.com/apache/spark/pull/48413
### What changes were proposed in this pull request?
This PR adds SQL pipe syntax support for LIMIT/OFFSET and
ORDER/SORT/CLUSTER/DISTRIBUTE BY.
For example:
```
CREATE TABLE t
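The example in the description is truncated above; the following is only an approximate illustration of chaining the newly supported operators with `|>` (the exact grammar is defined by the parser changes in this PR, and the table is a placeholder):
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("CREATE TABLE IF NOT EXISTS t(x INT, y STRING) USING parquet")

# Pipe-syntax query chaining the operators this PR adds support for.
spark.sql(
    """
    TABLE t
    |> ORDER BY x
    |> LIMIT 10 OFFSET 5
    """
).show()
```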
xupefei commented on code in PR #48120:
URL: https://github.com/apache/spark/pull/48120#discussion_r1775548583
##
sql/core/src/main/scala/org/apache/spark/sql/artifact/ArtifactManager.scala:
##
@@ -67,12 +67,18 @@ class ArtifactManager(session: SparkSession) extends
Logging {
LuciferYang commented on PR #48403:
URL: https://github.com/apache/spark/pull/48403#issuecomment-2405449605
Merged into master. Thanks @hvanhovell and @HyukjinKwon
LuciferYang closed pull request #48403: [SPARK-49569][BUILD][FOLLOWUP] Exclude
`spark-connect-shims` from `sql/core` module
URL: https://github.com/apache/spark/pull/48403
ilicmarkodb commented on code in PR #48412:
URL: https://github.com/apache/spark/pull/48412#discussion_r1795643488
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -1101,6 +1101,259 @@ class CollationSuite extends DatasourceV2SQLBase with
AdaptiveSpar
cloud-fan commented on code in PR #48412:
URL: https://github.com/apache/spark/pull/48412#discussion_r1795476158
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -1101,6 +1101,259 @@ class CollationSuite extends DatasourceV2SQLBase with
AdaptiveSparkP
cloud-fan commented on code in PR #48412:
URL: https://github.com/apache/spark/pull/48412#discussion_r1795468153
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -1101,6 +1101,259 @@ class CollationSuite extends DatasourceV2SQLBase with
AdaptiveSparkP
cloud-fan commented on code in PR #48412:
URL: https://github.com/apache/spark/pull/48412#discussion_r1795466130
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -1101,6 +1101,259 @@ class CollationSuite extends DatasourceV2SQLBase with
AdaptiveSparkP
MaxGekk commented on code in PR #48397:
URL: https://github.com/apache/spark/pull/48397#discussion_r1795415992
##
common/utils/src/main/resources/error/error-conditions.json:
##
@@ -606,6 +606,12 @@
],
"sqlState" : "42711"
},
+ "COLUMN_ARRAY_ELEMENT_TYPE_MISMATCH"
hvanhovell commented on code in PR #48390:
URL: https://github.com/apache/spark/pull/48390#discussion_r1795392918
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##
@@ -204,17 +204,6 @@ class SparkSession private(
*/
def listenerManager: ExecutionListe
LuciferYang commented on PR #48406:
URL: https://github.com/apache/spark/pull/48406#issuecomment-2405015303
Merged into master. Thanks @panbingkun @HyukjinKwon
LuciferYang closed pull request #48406: [SPARK-49920][INFRA] Install `R` for
`ubuntu 24.04` when GA run `k8s-integration-tests`
URL: https://github.com/apache/spark/pull/48406
LuciferYang commented on PR #48406:
URL: https://github.com/apache/spark/pull/48406#issuecomment-2405012974
https://github.com/user-attachments/assets/c0f6ac74-abd3-49a0-aab0-f50c92574a80
All tests passed
dvorst opened a new pull request, #48411:
URL: https://github.com/apache/spark/pull/48411
The original description "PySpark requires Java 8 or later" is incorrect,
since Spark 3.5 no longer supports Java 8 and the latest supported version is
17; the download page, however, does correctly s
chris-twiner commented on PR #34558:
URL: https://github.com/apache/spark/pull/34558#issuecomment-2404981056
> @Kimahriman just out of curiosity, how much did the performance improve?
I just wanted to add to the above response that I've implemented a
compilation scheme
[here](https:/
zhengruifeng commented on code in PR #48410:
URL: https://github.com/apache/spark/pull/48410#discussion_r1795330652
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizerSuite.scala:
##
@@ -313,4 +313,23 @@ class OptimizerSuite extends PlanTest {
zhengruifeng commented on code in PR #48410:
URL: https://github.com/apache/spark/pull/48410#discussion_r1795330652
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizerSuite.scala:
##
@@ -313,4 +313,23 @@ class OptimizerSuite extends PlanTest {
zhengruifeng commented on code in PR #48410:
URL: https://github.com/apache/spark/pull/48410#discussion_r1795330189
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/OptimizerSuite.scala:
##
@@ -313,4 +313,23 @@ class OptimizerSuite extends PlanTest {
zhengruifeng opened a new pull request, #48410:
URL: https://github.com/apache/spark/pull/48410
### What changes were proposed in this pull request?
Fix `containsNull` of `ArrayCompact`, by adding a new expression
`KnownNotContainsNull`
### Why are the changes needed?
ht
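A small PySpark sketch to observe the flag the fix is about (assuming `array_compact`, available since Spark 3.4); whether the printed `containsNull` flips depends on where the new `KnownNotContainsNull` expression is applied:
```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([([1, None, 2],)], "arr array<int>")
out = df.select(F.array_compact("arr").alias("compacted"))

# array_compact removes all null elements, so the result's element type can be
# reported as containsNull = false, which is what the fix is meant to convey.
print(out.schema["compacted"].dataType)
```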
cloud-fan closed pull request #48210: [SPARK-49756][SQL] Postgres dialect
supports pushdown datetime functions.
URL: https://github.com/apache/spark/pull/48210
cloud-fan commented on PR #48210:
URL: https://github.com/apache/spark/pull/48210#issuecomment-2404890325
thanks, merging to master!
cloud-fan commented on PR #48284:
URL: https://github.com/apache/spark/pull/48284#issuecomment-2404886221
Yes please, we can discuss more on your PR later.
panbingkun opened a new pull request, #48409:
URL: https://github.com/apache/spark/pull/48409
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How