beliefer commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974672469
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##
@@ -303,12 +303,27 @@ private case class PostgresDialect()
class PostgresSQLBuilder
beliefer commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974672789
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##
@@ -303,12 +303,27 @@ private case class PostgresDialect()
class PostgresSQLBuilder
yaooqinn commented on code in PR #50074:
URL: https://github.com/apache/spark/pull/50074#discussion_r1974620691
##
sql/core/src/test/resources/sql-tests/inputs/describe.sql:
##
@@ -122,6 +122,12 @@ DESC TABLE EXTENDED e;
DESC FORMATTED e;
+CREATE TABLE f PARTITIONED BY (B,
yaooqinn commented on code in PR #50074:
URL: https://github.com/apache/spark/pull/50074#discussion_r1974624507
##
sql/core/src/test/resources/sql-tests/results/describe.sql.out:
##
@@ -890,6 +890,48 @@ a string
CONCAT('a\n b\n ', 'c\n
LuciferYang commented on PR #50105:
URL: https://github.com/apache/spark/pull/50105#issuecomment-2689564664
Merged into master and branch-4.0. Thanks @HyukjinKwon and @the-sakthi
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
cloud-fan commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974712264
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/PostgresIntegrationSuite.scala:
##
@@ -304,11 +309,25 @@ class PostgresIntegrationSu
dongjoon-hyun closed pull request #160: [SPARK-51352] Use Spark 3.5.5 in E2E
tests
URL: https://github.com/apache/spark-kubernetes-operator/pull/160
dongjoon-hyun closed pull request #159: [SPARK-51347] Enable Ingress and
Service Support for Spark Driver
URL: https://github.com/apache/spark-kubernetes-operator/pull/159
dongjoon-hyun commented on PR #159:
URL:
https://github.com/apache/spark-kubernetes-operator/pull/159#issuecomment-2689793279
BTW, I noticed that this is the first commit with a different email, @jiangzho. Are you going to use this one?
```
$ git log --author=zh | grep 'Author:' | sort
cloud-fan closed pull request #50101: [SPARK-49756][SQL][FOLLOWUP] Use correct
pgsql datetime fields when pushing down EXTRACT
URL: https://github.com/apache/spark/pull/50101
HeartSaVioR opened a new pull request, #50110:
URL: https://github.com/apache/spark/pull/50110
### What changes were proposed in this pull request?
This PR proposes to fix the logic of serializer in TWS PySpark version to
NOT materialize the output entirely. This PR changes the logic
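The materialization issue described above can be illustrated with a minimal sketch (hypothetical function names, not the actual pyspark serializer code): a dump_stream that calls list() on its input forces every output batch into memory at once, while a generator-based version emits batches as they are produced, keeping memory bounded by a single batch.

```python
from itertools import islice

def dump_stream_materialized(iterator):
    # Anti-pattern: list() forces the entire output into memory
    # before anything is written downstream.
    return list(iterator)

def dump_stream_lazy(iterator):
    # Sketch of the fix: yield each batch as it arrives, so memory
    # usage stays bounded by one batch regardless of output size.
    for batch in iterator:
        yield batch

def batch_source(n):
    # Stand-in for the per-group output batches of the operator.
    for i in range(n):
        yield [i, i * 2]

# The lazy version is a generator; nothing is computed until consumed.
lazy = dump_stream_lazy(batch_source(10**6))
first_two = list(islice(lazy, 2))
print(first_two)  # -> [[0, 0], [1, 2]]
```

Consuming only two items from a million-batch source returns immediately in the lazy version, whereas the materialized version would build the full list first.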
cloud-fan commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974701969
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##
@@ -303,12 +303,27 @@ private case class PostgresDialect()
class PostgresSQLBuilde
dongjoon-hyun commented on PR #80:
URL: https://github.com/apache/spark-docker/pull/80#issuecomment-2689706653
Thank you, @viirya , @pan3793 , @HyukjinKwon . All tests passed.
Merged to master
HeartSaVioR commented on PR #50110:
URL: https://github.com/apache/spark/pull/50110#issuecomment-2689623997
I'm going to provide the branch I used to test this behavior. I'll
update the PR description once it is ready.
yaooqinn commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974707704
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##
@@ -303,12 +303,27 @@ private case class PostgresDialect()
class PostgresSQLBuilder
dongjoon-hyun closed pull request #80: [SPARK-51335] Publish Apache Spark 3.5.5
to docker registry
URL: https://github.com/apache/spark-docker/pull/80
dongjoon-hyun commented on PR #159:
URL:
https://github.com/apache/spark-kubernetes-operator/pull/159#issuecomment-2689773484
Thank you for making a PR, @jiangzho .
dongjoon-hyun opened a new pull request, #160:
URL: https://github.com/apache/spark-kubernetes-operator/pull/160
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
dongjoon-hyun commented on PR #160:
URL:
https://github.com/apache/spark-kubernetes-operator/pull/160#issuecomment-2689782709
Thank you, @viirya .
HyukjinKwon opened a new pull request, #50111:
URL: https://github.com/apache/spark/pull/50111
### What changes were proposed in this pull request?
This PR is a followup of https://github.com/apache/spark/pull/50096 that
reverts unrelated changes and mark mapInPandas/mapInArrow batche
wengh commented on code in PR #49961:
URL: https://github.com/apache/spark/pull/49961#discussion_r1974189026
##
python/pyspark/sql/tests/test_python_datasource.py:
##
@@ -246,6 +248,137 @@ def reader(self, schema) -> "DataSourceReader":
assertDataFrameEqual(df, [Row(x=0
ueshin commented on PR #50093:
URL: https://github.com/apache/spark/pull/50093#issuecomment-2688893677
Thanks! merging to master.
the-sakthi commented on code in PR #50107:
URL: https://github.com/apache/spark/pull/50107#discussion_r1974290564
##
core/src/main/scala/org/apache/spark/ui/ConsoleProgressBar.scala:
##
@@ -121,5 +121,8 @@ private[spark] class ConsoleProgressBar(sc: SparkContext)
extends Loggin
ueshin closed pull request #50094: [SPARK-51326][CONNECT][4.0] Remove
LazyExpression proto message
URL: https://github.com/apache/spark/pull/50094
the-sakthi commented on PR #50102:
URL: https://github.com/apache/spark/pull/50102#issuecomment-2689200019
I think we could add some tests here to verify the change in the response?
harshmotw-db opened a new pull request, #50108:
URL: https://github.com/apache/spark/pull/50108
### What changes were proposed in this pull request?
Prior to this PR, it was difficult to evaluate tests where the resulting
DataFrame would contain both "null" strings and null va
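The ambiguity the PR describes can be reproduced in a few lines (a generic sketch, not Spark's actual QueryTest code): once values are rendered to strings for comparison, a literal "null" string and an actual null become indistinguishable unless the rendering keeps them apart.

```python
def render_naive(value):
    # Lossy: both None and the string "null" render identically.
    return "null" if value is None else str(value)

def render_distinct(value):
    # One way to keep them apart: quote real strings via repr().
    return "NULL" if value is None else repr(value)

rows = ["null", None]
print([render_naive(v) for v in rows])     # -> ['null', 'null']   (ambiguous)
print([render_distinct(v) for v in rows])  # -> ["'null'", 'NULL'] (distinct)
```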
MaxGekk commented on PR #50103:
URL: https://github.com/apache/spark/pull/50103#issuecomment-2688568140
@srielau Please, have a look at the PR when you have time.
szehon-ho opened a new pull request, #50109:
URL: https://github.com/apache/spark/pull/50109
### What changes were proposed in this pull request?
Implement Show Procedures
### Why are the changes needed?
Following SPARK-44167, we want to be able to list stored procedures
yaooqinn commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974639316
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##
@@ -303,12 +303,27 @@ private case class PostgresDialect()
class PostgresSQLBuilder
beliefer commented on code in PR #50101:
URL: https://github.com/apache/spark/pull/50101#discussion_r1974717419
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##
@@ -303,12 +303,24 @@ private case class PostgresDialect()
class PostgresSQLBuilder
cloud-fan commented on code in PR #50074:
URL: https://github.com/apache/spark/pull/50074#discussion_r1974717022
##
sql/core/src/test/resources/sql-tests/results/describe.sql.out:
##
@@ -890,6 +890,48 @@ a string
CONCAT('a\n b\n ', 'c\n
HeartSaVioR commented on PR #50110:
URL: https://github.com/apache/spark/pull/50110#issuecomment-2689650113
Also updated the PR description to explain how I figured out the issue and
how I verified the fix.
HeartSaVioR commented on PR #50110:
URL: https://github.com/apache/spark/pull/50110#issuecomment-2689650332
cc. @HyukjinKwon Would you mind taking a look? Thanks!
dongjoon-hyun commented on PR #83:
URL: https://github.com/apache/spark-docker/pull/83#issuecomment-2689667057
Thank you all!
cloud-fan closed pull request #50025: [SPARK-51270][SQL] Support UUID type in
Variant
URL: https://github.com/apache/spark/pull/50025
yaooqinn merged PR #83:
URL: https://github.com/apache/spark-docker/pull/83
cloud-fan commented on PR #50025:
URL: https://github.com/apache/spark/pull/50025#issuecomment-2689557570
thanks, merging to master/4.0!
LuciferYang closed pull request #50105: [SPARK-51339][BUILD] Remove
`IllegalImportsChecker` for `s.c.Seq/IndexedSeq` from `scalastyle-config.xml`
URL: https://github.com/apache/spark/pull/50105
LuciferYang commented on PR #50105:
URL: https://github.com/apache/spark/pull/50105#issuecomment-2689842390
Thank you @dongjoon-hyun
HeartSaVioR commented on code in PR #50110:
URL: https://github.com/apache/spark/pull/50110#discussion_r1974719696
##
python/pyspark/sql/pandas/serializers.py:
##
@@ -1223,8 +1223,17 @@ def dump_stream(self, iterator, stream):
Read through an iterator of (iterator of pa
cloud-fan commented on code in PR #50108:
URL: https://github.com/apache/spark/pull/50108#discussion_r1974888409
##
sql/core/src/test/scala/org/apache/spark/sql/QueryTest.scala:
##
@@ -326,7 +326,13 @@ object QueryTest extends Assertions {
// For binary arrays, we convert i
yaooqinn commented on code in PR #50074:
URL: https://github.com/apache/spark/pull/50074#discussion_r1974946328
##
sql/core/src/test/resources/sql-tests/results/describe.sql.out:
##
@@ -890,6 +890,48 @@ a string
CONCAT('a\n b\n ', 'c\n
wayneguow closed pull request #50075: [WIP][SPARK-51308][CONNECT][BUILD] Update
the relocation rules for the `connect` module in `SparkBuild.scala` to ensure
that both Maven and SBT produce the assembly JAR according to the same rules
URL: https://github.com/apache/spark/pull/50075
the-sakthi commented on code in PR #50103:
URL: https://github.com/apache/spark/pull/50103#discussion_r1974366151
##
common/utils/src/main/resources/error/error-conditions.json:
##
@@ -6190,6 +6190,12 @@
],
"sqlState" : "42000"
},
+ "UNSUPPORTED_TIME_PRECISION" : {
ueshin commented on PR #50094:
URL: https://github.com/apache/spark/pull/50094#issuecomment-263482
Thanks! merging to branch-4.0.
the-sakthi commented on PR #50103:
URL: https://github.com/apache/spark/pull/50103#issuecomment-2689165941
LGTM
the-sakthi commented on PR #50105:
URL: https://github.com/apache/spark/pull/50105#issuecomment-2689144092
LGTM
dongjoon-hyun commented on PR #82:
URL: https://github.com/apache/spark-docker/pull/82#issuecomment-2689182775
Thank you, @viirya !
dongjoon-hyun closed pull request #82: [SPARK-51344] Fix `ENV` key value format
in `*.template`
URL: https://github.com/apache/spark-docker/pull/82
gene-db commented on code in PR #50025:
URL: https://github.com/apache/spark/pull/50025#discussion_r1974322433
##
common/variant/src/main/java/org/apache/spark/types/variant/VariantBuilder.java:
##
@@ -240,6 +242,18 @@ public void appendBinary(byte[] binary) {
writePos += b
luben commented on code in PR #50057:
URL: https://github.com/apache/spark/pull/50057#discussion_r1974306557
##
core/benchmarks/ZStandardBenchmark-jdk21-results.txt:
##
@@ -2,48 +2,48 @@
Benchmark ZStandardCompressionCodec
=
ueshin commented on PR #50094:
URL: https://github.com/apache/spark/pull/50094#issuecomment-262567
The remaining test failures are not related to this PR.
the-sakthi commented on PR #50100:
URL: https://github.com/apache/spark/pull/50100#issuecomment-2689210402
LGTM
cashmand commented on code in PR #50025:
URL: https://github.com/apache/spark/pull/50025#discussion_r1974435292
##
common/variant/src/main/java/org/apache/spark/types/variant/VariantBuilder.java:
##
@@ -240,6 +242,18 @@ public void appendBinary(byte[] binary) {
writePos +=
jingz-db commented on code in PR #49488:
URL: https://github.com/apache/spark/pull/49488#discussion_r1974452111
##
sql/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/streaming/TransformWithStateConnectSuite.scala:
##
Review Comment:
> I've realized that you
jingz-db commented on code in PR #49488:
URL: https://github.com/apache/spark/pull/49488#discussion_r1974458075
##
sql/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/streaming/TransformWithStateConnectSuite.scala:
##
@@ -0,0 +1,489 @@
+/*
+ * Licensed to the Apac
cloud-fan closed pull request #50104: [SPARK-51337][SQL] Add maxRows to
CTERelationDef and CTERelationRef
URL: https://github.com/apache/spark/pull/50104
cloud-fan commented on PR #50104:
URL: https://github.com/apache/spark/pull/50104#issuecomment-2689469635
thanks, merging to master!
cloud-fan commented on PR #50069:
URL: https://github.com/apache/spark/pull/50069#issuecomment-2689447332
This is a test-only PR, and the other test failures are definitely unrelated.
Thanks, merging to master/4.0!
cloud-fan closed pull request #50069: [SPARK-51303] [SQL] [TESTS] Extend `ORDER
BY` testing coverage
URL: https://github.com/apache/spark/pull/50069
wengh commented on code in PR #49961:
URL: https://github.com/apache/spark/pull/49961#discussion_r1974559481
##
python/pyspark/sql/tests/test_python_datasource.py:
##
@@ -246,6 +248,137 @@ def reader(self, schema) -> "DataSourceReader":
assertDataFrameEqual(df, [Row(x=0
jingz-db commented on code in PR #49488:
URL: https://github.com/apache/spark/pull/49488#discussion_r1974088879
##
sql/connect/common/src/main/protobuf/spark/connect/relations.proto:
##
Review Comment:
Yes, Scala & Python is sharing the same connect client protocol.
HyukjinKwon closed pull request #50100: [SPARK-51278][FOLLOWUP][DOCS] Update
JSON format from documentation
URL: https://github.com/apache/spark/pull/50100
HyukjinKwon commented on PR #50100:
URL: https://github.com/apache/spark/pull/50100#issuecomment-2689394019
Merged to master and branch-4.0.
github-actions[bot] closed pull request #48878: Dev/milast/recurisve cte
URL: https://github.com/apache/spark/pull/48878
dongjoon-hyun commented on PR #80:
URL: https://github.com/apache/spark-docker/pull/80#issuecomment-2689111825
Thank you for the pointer, @pan3793 . Let me try in this PR.
dongjoon-hyun opened a new pull request, #82:
URL: https://github.com/apache/spark-docker/pull/82
### What changes were proposed in this pull request?
This PR aims to fix `ENV` key value format in `*.template`.
### Why are the changes needed?
To follow the Docker guidelin
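For context, Docker's guidelines discourage the legacy space-separated ENV form in favor of key=value; the template fix presumably rewrites entries along these lines (illustrative keys and values, not the actual template contents):

```
# Legacy form (discouraged; whitespace handling is ambiguous):
# ENV SPARK_HOME /opt/spark

# Recommended key=value form:
ENV SPARK_HOME=/opt/spark
ENV SPARK_VERSION=3.5.5
```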
the-sakthi commented on PR #50052:
URL: https://github.com/apache/spark/pull/50052#issuecomment-2689223159
Hello @chenhao-db, thanks for the changes. Could we please get some
description in the jira and responses to the PR template questions above?
It helps a lot with understanding the context.
dongjoon-hyun opened a new pull request, #83:
URL: https://github.com/apache/spark-docker/pull/83
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
dongjoon-hyun commented on PR #83:
URL: https://github.com/apache/spark-docker/pull/83#issuecomment-2689220984
cc @Yikun , @yaooqinn , @LuciferYang , @itholic , @viirya
the-sakthi commented on PR #50042:
URL: https://github.com/apache/spark/pull/50042#issuecomment-2689227610
Nice. LGTM
wengh commented on code in PR #49961:
URL: https://github.com/apache/spark/pull/49961#discussion_r1974189473
##
python/pyspark/sql/tests/test_python_datasource.py:
##
@@ -246,6 +248,137 @@ def reader(self, schema) -> "DataSourceReader":
assertDataFrameEqual(df, [Row(x=0
dongjoon-hyun commented on PR #83:
URL: https://github.com/apache/spark-docker/pull/83#issuecomment-2689239268
Thank you, @viirya . No~ AFAIK, the published docker images will remain available, like the published Maven jars.
ueshin closed pull request #50093: [SPARK-51326][CONNECT] Remove LazyExpression
proto message
URL: https://github.com/apache/spark/pull/50093
ueshin commented on PR #50093:
URL: https://github.com/apache/spark/pull/50093#issuecomment-2688892681
I reran the compatibility test after #50094 was merged and it passed.
-
https://github.com/ueshin/apache-spark/actions/runs/13554631961/job/37946210197
jiangzho opened a new pull request, #159:
URL: https://github.com/apache/spark-kubernetes-operator/pull/159
### What changes were proposed in this pull request?
This PR adds support for launching Ingress and Services with Spark
Applications.
### Why are the changes
cloud-fan closed pull request #50088: [SPARK-51322][SQL] Better error message
for streaming subquery expression
URL: https://github.com/apache/spark/pull/50088
cloud-fan commented on PR #50088:
URL: https://github.com/apache/spark/pull/50088#issuecomment-2687357950
thanks for the review, merging to master/4.0!
cloud-fan opened a new pull request, #50101:
URL: https://github.com/apache/spark/pull/50101
### What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/48210 to fix
correctness issues caused by pgsql filter pushdown. These d
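As background on why the datetime field names matter here (a hedged sketch, not the actual PostgresDialect code): Postgres EXTRACT uses different names and numbering than Spark for some fields. For example, Postgres DOW runs 0-6 with Sunday = 0, while Spark's DAYOFWEEK runs 1-7 with Sunday = 1, so a correct pushdown must translate the field rather than pass it through verbatim.

```python
def to_pg_extract(spark_field, col):
    # Hypothetical translation helper for illustration; the real logic
    # lives in PostgresDialect.scala and may differ in field coverage.
    if spark_field == "DAY_OF_WEEK":
        # Postgres DOW is Sunday=0..Saturday=6; Spark expects Sunday=1..7.
        return f"(EXTRACT(DOW FROM {col}) + 1)"
    if spark_field == "YEAR_OF_WEEK":
        # The ISO week-numbering year is called ISOYEAR in Postgres.
        return f"EXTRACT(ISOYEAR FROM {col})"
    # Fields whose names and semantics already agree pass through.
    return f"EXTRACT({spark_field} FROM {col})"

print(to_pg_extract("DAY_OF_WEEK", "ts"))  # -> (EXTRACT(DOW FROM ts) + 1)
print(to_pg_extract("YEAR", "ts"))         # -> EXTRACT(YEAR FROM ts)
```

Pushing DAY_OF_WEEK down as a bare EXTRACT(DOW ...) would silently shift every result by one day, which is the kind of correctness bug a followup like this targets.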
cloud-fan commented on PR #50101:
URL: https://github.com/apache/spark/pull/50101#issuecomment-2687368062
cc @beliefer @MaxGekk
cloud-fan commented on code in PR #50040:
URL: https://github.com/apache/spark/pull/50040#discussion_r1973251531
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -5545,6 +5545,15 @@ object SQLConf {
.booleanConf
.createWithDefault(false
cloud-fan commented on PR #49678:
URL: https://github.com/apache/spark/pull/49678#issuecomment-2687493241
thanks, merging to master/4.0!
cloud-fan commented on code in PR #49726:
URL: https://github.com/apache/spark/pull/49726#discussion_r1973284907
##
sql/core/src/test/scala/org/apache/spark/sql/scripting/SqlScriptingExecutionSuite.scala:
##
@@ -69,6 +70,222 @@ class SqlScriptingExecutionSuite extends QueryTest
dongjoon-hyun opened a new pull request, #80:
URL: https://github.com/apache/spark-docker/pull/80
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
cloud-fan commented on code in PR #49678:
URL: https://github.com/apache/spark/pull/49678#discussion_r1973281569
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -2721,6 +2721,25 @@ class DataFrameSuite extends QueryTest
parameters = Map("name"
cloud-fan closed pull request #49678: [SPARK-50994][CORE] Perform RDD
conversion under tracked execution
URL: https://github.com/apache/spark/pull/49678
dongjoon-hyun opened a new pull request, #81:
URL: https://github.com/apache/spark-docker/pull/81
…
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
cloud-fan commented on PR #50053:
URL: https://github.com/apache/spark/pull/50053#issuecomment-2687505463
thanks, merging to master/4.0!
cloud-fan closed pull request #50053: [SPARK-51310][SQL] Resolve the type of
default string producing expressions
URL: https://github.com/apache/spark/pull/50053
beliefer commented on code in PR #50107:
URL: https://github.com/apache/spark/pull/50107#discussion_r1973643904
##
core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:
##
@@ -300,11 +300,7 @@ object BarrierTaskContext {
@Since("2.4.0")
def get(): BarrierTaskConte
jjayadeep06 commented on code in PR #50020:
URL: https://github.com/apache/spark/pull/50020#discussion_r1973705329
##
core/src/main/scala/org/apache/spark/BarrierCoordinator.scala:
##
@@ -122,23 +124,40 @@ private[spark] class BarrierCoordinator(
// Init a TimerTask for a b
cloud-fan commented on code in PR #50074:
URL: https://github.com/apache/spark/pull/50074#discussion_r1973627334
##
sql/core/src/test/resources/sql-tests/inputs/describe.sql:
##
@@ -122,6 +122,12 @@ DESC TABLE EXTENDED e;
DESC FORMATTED e;
+CREATE TABLE f PARTITIONED BY (B,
mihailoale-db commented on PR #50069:
URL: https://github.com/apache/spark/pull/50069#issuecomment-2688377111
@MaxGekk @cloud-fan The failures don't seem related to the changes. Please check
when you have time. Thanks
dongjoon-hyun merged PR #81:
URL: https://github.com/apache/spark-docker/pull/81
dongjoon-hyun commented on PR #81:
URL: https://github.com/apache/spark-docker/pull/81#issuecomment-2688408435
Thank you, @viirya !
cashmand commented on code in PR #50025:
URL: https://github.com/apache/spark/pull/50025#discussion_r1973900282
##
common/variant/src/main/java/org/apache/spark/types/variant/VariantBuilder.java:
##
@@ -240,6 +242,19 @@ public void appendBinary(byte[] binary) {
writePos +=
viirya commented on PR #80:
URL: https://github.com/apache/spark-docker/pull/80#issuecomment-2688494398
This error happened (even after re-triggering):
```
54.80 qemu: uncaught target signal 11 (Segmentation fault) - core dumped
55.22 Segmentation fault (core dumped)
55.27 qem