beliefer commented on code in PR #49453:
URL: https://github.com/apache/spark/pull/49453#discussion_r1975469190
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/MySQLIntegrationSuite.scala:
##
@@ -241,6 +241,84 @@ class MySQLIntegrationSuite exte
sunxiaoguang commented on code in PR #49453:
URL: https://github.com/apache/spark/pull/49453#discussion_r1975473641
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/MySQLIntegrationSuite.scala:
##
@@ -241,6 +241,84 @@ class MySQLIntegrationSuite
srowen commented on code in PR #50107:
URL: https://github.com/apache/spark/pull/50107#discussion_r1975478508
##
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala:
##
@@ -169,6 +171,7 @@ private[spark] class TaskSchedulerImpl(
protected val executorIdToHo
vladimirg-db opened a new pull request, #50118:
URL: https://github.com/apache/spark/pull/50118
### What changes were proposed in this pull request?
Preserve plan change logging level for views:
- `spark.sql.planChangeLog.level`
- `spark.sql.expressionTreeChangeLog.level`
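The fix can be pictured as preserving these confs across view resolution. A minimal Python sketch, with a plain dict standing in for Spark's SQL conf (the helper name and mechanism are illustrative, not the PR's actual code):

```python
# Conf keys from the PR description; everything else here is illustrative.
PLAN_CHANGE_CONFS = [
    "spark.sql.planChangeLog.level",
    "spark.sql.expressionTreeChangeLog.level",
]

def resolve_with_preserved_logging(conf, resolve):
    """Run `resolve` (e.g. view resolution) and restore the caller's
    plan-change log levels afterwards, so nested resolution cannot
    clobber them."""
    saved = {k: conf[k] for k in PLAN_CHANGE_CONFS if k in conf}
    try:
        return resolve()
    finally:
        conf.update(saved)
```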
dongjoon-hyun commented on code in PR #50113:
URL: https://github.com/apache/spark/pull/50113#discussion_r1975674932
##
project/SparkBuild.scala:
##
@@ -1002,10 +1002,9 @@ object KubernetesIntegrationTests {
if (excludeTags.exists(_.equalsIgnoreCase("r"))) {
jiangzho commented on PR #159:
URL:
https://github.com/apache/spark-kubernetes-operator/pull/159#issuecomment-2691300205
Thanks for the reminder! This is a result of local squashing; I'll fix this
in the following commits - will still use `jiang...@umich.edu`
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
vrozov commented on PR #49276:
URL: https://github.com/apache/spark/pull/49276#issuecomment-2691136056
@gengliangwang ?
HyukjinKwon commented on code in PR #49277:
URL: https://github.com/apache/spark/pull/49277#discussion_r1975249747
##
python/pyspark/sql/tests/pandas/test_pandas_transform_with_state.py:
##
@@ -1294,6 +1310,167 @@ def
test_transform_with_state_with_timers_single_partition(self)
tomscut opened a new pull request, #50116:
URL: https://github.com/apache/spark/pull/50116
### Why are the changes needed?
After SPARK-23429, the method Executor#startDriverHeartbeat has been
removed. We should update the code comment.
### Does this PR introduce any user-fa
HyukjinKwon opened a new pull request, #50117:
URL: https://github.com/apache/spark/pull/50117
### What changes were proposed in this pull request?
This PR adds a brief explanation of Spark Connect to the PySpark Overview page

beliefer commented on PR #50107:
URL: https://github.com/apache/spark/pull/50107#issuecomment-2690560779
ping @mridulm @LuciferYang @srowen cc @jjayadeep06
chris-twiner commented on PR #50023:
URL: https://github.com/apache/spark/pull/50023#issuecomment-2691506401
> @chris-twiner can you fix the style issue?
yeah done, I'll keep an eye on the progress over the next hour or so in case
any restarts are needed. Odd though, no files mentione
beliefer commented on code in PR #50107:
URL: https://github.com/apache/spark/pull/50107#discussion_r1976289910
##
core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:
##
@@ -300,11 +300,7 @@ object BarrierTaskContext {
@Since("2.4.0")
def get(): BarrierTaskConte
aokolnychyi commented on code in PR #50044:
URL: https://github.com/apache/spark/pull/50044#discussion_r1976098170
##
sql/core/src/test/scala/org/apache/spark/sql/connector/AlterTableTests.scala:
##
@@ -328,7 +336,7 @@ trait AlterTableTests extends SharedSparkSession with
Query
wangyum commented on code in PR #47998:
URL: https://github.com/apache/spark/pull/47998#discussion_r1976298867
##
sql/hive/src/test/scala/org/apache/spark/sql/hive/TestHiveSuite.scala:
##
@@ -47,4 +47,19 @@ class TestHiveSuite extends TestHiveSingleton with
SQLTestUtils {
te
pan3793 commented on code in PR #50113:
URL: https://github.com/apache/spark/pull/50113#discussion_r1976354240
##
project/SparkBuild.scala:
##
@@ -1002,10 +1002,9 @@ object KubernetesIntegrationTests {
if (excludeTags.exists(_.equalsIgnoreCase("r"))) {
rDocke
sfc-gh-dyadav opened a new pull request, #50120:
URL: https://github.com/apache/spark/pull/50120
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### H
aokolnychyi closed pull request #50044: [SPARK-51290][SQL] Enable filling
default values in DSv2 writes
URL: https://github.com/apache/spark/pull/50044
github-actions[bot] commented on PR #48871:
URL: https://github.com/apache/spark/pull/48871#issuecomment-2691768545
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #48108:
URL: https://github.com/apache/spark/pull/48108#issuecomment-2691768570
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
attilapiros opened a new pull request, #50122:
URL: https://github.com/apache/spark/pull/50122
Thanks to @yorksity, who reported this error and even provided a PR for it.
This solution is very different from https://github.com/apache/spark/pull/40883
as `BlockManagerMasterEndpoint#getLocati
chenhao-db opened a new pull request, #50121:
URL: https://github.com/apache/spark/pull/50121
### What changes were proposed in this pull request?
There is a bug in the initial optimizer rule where the `output` of the
relation will be rebuilt based on the schema of the `HadoopFsRelatio
vladimirg-db commented on PR #50118:
URL: https://github.com/apache/spark/pull/50118#issuecomment-2691659971
@cloud-fan Wenchen, can you please take a look?
garlandz-db commented on PR #50102:
URL: https://github.com/apache/spark/pull/50102#issuecomment-2690060116
@the-sakthi do you know where I can write them? I'm not seeing a relevant
test suite that already includes tests for the analyze plan handler
HeartSaVioR commented on PR #50110:
URL: https://github.com/apache/spark/pull/50110#issuecomment-2691623600
Thanks! Merging to master/4.0.
HeartSaVioR closed pull request #50110: [SPARK-51351][SS] Do not materialize
the output in Python worker for TWS
URL: https://github.com/apache/spark/pull/50110
harshmotw-db commented on code in PR #50108:
URL: https://github.com/apache/spark/pull/50108#discussion_r1975885008
##
sql/core/src/test/scala/org/apache/spark/sql/QueryTest.scala:
##
@@ -326,7 +326,13 @@ object QueryTest extends Assertions {
// For binary arrays, we conver
aokolnychyi commented on code in PR #50044:
URL: https://github.com/apache/spark/pull/50044#discussion_r1976098471
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -3534,7 +3534,8 @@ class Analyzer(override val catalogManager:
CatalogM
ericm-db opened a new pull request, #50119:
URL: https://github.com/apache/spark/pull/50119
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How wa
zecookiez opened a new pull request, #50123:
URL: https://github.com/apache/spark/pull/50123
### What changes were proposed in this pull request?
SPARK-51358
This PR adds detection logic + logging to detect delays in snapshot uploads
across all state store instances
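As a rough illustration of such detection logic (the function name, threshold, and log message are assumptions, not the PR's actual code), a per-instance lag check might look like:

```python
import logging
import time

def snapshot_upload_lagging(last_upload_ts, max_lag_s=300.0, now=None):
    """Return True (and log a warning) when the last snapshot upload for a
    state store instance is older than `max_lag_s` seconds."""
    now = time.time() if now is None else now
    lag = now - last_upload_ts
    if lag > max_lag_s:
        logging.warning("snapshot upload lagging by %.0f s", lag)
        return True
    return False
```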
ahshahid commented on PR #50029:
URL: https://github.com/apache/spark/pull/50029#issuecomment-2691810968
@mridulm @squito,
I am unsure what you mean by marking the RDD inDeterministic without
modifying the RDD code.
1) There is no concrete field in the RDD which marks it in
hvanhovell commented on PR #50023:
URL: https://github.com/apache/spark/pull/50023#issuecomment-2691797769
Merging to master/4.0. Thanks!
hvanhovell commented on PR #50023:
URL: https://github.com/apache/spark/pull/50023#issuecomment-2691389433
@chris-twiner can you fix the style issue?
wangyum commented on code in PR #47998:
URL: https://github.com/apache/spark/pull/47998#discussion_r1976206065
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala:
##
@@ -415,8 +415,11 @@ private[client] class Shim_v2_0 extends Shim with Logging {
t
ueshin commented on code in PR #50099:
URL: https://github.com/apache/spark/pull/50099#discussion_r1976207186
##
python/pyspark/sql/pandas/serializers.py:
##
@@ -175,6 +178,16 @@ def wrap_and_init_stream():
return super(ArrowStreamUDFSerializer,
self).dump_stream(wrap_
wangyum commented on code in PR #47998:
URL: https://github.com/apache/spark/pull/47998#discussion_r1976217724
##
sql/hive/src/test/scala/org/apache/spark/sql/hive/TestHiveSuite.scala:
##
@@ -47,4 +47,19 @@ class TestHiveSuite extends TestHiveSingleton with
SQLTestUtils {
te
pan3793 opened a new pull request, #50113:
URL: https://github.com/apache/spark/pull/50113
### What changes were proposed in this pull request?
As title.
### Why are the changes needed?
When I follow the dev docs to run K8s IT using sbt, and set
`spark.kubernetes
yaooqinn opened a new pull request, #50114:
URL: https://github.com/apache/spark/pull/50114
### What changes were proposed in this pull request?
This PR enables retry for dyn/closer.lua for mvn before falling back to
archive.a.o.
Before this PR, we used `curl` w/o retry to down
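The retry-then-fallback behaviour can be sketched in Python (the actual change edits a `curl` invocation in the build scripts; the URLs, attempt count, and this helper are illustrative):

```python
import time
import urllib.request

def fetch_with_retry(primary_url, fallback_url, attempts=3, delay_s=1.0):
    """Try the primary mirror up to `attempts` times, then fall back
    to the archive URL."""
    for i in range(attempts):
        try:
            with urllib.request.urlopen(primary_url) as resp:
                return resp.read()
        except OSError:
            # urllib's URLError subclasses OSError; retry after a pause.
            if i < attempts - 1:
                time.sleep(delay_s)
    with urllib.request.urlopen(fallback_url) as resp:
        return resp.read()
```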