Re: [Spark SQL, intermediate+] possible bug or weird behavior of insertInto

2021-03-04 Thread Oldrich Vlasic
Thanks for the reply! Is there something to be done, such as setting a config
property? I'd like to prevent users (mainly data scientists) from falling
victim to this.

From: Russell Spitzer 
Sent: Wednesday, March 3, 2021 3:31 PM
To: Sean Owen 
Cc: Oldrich Vlasic ; user ; Ondřej Havlíček 
Subject: Re: [Spark SQL, intermediate+] possible bug or weird behavior of 
insertInto

Yep, this is the behavior for insertInto; the other write APIs do schema
matching, I believe.

On Mar 3, 2021, at 8:29 AM, Sean Owen <sro...@gmail.com> wrote:

I don't have a good answer here, but I seem to recall that this is because
of SQL semantics, which follow column ordering, not naming, when performing
operations like this. It may well be as intended.

On Tue, Mar 2, 2021 at 6:10 AM Oldrich Vlasic
<oldrich.vla...@datasentics.com> wrote:
Hi,

I have encountered a weird and potentially dangerous behaviour of Spark
concerning partial overwrites of partitioned data. I am not sure if this is
a bug or just an abstraction leak. I have checked the Spark section of
Stack Overflow and haven't found any relevant questions or answers.

A full minimal working example is provided as an attachment. Tested on
Databricks runtime 7.3 LTS ML (Spark 3.0.1). Short summary:

Write a dataframe partitioned by a column using saveAsTable. Filter out
part of the dataframe, change some values (simulating a new increment of
data) and write it again, overwriting a subset of partitions using
insertInto. This operation will either fail on schema mismatch or cause
data corruption.

Reason: on the first write, the ordering of the columns is changed (the
partition column is placed at the end). On the second write this is not
taken into consideration and Spark tries to insert values into the columns
based on their order, not their name. If the columns have different types
the write will fail; if not, values will be written to the wrong columns,
corrupting the data.
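[A minimal sketch of the failure mode described above; the "demo" table
name and the data are hypothetical stand-ins for the attached example:]

```python
from pyspark.sql import SparkSession

# Dynamic partition overwrite lets insertInto replace only the touched
# partitions instead of the whole table.
spark = (SparkSession.builder
         .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
         .getOrCreate())

df = spark.createDataFrame(
    [("1", "a", "p1"), ("2", "b", "p2")],
    ["id", "value", "part"],
)

# First write: saveAsTable places the partition column last, so the
# table's column order becomes (id, value, part).
df.write.partitionBy("part").saveAsTable("demo")

# Simulated new increment: same logical schema, different column order.
increment = df.where("part = 'p2'").select("part", "id", "value")

# insertInto resolves columns by POSITION, not by name: here 'part' values
# land in 'id', 'id' in 'value', and so on. With matching types the data
# is silently corrupted; with mismatched types the write fails instead.
increment.write.mode("overwrite").insertInto("demo")
```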

My question: is this a bug or intended behaviour? Can anything be done to
prevent it? The issue can be avoided by doing a select with the schema
loaded from the target table. However, when a user is not aware of it, this
can cause hard-to-track-down errors in the data.

Best regards,
Oldřich Vlašic




Re: [Spark SQL, intermediate+] possible bug or weird behavior of insertInto

2021-03-04 Thread Jeff Evans
Why not perform a df.select(...) before the final write to ensure a
consistent ordering?
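
[A sketch of this fix, reusing the hypothetical "demo" table from the
repro above; aligning the column order with the target table makes
insertInto's positional resolution safe:]

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

increment = spark.table("demo").where("part = 'p2'")  # stand-in for new data
target_cols = spark.table("demo").columns             # e.g. ['id', 'value', 'part']

(increment
 .select(*target_cols)                 # reorder columns by name
 .write.mode("overwrite")
 .insertInto("demo"))
```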



Possible upgrade path from Spark 3.1.1-RC2 to Spark 3.1.1 GA

2021-03-04 Thread Mich Talebzadeh
Hi,

Is there a direct upgrade path from 3.1.1-RC2 to 3.1.1 GA?

If there is, will it involve replacing the Spark binaries?

thanks,

Mich



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw





Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.


Re: Possible upgrade path from Spark 3.1.1-RC2 to Spark 3.1.1 GA

2021-03-04 Thread Sean Owen
I think you're still asking about GCP and Dataproc, and that really has
nothing to do with Spark itself. Whatever issues you are having concern
Dataproc, how it's run, and possibly customizations in Dataproc.
3.1.1-RC2 is not a release, but nothing meaningful changed between it and
the final 3.1.1 release. There is no need for any change to work with
3.1.1.




Re: Possible upgrade path from Spark 3.1.1-RC2 to Spark 3.1.1 GA

2021-03-04 Thread Mich Talebzadeh
Ok, thanks.





RE: Spark Version 3.0.1 Gui Display Query

2021-03-04 Thread Ranju Jain
Hi Attila,

I checked the section
<https://spark.apache.org/docs/latest/monitoring.html#web-interfaces> and
the Web UI page.

The documentation says that if I want to view information only for the
duration of the application, then I do not need to generate event logs,
i.e. I do not need to set spark.eventLog.enabled=true and
spark.eventLog.dir.

But if I want to see this info after the application completes, then I
should persist the logs.
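
[For reference, a minimal sketch of setting those two properties
programmatically; the log directory below is a hypothetical placeholder:]

```python
from pyspark.sql import SparkSession

# Persist UI events so the history server can rebuild the UI after the
# application finishes. The directory is a hypothetical placeholder and
# must exist and be writable by the application.
spark = (SparkSession.builder
         .appName("event-log-demo")
         .config("spark.eventLog.enabled", "true")
         .config("spark.eventLog.dir", "hdfs:///tmp/spark-events")
         .getOrCreate())
```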

My requirement is to monitor the Executors tab only during the job run,
not after. How can I see it only while the application is running?

Regards
Ranju

-Original Message-
From: Attila Zsolt Piros  
Sent: Wednesday, March 3, 2021 10:37 PM
To: user@spark.apache.org
Subject: RE: Spark Version 3.0.1 Gui Display Query

Hi Ranju!

The UI is built up from events. This is why the history server is able to
show the state of a finished app: those events are replayed to rebuild the
state. For details, see the web UI page and this section:
<https://spark.apache.org/docs/latest/monitoring.html#web-interfaces>.

So you should share/look into the event log.

Regards,
Attila







RE: Spark Version 3.0.1 Gui Display Query

2021-03-04 Thread Attila Zsolt Piros
Hi Ranju!

I meant that the event log would be very helpful for analyzing the problem
on your side.

The three logs together (driver, executor, and event) from the same run
are best, of course.

I know you want to check the Executors tab while the job is running, and
for that you do not need the event log. But the event log is still useful
for finding out what happened.

Regards,
Attila







Re: [Spark SQL, intermediate+] possible bug or weird behavior of insertInto

2021-03-04 Thread Oldrich Vlasic
That certainly is a solution if you know about the issue, and we used it in
the end.

I'm trying to find out whether there is a solution that would prevent users
who don't know about it from accidentally corrupting data, something like
an "enable strict schema matching" option.

From: Jeff Evans 
Sent: Thursday, March 4, 2021 2:55 PM
To: Oldrich Vlasic 
Cc: Russell Spitzer ; Sean Owen ; 
user ; Ondřej Havlíček 
Subject: Re: [Spark SQL, intermediate+] possible bug or weird behavior of 
insertInto

Why not perform a df.select(...) before the final write to ensure a consistent 
ordering.



Re: Spark Version 3.0.1 Gui Display Query

2021-03-04 Thread Mich Talebzadeh
Well, I cannot recall this in Spark 3.0.1. However, it looks fine in Spark
3.1.1 (the recent release).

See attached the image from the Executors tab.








On Tue, 2 Mar 2021 at 05:35, Ranju Jain 
wrote:

> Hi,
>
> I started using Spark 3.0.1 recently and noticed that the Executors tab
> on the Spark GUI appears blank.
>
> Please suggest what could be the reason for this type of display?
>
> Regards
>
> Ranju
>


RE: Spark Version 3.0.1 Gui Display Query

2021-03-04 Thread Ranju Jain
Hi Attila,

OK, I understood. I will switch on event logs.

Regards
Ranju




Re: Spark Version 3.0.1 Gui Display Query

2021-03-04 Thread Kapil Garg
Hi Ranju,
The screenshots and logs you shared are from the Spark driver and executor.
I meant for you to check the web page logs in the Chrome console; there
might be error logs indicating why the UI is unable to fetch the
information.

I have faced a similar problem when accessing the Spark UI via a proxy:
the proxy had trouble resolving the backend URL, and data was not visible
in the Executors tab.

Just check the Chrome console logs once, and if you find any error logs,
share them here for others to look at.


-- 
Regards
Kapil Garg


