fine. Any idea of what I can do?
*Alessandro Liparoti*
er, so this code is probably not effective.
The other option would be to try-catch all public methods of my API, log the
exception when it occurs, and then rethrow it. But I think this is not
optimal.
Do you have any suggestions?
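A minimal sketch of the boundary-logging idea described above, in plain Scala (the names `ApiErrorBoundary`, `logged`, and `MyApi` are hypothetical, not from the original thread): instead of hand-writing a try-catch in every public method, each method delegates to one helper that logs the failure and rethrows it unchanged, so callers still see the original exception.

```scala
// Hypothetical helper: runs the given body, logs any exception together with
// the method name, and rethrows it unchanged so the caller's behavior is
// not altered.
object ApiErrorBoundary {
  def logged[T](methodName: String)(body: => T): T =
    try body
    catch {
      case e: Throwable =>
        // In a real API, replace println with your logger (e.g. slf4j).
        println(s"[$methodName] failed: ${e.getMessage}")
        throw e
    }
}

// Example public API class: each public method is a one-line wrapper,
// so the logging concern lives in a single place.
class MyApi {
  def divide(a: Int, b: Int): Int =
    ApiErrorBoundary.logged("divide") { a / b }
}
```

With this shape, `new MyApi().divide(1, 0)` logs the `ArithmeticException` once at the API boundary and then rethrows it; successful calls pass through untouched.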
*Alessandro Liparoti*
e ivy log shows when starting up the shell)
Do you have any idea what the problem might be? Does py4j perhaps cache
something, so that we get this error when upgrading the classes?
*Alessandro Liparoti*
solve this? It is also very hard to
debug, since I couldn't find a pattern to reproduce it. It happens on every
release that changes a class name, but not for everyone running the job
(that's why caching looked like a good hint to me).
Thanks,
*Alessandro Liparoti*
f
these 2 apparently different operations?
Thanks,
*Alessandro Liparoti*
s at runtime.
Which approach do you suggest I use? Is there a syntactic check in
the Spark codebase that I can use for my use case?
*Alessandro Liparoti*
how it works?
Thanks,
*Alessandro Liparoti*
doing something wrong.
Thanks,
*Alessandro Liparoti*
Good morning,
I am trying to see how this bug affects writes in Spark 2.2.0, but I
cannot reproduce it. Is it then OK to use the code
df.write.mode(SaveMode.Overwrite).insertInto("table_name")
?
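For context on the line quoted above, a hedged sketch of how that call is typically used (assuming a running SparkSession with Hive support and an already-existing table; the table name `table_name` and the DataFrame contents are placeholders). One caveat worth noting: `insertInto` resolves columns by position, not by name, so the DataFrame's column order must match the target table's schema.

```scala
// Sketch only: requires a live Spark cluster / Hive metastore to actually run.
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("overwrite-insertInto-example")
  .enableHiveSupport()
  .getOrCreate()

// Placeholder DataFrame; its column ORDER must match the existing table,
// because insertInto matches columns by position, not by name.
val df = spark.range(10).selectExpr("id", "id * 2 AS value")

// Overwrites the existing table's data while keeping the table's own schema
// and metadata (unlike saveAsTable, which can replace the table definition).
df.write.mode(SaveMode.Overwrite).insertInto("table_name")
```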
Thank you,
*Alessandro Liparoti*