Yeah.. let's stick to Python 3 in general ..
I plan to drop Python 2 completely right after the Spark 3.0 release.
The exception you're facing looks like run_cmd now produces unicode instead
of bytes in Python 2 with the merge script; later, that unicode seems
to be implicitly cast back to bytes.
I remember merging PRs with non-ascii chars in the past...
Anyway, for these scripts, might be easier to just use python3 for
everything, instead of trying to keep them working on two different
versions.
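For illustration, a minimal sketch of the kind of explicit decoding that sidesteps the problem on Python 3 (this run_cmd is a hypothetical stand-in, not the actual merge-script helper):

```python
import subprocess
import sys

def run_cmd(cmd):
    # subprocess returns bytes on Python 3; decode explicitly as UTF-8
    # rather than relying on implicit unicode<->bytes coercion, so
    # non-ASCII author names survive intact.
    return subprocess.check_output(cmd).decode("utf-8")

# a command whose output contains non-ASCII characters
author = run_cmd([sys.executable, "-c", "print('José Díaz')"]).strip()
print(author)
```

Since the decode is explicit, there is no point at which the interpreter has to guess an encoding, which is exactly where Python 2 trips on non-ASCII input.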
On Fri, Nov 8, 2019 at 10:28 AM Sean Owen wrote:
Ah OK. I think it's the same type of issue that the last change
actually was trying to fix for Python 2. Here it seems like the author
name might have non-ASCII chars?
I don't immediately know enough to know how to resolve that for Python
2. Something with how raw_input works, I take it. You could
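As a side note, the "same type of issue" can be sketched like this: Python 2 silently coerced between str and unicode using ASCII (raising UnicodeDecodeError only when non-ASCII input showed up), while Python 3 refuses to mix bytes and str at all, so the bug surfaces immediately. A minimal illustration on Python 3:

```python
# Python 3 refuses to concatenate bytes and str, so the problem
# surfaces immediately instead of blowing up only on non-ASCII
# input as Python 2's implicit ASCII coercion did.
try:
    b"Author: " + "José"
except TypeError as err:
    print("cannot mix bytes and str:", err)

# the explicit fix: decode the bytes (or encode the str) first
print(b"Author: ".decode("utf-8") + "José")
```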
Something related to non-ASCII characters. Worked fine with python 3.
git branch -D PR_TOOL_MERGE_PR_26426_MASTER
Traceback (most recent call last):
File "./dev/merge_spark_pr.py", line 577, in <module>
main()
File "./dev/merge_spark_pr.py", line 552, in main
merge_hash = merge_pr(pr_num, targ
Hm, the last change was on Oct 1, and should have actually helped it
still work with Python 2:
https://github.com/apache/spark/commit/2ec3265ae76fc1e136e44c240c476ce572b679df#diff-c321b6c82ebb21d8fd225abea9b7b74c
Hasn't otherwise changed in a while. What's the error?
On Fri, Nov 8, 2019 at 11:37
Hey all,
Something broke that script when running with python 2.
I know we want to deprecate python 2, but in that case, scripts should
at least be changed to use "python3" in the shebang line...
--
Marcelo