Moving the discussion back to this thread. The current question is how to
avoid extra RPC calls for catalogs that support both tables and views. There
are several options:
1. ignore it, as extra RPC calls are cheap compared to the query execution
2. have a per session cache for loaded table/view
3. have
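Option 2 above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the class and method names are not from Spark): a session-scoped cache that remembers the result of loading an identifier, so that resolving the same identifier as both a table and a view costs one catalog RPC instead of two.

```python
class SessionCatalogCache:
    """Hypothetical per-session cache for loaded tables/views (option 2)."""

    def __init__(self, load_fn):
        self._load_fn = load_fn   # e.g. the RPC that loads a table/view
        self._cache = {}
        self.rpc_calls = 0        # counts actual catalog round trips

    def load(self, identifier):
        # The first resolution pays the RPC; subsequent lookups (e.g. trying
        # the same identifier as a view after trying it as a table) hit the
        # cache and make no further calls.
        if identifier not in self._cache:
            self.rpc_calls += 1
            self._cache[identifier] = self._load_fn(identifier)
        return self._cache[identifier]
```

For example, loading `db.t` twice in the same session would trigger only one call to the underlying catalog. Invalidation on DDL is deliberately omitted here.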
Exciting & look forward to this!
(And a late +1 vote that probably won't be counted)
On Mon, Nov 09, 2020 at 2:37 PM, Allison Wang <allison.w...@databricks.com>
wrote:
>
> Thanks everyone for voting! With 11 +1s and no -1s, this vote passes.
>
> +1s:
> Mridul Muralidharan
> Angers Zhu
> Chandni Singh
> Eve Liao
> Matei Zaharia
> Kalyan
> Wenchen Fan
> Gengliang Wang
> Xiao Li
> Takeshi Yamamuro
> Herman van Hovell
>
> Thanks,
> Allison
Hello,
When I run PySpark to save to a PostgreSQL database, I run into an error
where uuid insert statements are not constructed properly. There are several
questions on Stack Overflow about the same issue, e.g.:
https://stackoverflow.com/questions/64671739/pyspark-nullable-uuid-type-uuid-bu
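A commonly suggested workaround for this class of error (a sketch, not verified against this exact report) is to pass `stringtype=unspecified` in the PostgreSQL JDBC URL, so the server coerces the string-typed column into the `uuid` column itself. The table name and credentials below are placeholders:

```python
# Hypothetical workaround sketch: with stringtype=unspecified, Postgres will
# cast string parameters to the target column type (e.g. uuid) server-side.
jdbc_url = "jdbc:postgresql://localhost:5432/mydb?stringtype=unspecified"

write_options = {
    "url": jdbc_url,
    "dbtable": "events",                 # placeholder table with a uuid column
    "user": "postgres",                  # placeholder credentials
    "password": "secret",
    "driver": "org.postgresql.Driver",
}

# With a SparkSession in scope, the write would look like:
# df.withColumn("id", df["id"].cast("string")) \
#   .write.format("jdbc").options(**write_options).mode("append").save()
```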
On Mon, 12 Oct 2020 at 19:06, Sean Owen wrote:
> I don't have a good answer, Steve may know more, but from looking at
> dependency:tree, it looks mostly like it's hadoop-common that's at issue.
> Without -Phive it remains 'provided' in the assembly/ module, but -Phive
> causes it to come back in.
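One way to see this (assuming a Spark source checkout; `build/mvn` and the `assembly/` module are from the Spark repo layout) is to compare the resolved scope of hadoop-common with and without the `-Phive` profile:

```shell
# Compare hadoop-common's scope in the assembly module with and without -Phive.
# Expect it to show as 'provided' in the first run but not the second.
./build/mvn -q dependency:tree -pl assembly | grep hadoop-common
./build/mvn -q -Phive dependency:tree -pl assembly | grep hadoop-common
```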
+1
On Mon, Nov 9, 2020 at 2:06 AM Takeshi Yamamuro
wrote:
> +1
>
> On Thu, Nov 5, 2020 at 3:41 AM Xinyi Yu wrote:
>
>> Hi all,
>>
>> We had the discussion of SPIP: Standardize Spark Exception Messages at
>>
>> http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-SPIP-Standardize-Sp