join condition in the plan, so the underlying Spark could work well. But
sometimes this optimization is skipped, or users run Spark directly without
the Calcite layer, and then the query cannot run.
So should Spark support this kind of statement, as Calcite/PostgreSQL/Presto do?
Thanks,
Lantao
In PostgreSQL and Presto, the query below works well:
sql> create table t1 (id int);
sql> create table t2 (id int);
sql> select * from t1 join t2 on t1.id = floor(random() * 9) + t2.id;
But Spark throws "Error in query: nondeterministic expressions are only
allowed in Project, Filter, Aggregate or Window".
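A possible workaround (a sketch only, using Spark's rand() in place of
random(); the alias shifted_id is made up for illustration) is to evaluate the
nondeterministic expression in a subquery projection, where Spark does allow
it, and join on the resulting column:

sql> select * from t1 join
       (select id, cast(floor(rand() * 9) as int) + id as shifted_id from t2) t2s
     on t1.id = t2s.shifted_id;  -- rand() now sits in a Project, not in the join condition

The semantics differ slightly: the random offset is computed once per t2 row
rather than per join comparison, but that is usually the intent of such a query.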
Lantao Jin shared an issue with you
SPIP: Support Spark Materialized View
> SPIP: Support Spark Materialized View
> -
>
> Key: SPARK-29038
> URL: https://issues.
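For context, a materialized view in engines that already support it
(PostgreSQL, Hive, etc.) is a query whose result is persisted and can be
refreshed; the SPIP proposes comparable support in Spark. A minimal sketch in
PostgreSQL syntax, with a hypothetical view name (this is not Spark syntax):

sql> create materialized view mv_t2_counts as select id, count(*) as cnt from t2 group by id;
sql> refresh materialized view mv_t2_counts;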
Lantao Jin shared an issue with you
Hi all,
Do you think it is a bug?
Should we still keep the current behavior?
> Ignoring the default properties file is not a good choice from the
> perspective of
Lantao Jin shared an issue with you
> Spark-sql do not support for void column datatype of view
> -
>
> Key: SPARK-20680
> URL: https://issues.