at org.apache.spark.sql.functions$.lit(functions.scala:96)
at org.apache.spark.sql.Column.$less(Column.scala:384)
at org.apache.spark.sql.Column.lt(Column.scala:399)
at Main.main(Main.java:38)
How should I filter on a datetime column in a Dataset filter, please?
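For reference, a minimal Scala sketch of one way to filter on a timestamp column; the data and the column name event_time are made up for illustration. lit() accepts java.sql.Timestamp, which avoids the lit() failure in the trace above that typically comes from passing an unsupported date/time type such as java.util.Date. The same calls (col, lit, Column.lt, Dataset.filter) also exist in the Java API.

import java.sql.Timestamp
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit}

val spark = SparkSession.builder().master("local[*]").appName("timestamp-filter-sketch").getOrCreate()
import spark.implicits._

// Made-up data with a timestamp column named "event_time".
val events = Seq(
  (1, Timestamp.valueOf("2018-04-10 09:00:00")),
  (2, Timestamp.valueOf("2018-04-12 09:00:00"))
).toDF("id", "event_time")

// Compare against a java.sql.Timestamp literal; lit() supports this type,
// so Column.lt can build the literal without throwing.
events.filter(col("event_time").lt(lit(Timestamp.valueOf("2018-04-11 00:00:00")))).show()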
1427357...@qq.com
Hi yucai,
It works well now.
Thanks.
1427357...@qq.com
From: Yu, Yucai
Date: 2018-04-11 16:01
To: 1427357...@qq.com; spark users
Subject: Re: how to use the sql join in java please
Do you really want to do a cartesian product on those two tables?
If yes, you can set
at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
Table A and table B don't have any column in common.
What can I do, please?
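For reference, a minimal Scala sketch of a cartesian product between two tables that share no column. The tables are made up, and using crossJoin (or enabling spark.sql.crossJoin.enabled for a join with no condition) is an assumption about the intended approach, since the reply above is truncated.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("cross-join-sketch").getOrCreate()
import spark.implicits._

// Made-up tables with no column in common.
val tableA = Seq(1, 2, 3).toDF("a_id")
val tableB = Seq("x", "y").toDF("b_name")

// Dataset.crossJoin produces the cartesian product explicitly; a plain
// join with no condition also works once spark.sql.crossJoin.enabled=true.
tableA.crossJoin(tableB).show()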
QQ GROUP:296020884
1427357...@qq.com
Hi,
I checked the code.
It seems hard to change.
In the current code, string + int is translated to double + double.
If I change string + int to string + string, it will be incompatible with
old versions.
Does anyone have a better idea about this issue, please?
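To illustrate the coercion described above, a small Scala sketch; the expected output in the comment reflects current behavior and is not taken from the thread.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("plus-coercion-sketch").getOrCreate()

// Both operands of + are coerced to double, so a numeric-looking string is
// added arithmetically, and a non-numeric string casts to null.
spark.sql("SELECT '3' + 2 AS coerced, 'a' + 1 AS non_numeric").show()
// Expected:
// +-------+-----------+
// |coerced|non_numeric|
// +-------+-----------+
// |    5.0|       null|
// +-------+-----------+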
1427357...@qq.com
Hi,
Using concat is one of the ways.
But + is more intuitive and easier to understand.
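For comparison, a minimal Scala sketch of the concat route; the column names are made up.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat, lit}

val spark = SparkSession.builder().master("local[*]").appName("concat-sketch").getOrCreate()
import spark.implicits._

val df = Seq(("spark", 2)).toDF("name", "version")

// concat gives string concatenation today; the numeric column has to be
// cast to string explicitly, which is the verbosity the thread is about.
df.select(concat(col("name"), lit("-"), col("version").cast("string")).as("tag")).show()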
1427357...@qq.com
From: Shmuel Blitz
Date: 2018-03-26 15:31
To: 1427357...@qq.com
CC: spark users; dev
Subject: Re: the issue about the + in column, can we support the string please?
Hi,
you can get the
      defineCodeGen(ctx, ev,
        (eval1, eval2) => s"(${ctx.javaType(dataType)})($eval1 $symbol $eval2)")
    case CalendarIntervalType =>
      defineCodeGen(ctx, ev, (eval1, eval2) => s"$eval1.add($eval2)")
    case _ =>
      defineCodeGen(ctx, ev, (eval1, eval2) => s"$eval1 $symbol $eval2")
the samplePointsPerPartitionHint.
My issue is:
What is samplePointsPerPartitionHint used for, please?
If I set samplePointsPerPartitionHint to 100 or 20, what will happen, please?
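For reference: samplePointsPerPartitionHint is the RangePartitioner constructor parameter that controls how many keys are sampled per output partition when the partitioner estimates its range bounds. A larger value (for example 100 instead of the default 20) means more sampling work up front but usually more evenly sized partitions on skewed keys; it does not change the number of partitions. A minimal Scala sketch, with a made-up RDD:

import org.apache.spark.RangePartitioner
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("range-partitioner-sketch").getOrCreate()

// Made-up key/value RDD, just to have something to partition.
val pairs = spark.sparkContext.parallelize(1 to 100000).map(i => (i % 1000, i))

// Default hint is 20 sample points per partition; compare with 100.
val sizesDefault = pairs.partitionBy(new RangePartitioner(8, pairs))
  .mapPartitions(it => Iterator(it.size)).collect()
val sizesLarger = pairs.partitionBy(
    new RangePartitioner(8, pairs, ascending = true, samplePointsPerPartitionHint = 100))
  .mapPartitions(it => Iterator(it.size)).collect()

println(s"hint=20:  ${sizesDefault.mkString(", ")}")
println(s"hint=100: ${sizesLarger.mkString(", ")}")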
Thanks.
Robin Shao
1427357...@qq.com