You can work around it by leveraging expr, e.g., expr("unix_micros(col)"),
for now; see the sketch below.
FWIW, we should have the Scala binding first before we add the Python one.
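Here is a minimal PySpark sketch of that workaround, under some assumptions:
the column name "ts" and the sample DataFrame are hypothetical, and a
SparkSession named spark is assumed.

from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.getOrCreate()

# Hypothetical DataFrame with a timestamp column named "ts".
df = spark.sql("SELECT timestamp'2022-10-15 06:19:00' AS ts")

# No Python binding for unix_micros yet, so route the call through expr,
# which hands the expression straight to the Spark SQL parser.
df.select(expr("unix_micros(ts)").alias("micros")).show()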
On Sat, 15 Oct 2022 at 06:19, Martin wrote:
> Hi everyone,
>
> In *Spark SQL* there are several timestamp-related functions
>
>- unix_micro
Glad to hear it!
On Sun, Oct 16, 2022 at 2:37 PM Mohammad Abdollahzade Arani <
mamadazar...@gmail.com> wrote:
> Hi Qian,
> Thanks for the reply, and I'm so sorry for the late reply.
> I found the answer. My mistake was in the token conversion: I had to
> base64-decode the service account's token and cert.
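For anyone who hits the same issue, a minimal Python sketch of that decoding
step looks like the following. The encoded strings and the output file name
are hypothetical stand-ins for the values you would copy out of a Kubernetes
service account secret, where both fields are stored base64-encoded.

import base64

# Hypothetical values copied from a Kubernetes service account secret;
# both fields arrive base64-encoded and must be decoded before use.
encoded_token = "dG9rZW4tZnJvbS1zZWNyZXQ="
encoded_ca_cert = "Y2VydC1mcm9tLXNlY3JldA=="

token = base64.b64decode(encoded_token).decode("utf-8")

# Write the decoded CA cert to a file so it can be handed to whatever
# client config needs a cert file path (file name is hypothetical).
with open("ca.crt", "wb") as f:
    f.write(base64.b64decode(encoded_ca_cert))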
Spark doesn't offer a native graph database like Neo4j does, since GraphX
is still built on the RDD tabular data structure. Spark doesn't have a GQL
or Cypher query engine either; it uses Google's Pregel API for graph
processing. I don't see any prospect that Spark is going to implement any
types