Re: [Feature Request] make unix_micros() and unix_millis() available in PySpark (pyspark.sql.functions)

2022-10-16 Thread Hyukjin Kwon
You can work around it by leveraging expr, e.g., expr("unix_micros(col)") for now. We should have the Scala binding first before we add the Python one, FWIW.

On Sat, 15 Oct 2022 at 06:19, Martin wrote:
> Hi everyone,
>
> In *Spark SQL* there are several timestamp related functions
>
> - unix_micro
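A minimal PySpark sketch of the suggested expr() workaround (the column name and sample data are illustrative; unix_micros() and unix_millis() are Spark SQL built-ins since 3.1):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import expr, to_timestamp

    spark = SparkSession.builder.getOrCreate()

    # Toy DataFrame with a single timestamp column named "ts".
    df = (spark.createDataFrame([("2022-10-16 12:00:00",)], ["ts_str"])
               .withColumn("ts", to_timestamp("ts_str")))

    # No pyspark.sql.functions binding exists yet, so route the call
    # through expr(), which parses any Spark SQL expression.
    df.select(
        expr("unix_micros(ts)").alias("micros"),
        expr("unix_millis(ts)").alias("millis"),
    ).show()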

Re: spark on kubernetes

2022-10-16 Thread Qian Sun
Glad to hear it!

On Sun, Oct 16, 2022 at 2:37 PM Mohammad Abdollahzade Arani <mamadazar...@gmail.com> wrote:
> Hi Qian,
> Thanks for the reply, and I'm so sorry for the late reply.
> I found the answer. My mistake was token conversion. I had to
> base64-decode the service account's token and cert
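For context, Kubernetes stores Secret data base64-encoded, so values copied straight out of a service-account Secret must be decoded before Spark can use them. A minimal sketch with dummy placeholder values (in reality they would come from the Secret's data.token and data."ca.crt" fields, e.g. via `kubectl get secret <name> -o yaml`):

    import base64

    # Dummy base64 values standing in for what kubectl shows; Kubernetes
    # stores Secret data base64-encoded.
    encoded_token = base64.b64encode(b"example-token").decode("ascii")
    encoded_cert = base64.b64encode(b"-----BEGIN CERTIFICATE-----").decode("ascii")

    # Decode before handing them to Spark, e.g. via
    # spark.kubernetes.authenticate.submission.oauthToken and
    # spark.kubernetes.authenticate.submission.caCertFile.
    token = base64.b64decode(encoded_token).decode("utf-8")
    ca_cert = base64.b64decode(encoded_cert)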

Re: How to use neo4j cypher/opencypher to query spark RDD/graphdb

2022-10-16 Thread Artemis User
Spark doesn't offer a native graph database like Neo4j does, since GraphX is still built on the RDD tabular data structure. Spark doesn't have a GQL or Cypher query engine either, but uses Google's Pregel API for graph processing. I don't see any prospect that Spark is going to implement any types
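To illustrate the Pregel model the reply refers to: GraphX exposes Pregel only through its Scala API, but the superstep idea (vertices send messages along edges, then merge incoming messages into their state) can be sketched over plain RDDs in PySpark. A toy single-source shortest-path example with made-up graph data, not GraphX's actual implementation:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    sc = spark.sparkContext

    # Toy weighted graph: (src, dst, weight).
    edges = sc.parallelize([(1, 2, 1.0), (2, 3, 2.0), (1, 3, 5.0)])
    # Vertex state: current best-known distance from source vertex 1.
    dist = sc.parallelize([(1, 0.0), (2, float("inf")), (3, float("inf"))])

    for _ in range(3):  # a fixed number of supersteps, for brevity
        # Superstep, phase 1: each vertex sends (its distance + edge
        # weight) as a message to each out-neighbor.
        msgs = (edges.map(lambda e: (e[0], (e[1], e[2])))
                     .join(dist)
                     .map(lambda kv: (kv[1][0][0], kv[1][1] + kv[1][0][1])))
        # Superstep, phase 2: each vertex merges incoming messages with
        # its current value (here: keep the minimum distance).
        dist = dist.union(msgs).reduceByKey(min)

    print(sorted(dist.collect()))  # [(1, 0.0), (2, 1.0), (3, 3.0)]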