Hi, is there anyone who uses GraphX in production? What is the maximum size of graph you have processed with Spark, and what cluster do you use for it?
I tried calculating PageRank on a ~1 GB edge list (the LiveJournal dataset) with LiveJournalPageRank from the Spark examples, and I ran into the large volume of shuffles produced by Spark wh
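For context, a minimal sketch of what LiveJournalPageRank boils down to: loading an edge list with GraphX and running PageRank to a convergence tolerance. The file path, app configuration, and tolerance value here are assumptions for illustration, not taken from the question.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader

object PageRankSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PageRankSketch"))

    // Edge list with one "srcId dstId" pair per line,
    // e.g. the LiveJournal dataset (hypothetical path).
    val graph = GraphLoader.edgeListFile(sc, "hdfs:///data/soc-LiveJournal1.txt")

    // Iterates until ranks change by less than the tolerance;
    // each iteration shuffles messages between partitions.
    val ranks = graph.pageRank(0.0001).vertices

    ranks.take(5).foreach(println)
    sc.stop()
  }
}
```

Every PageRank iteration exchanges rank contributions along edges, so shuffle volume grows with the number of edges and iterations; partitioning the graph (e.g. `graph.partitionBy(...)`) before running PageRank is one common way to reduce it.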
Hi All!
I'm using Spark 1.6.1 and I'm trying to transform my DStream as follows:

myStream.transform { rdd =>
  val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
  import sqlContext.implicits._
  val j = rdd.toDS()
  j.map {
    case a => Some(...)
    case _ =
Hello everyone!
I'm trying to get data from a DB2 table whose columns have names with non-ASCII (Cyrillic) symbols, but the JDBC driver returns an error with "SQLCODE=-206" (object-name IS NOT VALID IN THE CONTEXT WHERE IT IS USED), and SQLERRMC contains the name of this column with ";N*.N*" appended, lik
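One workaround that is sometimes used for problematic column names over JDBC is to push a subquery into the `dbtable` option, quoting the Cyrillic column with DB2's own double-quote delimiters and aliasing it to an ASCII name before Spark sees it. The connection URL, schema, table, and column names below are hypothetical, shown only to illustrate the pattern against a Spark 1.6-style `SQLContext`:

```scala
import org.apache.spark.sql.{DataFrame, SQLContext}

def readWithAsciiAlias(sqlContext: SQLContext): DataFrame = {
  sqlContext.read
    .format("jdbc")
    .option("url", "jdbc:db2://dbhost:50000/MYDB")  // hypothetical connection URL
    // Quote the non-ASCII column inside the pushed-down subquery and
    // expose it to Spark under an ASCII alias.
    .option("dbtable", """(SELECT "ИМЯ" AS NAME_COL FROM MYSCHEMA.MYTABLE) AS t""")
    .option("user", "dbuser")          // hypothetical credentials
    .option("password", "secret")
    .load()
}
```

This keeps the driver from ever generating an unquoted reference to the Cyrillic identifier, since every query Spark issues goes through the aliased subquery.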