Dear All,
Is there any update regarding Graph Streaming? I want to update a graph,
i.e., add vertices and edges after the graph has been created.
Any suggestions or recommendations on how to do that would be appreciated.
Thanks,
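
GraphX graphs are immutable, so one common workaround is to union new vertex
and edge RDDs with the existing ones and build a fresh graph. A minimal
sketch, assuming String vertex attributes and Int edge attributes (both
placeholders):

import org.apache.spark.graphx.{Edge, Graph, VertexId}
import org.apache.spark.rdd.RDD

def addToGraph(graph: Graph[String, Int],
               newVertices: RDD[(VertexId, String)],
               newEdges: RDD[Edge[Int]]): Graph[String, Int] = {
  // Build a new graph from the union of the old and new vertex/edge RDDs.
  Graph(graph.vertices.union(newVertices),
        graph.edges.union(newEdges),
        "missing") // default attribute for vertices referenced only by edges
}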
Hi,
I am wondering whether there is any ongoing work on optimizing GraphX.
I am aware of GraphFrames, which is built on DataFrames. However, is there
any plan to build a version of GraphX on the newer Spark APIs, i.e.,
Datasets or Spark 2.0?
Furthermore, is there any plan to incorporate Graph Streaming?
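
For reference, GraphFrames constructs a graph directly from two DataFrames.
A minimal sketch, assuming a SparkSession named spark and made-up sample
data (the "id", "src", and "dst" column names are required by GraphFrames):

import org.graphframes.GraphFrame

val vertices = spark.createDataFrame(Seq(
  (1L, "Alice"), (2L, "Bob"))).toDF("id", "name")
val edges = spark.createDataFrame(Seq(
  (1L, 2L, "follows"))).toDF("src", "dst", "relationship")
// Vertices need an "id" column; edges need "src" and "dst" columns.
val g = GraphFrame(vertices, edges)
g.inDegrees.show() // DataFrame-based graph query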
Hi,
Just to continue with the question:
I need to find the edges of one particular vertex. However,
collectNeighbors/collectNeighborIds return the neighbors/neighbor IDs for
all the vertices of the graph, not just one.
Any help in this regard would be highly appreciated.
Thanks,
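
One way to get only the edges incident to a single vertex is to filter the
edge RDD directly instead of using the collectNeighbors aggregators. A
minimal sketch, assuming an existing graph: Graph[VD, ED] and a placeholder
vertex ID:

import org.apache.spark.graphx.VertexId

val targetId: VertexId = 42L // hypothetical vertex of interest
// Keep only edges where the target vertex is the source or the destination.
val incident = graph.edges.filter(e => e.srcId == targetId || e.dstId == targetId)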
Hi all!
I am trying to install Spark on my standalone machine. I am able to run the
master, but when I try to run the slaves it gives me the following error.
Any help in this regard would be highly appreciated.
localhost: failed to launch org
Hi All,
I am trying to build Spark 1.3.0 on a standalone Ubuntu 14.04 machine, using
the "sbt/sbt assembly" command. This command works fine with Spark 1.1.0,
but for Spark 1.3.0 it gives the following error.
Any help or suggestions to resolve this problem will be highly appreciated.
// Parse (id, (lat, lon)) pairs. Note: the original called collect(), which
// returns a local Array with no cartesian() method, so the RDD must stay
// distributed here.
val locations = filelines.map(line => line.split("\t"))
  .map(t => (t(5).toLong, (t(2).toDouble, t(3).toDouble)))
  .distinct()
// Build an edge between every pair of locations, weighted by their distance.
val cartesianProduct = locations.cartesian(locations).map(t =>
  Edge(t._1._1, t._2._1,
    distanceAmongPoints(t._1._2._1, t._1._2._2, t._2._2._1, t._2._2._2)))
Code executes p
I have a big data file and I aim to create an index on the data. I want to
partition the data based on a user-defined function in Spark GraphX (Scala).
Furthermore, I want to keep track of the node on which a particular data
partition is sent and processed, so I can fetch the required data by accessing
the
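
One possible way to track which partition holds which keys is
mapPartitionsWithIndex. A minimal sketch, assuming a keyed RDD of
(Long, String) records (all names are placeholders):

import org.apache.spark.rdd.RDD

// Build a small index: for every partition, record the keys it contains,
// so the right partition can be targeted later when fetching data.
def buildIndex(data: RDD[(Long, String)]): Map[Int, List[Long]] =
  data.mapPartitionsWithIndex { (partId, iter) =>
    // Emit one (partitionId, keys) record per partition.
    Iterator((partId, iter.map(_._1).toList))
  }.collect().toMap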
A very basic but strange problem:
On running the master I am getting the following error.
My Java path is set correctly; however, the spark-class script fails because
the string "bin/java" is duplicated in the command it builds. Can anybody
explain why this happens?
Error:
/bin/spark-class: line 190: exec:
/u
Hi,
I aim to do custom partitioning on a text file. I first convert it into a
pair RDD and then try to use my custom partitioner. However, somehow it is
not working. My code snippet is given below.
val file = sc.textFile(filePath)
// The original snippet is truncated here; keying each record by its third
// field with .toLong is an assumed completion for illustration.
val locLines = file.map(line => line.split("\t"))
  .map(fields => (fields(2).toLong, fields))
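
For reference, a custom partitioner extends org.apache.spark.Partitioner and
is applied with partitionBy on a pair RDD. A minimal sketch; the
modulo-on-key scheme is only an example:

import org.apache.spark.Partitioner

// Send each key to a partition chosen by a user-defined rule
// (here: key modulo the number of partitions).
class KeyModPartitioner(parts: Int) extends Partitioner {
  override def numPartitions: Int = parts
  override def getPartition(key: Any): Int = {
    val m = (key.asInstanceOf[Long] % parts).toInt
    if (m < 0) m + parts else m // keep the result non-negative
  }
}

val partitioned = locLines.partitionBy(new KeyModPartitioner(8))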
Hi,
How does GraphX store the routing table? Is it stored on the master node, or
are chunks of the routing table sent to each partition, which then maintains
the record of the vertices and edges at that node?
If only customized edge partitioning is performed, will the corresponding
vertices be sent to the same partition?
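
For context, customized edge partitioning is applied with Graph.partitionBy,
which moves only the edges. A minimal sketch using one of the built-in
strategies:

import org.apache.spark.graphx.PartitionStrategy

// partitionBy repartitions only the edges; vertices keep their own
// partitioning, and a routing table stored alongside each vertex partition
// records which edge partitions need a copy of each vertex.
val repartitioned = graph.partitionBy(PartitionStrategy.EdgePartition2D)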
Hi All,
I have distributed my RDD over, say, 10 nodes. I want to fetch the data that
resides on a particular node, say "node 5". How can I achieve this?
I have tried the mapPartitionsWithIndex function to filter the data of the
corresponding node; however, it is pretty expensive.
Is there any efficient way to do this?
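
For what it is worth, mapPartitionsWithIndex is lazy, and if only the
matching partition's iterator is consumed, little data actually moves. A
minimal sketch; partition 5 stands in for "node 5", since Spark addresses
partitions rather than physical nodes, and rdd is assumed to exist:

// Non-matching partitions emit nothing, so collecting this moves
// only the data of partition 5.
val partFive = rdd.mapPartitionsWithIndex(
  (idx, iter) => if (idx == 5) iter else Iterator.empty,
  preservesPartitioning = true)

SparkContext.runJob can also be pointed at a single partition index, e.g.
sc.runJob(rdd, (it: Iterator[String]) => it.toArray, Seq(5)).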
I want to know how GraphX traverses a graph internally. Is it a vertex- and
edge-based traversal or a sequential traversal of the underlying RDDs? For
example, given a vertex of the graph, I want to fetch only its neighbors,
not the neighbors of all the vertices. How will GraphX traverse the graph in
this case?
Thanks,
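
One way to fetch the neighbors of just one vertex is a single pass over the
triplets. A minimal sketch, assuming an existing graph and a placeholder
vertex ID:

import org.apache.spark.graphx.VertexId

val vid: VertexId = 1L // hypothetical vertex of interest
// Keep the far end of every edge that touches vid. GraphX exposes no
// per-vertex lookup for this, so the scan still visits every edge partition.
val neighborIds = graph.triplets.flatMap { t =>
  if (t.srcId == vid) Iterator(t.dstId)
  else if (t.dstId == vid) Iterator(t.srcId)
  else Iterator.empty
}.distinct()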