Hi,
Does anyone have an idea about this?
When I use graph.vertices.collect() in the Spark console, I only get back a
limited amount of vertex data, as I have millions of records:
res34: Array[(org.apache.spark.graphx.VertexId, (Array[Double],
Array[Double], Double, Double))] =
Array((8501952,(Array(-1.0720014023085627
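For what it's worth, the REPL truncates large collect() results, and collecting millions of vertices to the driver is risky anyway. A minimal sketch of safer alternatives, assuming the graph variable from above (the output path is a placeholder):

```scala
// Pull only a small sample back to the driver instead of collect()
graph.vertices.take(10).foreach(println)

// Count the vertices without materializing them on the driver
val numVertices = graph.vertices.count()

// Or write the full vertex set out to storage instead of printing it
graph.vertices.saveAsTextFile("hdfs:///tmp/vertices")
```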
Hi,
Where can I find the ALS recommendation algorithm for a large data set?
Please feel free to share your ideas/algorithms/logic for building a
recommendation engine using Spark GraphX.
Thanks in advance.
Thanks,
Balaji
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nab
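As a starting point, ALS ships with Spark's MLlib rather than GraphX, and it scales to large data sets. A minimal sketch for the Spark shell (the file path, rank, and iteration count are placeholders to tune):

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}

// ratings.csv lines look like: userId,productId,rating (path is a placeholder)
val ratings = sc.textFile("ratings.csv").map { line =>
  val Array(user, product, rating) = line.split(',')
  Rating(user.toInt, product.toInt, rating.toDouble)
}

// Train a matrix-factorization model with rank 10 and 10 iterations
val model = ALS.train(ratings, 10, 10)

// Print the top-5 product recommendations for user 1
model.recommendProducts(1, 5).foreach(println)
```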
Hi,
Is a bipartite projection possible with GraphX?
Rdd1
#id name
1 x1
2 x2
3 x3
4 x4
5 x5
6 x6
7 x7
8 x8
Rdd2
#id name
10001 y1
10002 y2
10003 y3
10004 y4
10005 y5
10006 y6
EdgeList
#src id dest id
1 10001
1 10002
2
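GraphX has no built-in bipartite projection, but one can be sketched with a self-join on the edge list: two x-nodes become connected whenever they share a y-neighbour. A minimal sketch for the Spark shell, assuming the disjoint id ranges above (only the first few edges are shown as sample data):

```scala
import org.apache.spark.graphx.Graph

// Bipartite edge list as (x id, y id) pairs, per the EdgeList above
val edges = sc.parallelize(Seq((1L, 10001L), (1L, 10002L), (2L, 10001L)))

// Key by the y side, then self-join: x nodes sharing a y become linked
val byY = edges.map { case (x, y) => (y, x) }
val projected = byY.join(byY)
  .map { case (_, (x1, x2)) => (x1, x2) }
  .filter { case (x1, x2) => x1 < x2 } // drop self-pairs and mirrored duplicates
  .distinct()

// Build the projected graph over the x partition only
val projGraph = Graph.fromEdgeTuples(projected, defaultValue = 0)
```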
Hi All,
I would like to know about Spark GraphX execution/processing with a
database. Yes, I understand Spark GraphX is in-memory processing, and to
some extent we can manage querying, but I would like to do much more complex
queries or processing. Please suggest a use case or steps for the same.
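One common pattern is to keep the graph in a relational database and load it into GraphX for the heavy processing. A minimal sketch, assuming an edge table with (src, dst, label) columns; the JDBC URL, table, and column positions are placeholders:

```scala
import org.apache.spark.graphx.{Edge, Graph}

// Load the edge table from a relational database via JDBC
val edgesDF = sqlContext.read.format("jdbc")
  .option("url", "jdbc:postgresql://dbhost/graphdb") // placeholder URL
  .option("dbtable", "edges")
  .load()

// Convert rows to GraphX edges and build the graph in memory
val edgeRDD = edgesDF.rdd.map(r => Edge(r.getLong(0), r.getLong(1), r.getString(2)))
val graph = Graph.fromEdges(edgeRDD, defaultValue = "unknown")
```

Results of the in-memory processing can then be written back to the database the same way, keeping GraphX for the complex traversals and the database for storage and simple queries.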
Hi, thanks for the reply.
Here is my code:

class BusStopNode(val name: String, val mode: String, val maxpasengers: Int)
  extends Serializable

case class busstop(override val name: String, override val mode: String,
    val shelterId: String, override val maxpasengers: Int)
  extends BusStopNode(name, mode, maxpasengers)
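For context, a hierarchy like this can serve directly as the vertex attribute type, with the vertices typed by the common superclass so different node kinds can coexist in one graph. A minimal sketch in the Spark shell (the ids, names, and edge attribute are made up):

```scala
import org.apache.spark.graphx.{Edge, Graph}

// Upcast to BusStopNode so other subclasses could share the same graph
val stops = sc.parallelize(Seq(
  (1L, new busstop("Central", "bus", "S-01", 40): BusStopNode),
  (2L, new busstop("Harbour", "bus", "S-02", 25): BusStopNode)
))
val routes = sc.parallelize(Seq(Edge(1L, 2L, "route-7")))

val network: Graph[BusStopNode, String] = Graph(stops, routes)
network.vertices.collect().foreach { case (id, stop) => println(s"$id -> ${stop.name}") }
```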
Hi,
I would like to know how to compare GraphX triplets in Scala.
For example, there are two triplet sets:
val triplet1 = mainGraph.triplets.filter(condition1)
val triplet2 = mainGraph.triplets.filter(condition2)
Now I want to compare triplet1 and triplet2 with condition3.
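Since graph.triplets returns an ordinary RDD, the two filtered sets can be compared with standard RDD operations, e.g. by keying each set on its edge endpoints and joining. A sketch, assuming condition3 is a predicate over a pair of matched triplets:

```scala
// Key each triplet set by its (srcId, dstId) endpoints so the RDDs can be joined
val keyed1 = triplet1.map(t => ((t.srcId, t.dstId), t))
val keyed2 = triplet2.map(t => ((t.srcId, t.dstId), t))

// Edges present in both sets; apply condition3 to each matched pair
val matched = keyed1.join(keyed2)
  .filter { case (_, (t1, t2)) => condition3(t1, t2) }
```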
Hi,
I am getting a null pointer exception when I execute a triplet loop inside
another triplet loop.
This works fine on its own:
for (mainTriplet <- mainGraph.triplets) {
println(mainTriplet.dstAttr.name)
}
This also works fine on its own:
for (subTriplet <- subGrapgh.triplets) {
println(subTriplet.dstAttr.name)
}
But nesting the second loop inside the first throws a NullPointerException.
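The NPE is expected: Spark does not support nesting one RDD operation (the inner triplets loop) inside another running on the executors. A common workaround, sketched below, is to collect the smaller triplet set to the driver first:

```scala
// Materialize the smaller graph's triplets on the driver first;
// referencing one RDD inside another RDD's operation yields NPEs
val subTriplets = subGrapgh.triplets.collect()

for (mainTriplet <- mainGraph.triplets.collect()) {
  for (subTriplet <- subTriplets) {
    println(mainTriplet.dstAttr.name + " / " + subTriplet.dstAttr.name)
  }
}
```

This only works when the collected sets fit in driver memory; for large graphs, a join between the two triplet RDDs on a shared key is the scalable alternative.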