Hi,
I am trying to cluster the words of some articles. I used TFIDF and Word2Vec in
Spark to get a vector for each word, and I used KMeans to cluster the
words. Now, is there any way to get back the words from the vectors? I want
to know which words are in each cluster.
I am aware that TFIDF does
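One way, in case it helps: Word2VecModel keeps the word-to-vector mapping, so each word can be pushed through the trained KMeansModel and grouped by the predicted cluster id. A minimal sketch, assuming an MLlib version that exposes Word2VecModel.getVectors; the model names are placeholders (for TFIDF there is no built-in inverse, so there you would keep your own (word, vector) pairing):

    import org.apache.spark.mllib.clustering.KMeansModel
    import org.apache.spark.mllib.feature.Word2VecModel
    import org.apache.spark.mllib.linalg.Vectors

    // word2vecModel and kmeansModel are assumed to be the already-trained models.
    def wordsPerCluster(word2vecModel: Word2VecModel,
                        kmeansModel: KMeansModel): Map[Int, Seq[String]] = {
      // getVectors preserves the word -> vector mapping, so the words are never lost.
      val wordToCluster: Seq[(String, Int)] =
        word2vecModel.getVectors.toSeq.map { case (word, vec) =>
          (word, kmeansModel.predict(Vectors.dense(vec.map(_.toDouble))))
        }
      // Invert the mapping: cluster id -> all words assigned to it.
      wordToCluster.groupBy(_._2).mapValues(_.map(_._1))
    }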
Hi,
I am running a Spark job. I get the output correctly but when I see the
logs file I see the following:
AbstractLifeCycle: FAILED.: java.net.BindException: Address already in
use...
What could be the reason for this?
Thank You
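If the bind failure comes from the Spark web UI (port 4040 by default, and already held by a running spark-shell), Spark normally just retries the next port and the job still succeeds, which matches the output being correct. A minimal sketch, assuming you want to pin the UI to a known free port; the application name and port number are only examples:

    import org.apache.spark.{SparkConf, SparkContext}

    // Pin the web UI to a port that is known to be free so it does not
    // collide with an already-running spark-shell holding 4040.
    val conf = new SparkConf()
      .setAppName("MyJob")            // hypothetical application name
      .set("spark.ui.port", "4041")   // example port; any free port works
    val sc = new SparkContext(conf)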
I had the Spark Shell running throughout. Is it because of that?
On Tue, Jan 20, 2015 at 9:47 AM, Ted Yu wrote:
> Was there another instance of Spark running on the same machine ?
>
> Can you pastebin the full stack trace ?
>
> Cheers
>
> On Mon, Jan 19, 2015 at 8:11 PM,
: stopped
o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
15/01/17 14:33:39 INFO ContextHandler: stopped
o.e.j.s.ServletContextHandler{/,null}
15/01/17 14:33:39 INFO ContextHandler: stopped
o.e.j.s.ServletContextHandler{/static,null}
..
On Tue, Jan 20, 2015 at 9:52 AM, Deep Pradhan
wrote:
> On Mon, Jan 19, 2015 at 8:33 PM, Deep Pradhan
> wrote:
>
>> Hi Ted,
>> When I run the same job with a small data set, it works fine. But
>> when I run it with a relatively bigger data set, it gives me
>> OutOfMemoryError: GC overhead limit exceeded.
>
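A common first step for "GC overhead limit exceeded" on bigger inputs is to give the executors more memory and to read the data with more partitions, so each task keeps less in memory at once. A minimal sketch, assuming standalone mode; the app name, memory size, path, and partition count are placeholders (driver memory usually has to be set when the JVM is launched, e.g. via spark-submit, rather than in code):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("BiggerDataJob")              // hypothetical application name
      .set("spark.executor.memory", "4g")       // placeholder size
    val sc = new SparkContext(conf)

    // More partitions keep each task's working set smaller.
    val lines = sc.textFile("hdfs:///path/to/input", 64) // placeholder path and partition count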
Hi,
Is there a better programming construct than a while loop in Spark?
Thank You
Hi All,
Gordon SC has Spark installed on it. Has anyone tried to run Spark jobs on
Gordon?
Thank You
Hi,
Is there a better operation than union? I am using union, and the cluster
gets stuck with a large data set.
Thank you
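For what it's worth, union itself does no shuffle; it just concatenates partitions, so unioning many RDDs (or unioning repeatedly) piles up partitions and lineage. A minimal sketch of combining several RDDs in one call and then consolidating the partitions; the names and partition count are placeholders:

    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    // rdds is a hypothetical sequence of RDDs to be combined.
    def combine(sc: SparkContext, rdds: Seq[RDD[String]]): RDD[String] = {
      val combined = sc.union(rdds)   // one union over all inputs
      combined.coalesce(64)           // placeholder partition count; avoids many tiny tasks
    }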
The cluster hangs.
On Mon, Feb 2, 2015 at 11:25 AM, Jerry Lam wrote:
> Hi Deep,
>
> what do you mean by stuck?
>
> Jerry
>
>
> On Mon, Feb 2, 2015 at 12:44 AM, Deep Pradhan
> wrote:
>
>> Hi,
>> Is there any better operation than Union. I am using unio
, 2015 at 11:53 AM, Jerry Lam wrote:
> Hi Deep,
>
> How do you know the cluster is not responsive because of "Union"?
> Did you check the spark web console?
>
> Best Regards,
>
> Jerry
>
>
> On Mon, Feb 2, 2015 at 1:21 AM, Deep Pradhan
> wrote:
>
>
wrote:
> Hi Deep,
>
> What is your configuration and what is the size of the 2 data sets?
>
> Thanks
> Arush
>
> On Mon, Feb 2, 2015 at 11:56 AM, Deep Pradhan
> wrote:
>
>> I did not check the console because once the job starts I cannot run
>> anything els
Hi,
Can Spark Job Server be used for profiling Spark jobs?
I read somewhere about Gatling. Can that be used to profile Spark jobs?
On Fri, Feb 6, 2015 at 10:27 AM, Kostas Sakellis
wrote:
> Which Spark Job server are you talking about?
>
> On Thu, Feb 5, 2015 at 8:28 PM, Deep Pradhan
> wrote:
>
>> Hi,
>> Can Spark Job Server
job is slow? Gatling seems to be a load-generating framework, so I'm not
> sure how you'd use it (I've never used it before). Spark runs on the JVM, so
> you can use any JVM profiling tool like YourKit.
>
> Kostas
>
> On Thu, Feb 5, 2015 at 9:03 PM, Deep Pradhan
>
I have a single node Spark standalone cluster. Will this also work for my
cluster?
Thank You
On Fri, Feb 6, 2015 at 11:02 AM, Mark Hamstra
wrote:
>
> https://cwiki.apache.org/confluence/display/SPARK/Profiling+Spark+Applications+Using+YourKit
>
> On Thu, Feb 5, 2015 at 9:18 PM,
Hi,
When we submit a PR on GitHub, various tests are performed,
like the RAT test and the Scala style test, and beyond these many other
tests which run for a longer time.
Could anyone please direct me to the details of the tests that are
performed there?
Thank You
Hi,
Is the implementation of All-Pairs Shortest Path in GraphX for directed
graphs or undirected graphs? When I use the algorithm with my dataset, it
assumes that the graph is undirected.
Has anyone come across this before?
Thank you
Hi,
I am using the YourKit tool to profile Spark jobs that run on my single node
Spark cluster.
When I see the YourKit UI Performance Charts, the thread count always
remains at
All threads: 34
Daemon threads: 32
Here are my questions:
1. My system can run only 4 threads simultaneously, and obvious
o. Your executor probably takes as many threads as
> cores in both cases, 4.
>
>
> On Sat, Feb 7, 2015 at 10:14 AM, Deep Pradhan
> wrote:
> > Hi,
> > I am using YourKit tool to profile Spark jobs that is run in my Single
> Node
> > Spark Cluster.
> &
Hi,
I have been running some jobs on my local single node standalone cluster.
I am varying the number of worker instances for the same job, and the time
taken for the job to complete increases as the number of workers increases.
I repeated some experiments varying the number of nodes in a cluster too an
Hi,
Has some performance prediction work been done on Spark?
Thank You
takes, etc.
>
> On Sat, Feb 21, 2015 at 2:37 PM, Deep Pradhan
> wrote:
> > Hi,
> > I have been running some jobs in my local single node stand alone
> cluster. I
> > am varying the worker instances for the same job, and the time taken for
> the
> > job to comp
8:22 PM, Ted Yu wrote:
> Can you be a bit more specific ?
>
> Are you asking about performance across Spark releases ?
>
> Cheers
>
> On Sat, Feb 21, 2015 at 6:38 AM, Deep Pradhan
> wrote:
>
>> Hi,
>> Has some performance prediction work been done on Spark?
>>
>> Thank You
>>
>>
>
Like, without having a 10-node cluster, can I know the behavior of
the application on a 10-node cluster by having a single node with 10
workers? The time taken may vary, but I am talking about the behavior. Can
we say that?
On Sat, Feb 21, 2015 at 8:21 PM, Deep Pradhan
wrote:
> Yes, I am talk
e you to pay more overhead of managing so many small
> >> tasks, for no speed up in execution time.
> >>
> >> Can you provide any more specifics though? you haven't said what
> >> you're running, what mode, how many workers, how long it takes, etc.
> >
?
>
> Bottom line, you wouldn't use multiple workers on one small standalone
> node. This isn't a good way to estimate performance on a distributed
> cluster either.
>
> On Sat, Feb 21, 2015 at 3:11 PM, Deep Pradhan
> wrote:
> > No, I just have a single node stan
in performance, right?
Thank You
On Sat, Feb 21, 2015 at 8:52 PM, Deep Pradhan
wrote:
> Yes, I have decreased the executor memory.
> But, if I have to do this, then I have to tweak the code
> corresponding to each configuration, right?
>
> On Sat, Feb 21, 2015 at 8:4
So, if I keep the number of instances constant and increase the degree of
parallelism in steps, can I expect the performance to increase?
Thank You
On Sat, Feb 21, 2015 at 9:07 PM, Deep Pradhan
wrote:
> So, with the increase in the number of worker instances, if I also
> increase the deg
Yes, exactly.
On Sun, Feb 22, 2015 at 9:10 AM, Ognen Duzlevski
wrote:
> On Sat, Feb 21, 2015 at 8:54 AM, Deep Pradhan
> wrote:
>
>> No, I am talking about some work parallel to prediction works that are
>> done on GPUs. Like say, given the data for smaller number of nodes
close. The actual observed improvement is very algorithm-dependent,
> though; for instance, some ML algorithms become hard to scale out past a
> certain point because the increase in communication overhead outweighs the
> increase in parallelism.
>
> On Sat, Feb 21, 2015 at 8:1
the same way?
Thank You
On Sun, Feb 22, 2015 at 10:02 AM, Deep Pradhan
wrote:
> >> So increasing Executors without increasing physical resources
> If I have a 16 GB RAM system and then I allocate 1 GB for each executor,
> and give number of executors as 8, then I am increasing the
Has anyone done any work on that?
On Sun, Feb 22, 2015 at 9:57 AM, Deep Pradhan
wrote:
> Yes, exactly.
>
> On Sun, Feb 22, 2015 at 9:10 AM, Ognen Duzlevski <
> ognen.duzlev...@gmail.com> wrote:
>
>> On Sat, Feb 21, 2015 at 8:54 AM, Deep Pradhan
>> wrote:
>&g
Hi,
If I repartition my data by a factor equal to the number of worker
instances, will the performance be better or worse?
As far as I understand, the performance should be better, but in my case it
is getting worse.
I have a single node standalone cluster; is it because of this?
Am I guaranteed t
t to the total # of task slots in the Executors.
>
> If you're running on a single node, shuffle operations become almost free
> (because there's no network movement), so don't read into any performance
> metrics you've collected to extrapolate what may happen at scale.
&
ally over subscribe this. So if you have 10 free CPU cores,
> set num_cores to 20.
>
>
> On Monday, February 23, 2015, Deep Pradhan
> wrote:
>
>> How is task slot different from # of Workers?
>>
>>
>> >> so don't read into any performance metrics you
Here, I wanted to ask a different thing, though.
Let me put it this way.
What is the relationship between the performance of a Spark job and the
number of cores in a standalone single node Spark cluster?
Thank You
On Tue, Feb 24, 2015 at 8:39 AM, Deep Pradhan
wrote:
> You m
Hi,
I have just signed up for Amazon AWS because I learnt that it provides
free service for the first 12 months.
I want to run Spark on an EC2 cluster. Will they charge me for this?
Thank You
e types of machine that you
> launched, but not on the utilisation of machine.
>
> Hope it would help.
>
> Cheers
> Gen
>
>
> On Tue, Feb 24, 2015 at 3:55 PM, Deep Pradhan
> wrote:
>
>> Hi,
>> I have just signed up for Amazon AWS because I learnt that it prov
that is at all CPU
> intensive. It's for, say, running a low-traffic web service.
>
> On Tue, Feb 24, 2015 at 2:55 PM, Deep Pradhan
> wrote:
> > Hi,
> > I have just signed up for Amazon AWS because I learnt that it provides
> > service for free for the first 12 months.
> > I want to run Spark on EC2 cluster. Will they charge me for this?
> >
> > Thank You
>
paying the ~$0.07/hour to play with an
> m3.medium, which ought to be pretty OK for basic experimentation.
>
> On Tue, Feb 24, 2015 at 3:14 PM, Deep Pradhan
> wrote:
> > Thank You Sean.
> > I was just trying to experiment with the performance of Spark
> Applications
>
ndalone mode on a
> cluster, you can find more details here:
> https://spark.apache.org/docs/latest/spark-standalone.html
>
> Cheers
> Gen
>
>
> On Tue, Feb 24, 2015 at 4:07 PM, Deep Pradhan
> wrote:
>
>> Kindly bear with my questions as I am new to this.
>>
ng purposes.
> :)
>
> Thanks
> Best Regards
>
> On Tue, Feb 24, 2015 at 8:25 PM, Deep Pradhan
> wrote:
>
>> Hi,
>> I have just signed up for Amazon AWS because I learnt that it provides
>> service for free for the first 12 months.
>> I want to run Spark on EC2 cluster. Will they charge me for this?
>>
>> Thank You
>>
>
>
Has KNN classification algorithm been implemented on MLlib?
Thank You
Regards,
Deep
What should be the expected performance of Spark applications as the
number of nodes in a cluster increases, other parameters being
constant?
Thank You
Regards,
Deep
Hi,
I have four single-core machines as slaves in my cluster. I set
spark.default.parallelism to 4 and ran the SparkTC example; it took
around 26 seconds.
Now, I increased spark.default.parallelism to 8, but the performance
deteriorates: the same application takes 32 seconds.
I have read
Hi,
I am running Spark applications in GCE. I set up clusters with the
number of nodes varying from 1 to 7. The machines are single-core machines.
I set spark.default.parallelism to the number of nodes in the cluster
for each cluster. I ran the four applications available in the Spark examples
serve
> more meaningful trends and speedups.
>
> Joseph
>
> On Sat, Feb 28, 2015 at 7:26 AM, Deep Pradhan
> wrote:
>
>> Hi,
>> I am running Spark applications in GCE. I set up cluster with different
>> number of nodes varying from 1 to 7.
Hi,
I am running Spark on a single node cluster. I am able to run the example
codes in Spark, like SparkPageRank.scala and SparkKMeans.scala, with the
following command,
bin/run-example org.apache.spark.examples.SparkPageRank
Now, I want to run the PageRank.scala that is in GraphX. Do we have a
similar co
example folder so that we save on the length of
the command that we have to give in order to run algorithms on graphx?
Thank You
On Sun, Aug 3, 2014 at 1:50 AM, Ankur Dave wrote:
> At 2014-08-02 21:29:33 +0530, Deep Pradhan
> wrote:
> > How should I run graphx codes?
>
> At
example folder so that we save on the length of
the command that we have to give in order to run algorithms on graphx?
Thank You
On Sun, Aug 3, 2014 at 11:41 AM, Deep Pradhan
wrote:
> I am aware of how to run the LiveJournalPageRank
> However, I tried what Ankur had suggested, and I g
I have a single node cluster on which I have Spark running. I ran some
GraphX codes on some data set. Now, when I stop all the workers in the
cluster (sbin/stop-all.sh), the codes still run and give the answers. Why
is that? I mean, does GraphX run even without Spark coming up?
Same thing even whil
s this mean? I can work even without Spark coming up? Does the same
thing happen even if I have a multi-node cluster?
Thank You
On Sun, Aug 3, 2014 at 2:24 PM, Ankur Dave wrote:
> At 2014-08-03 13:14:52 +0530, Deep Pradhan
> wrote:
> > I have a single node cluster on which I have Sp
Is there any way to time the execution of GraphX codes?
Thank You
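One simple way, since GraphX operations are ordinary Spark jobs underneath, is to force the computation with an action and measure wall-clock time around it. A minimal sketch, assuming the graph is already loaded and PageRank is the code being timed; the tolerance value is a placeholder:

    import org.apache.spark.graphx.Graph

    // graph is assumed to be built already, e.g. with GraphLoader.edgeListFile.
    def timePageRank(graph: Graph[Int, Int]): Unit = {
      val start = System.nanoTime()
      val ranks = graph.pageRank(0.0001).vertices  // placeholder tolerance
      ranks.count()                                // the action forces the lazy computation
      val seconds = (System.nanoTime() - start) / 1e9
      println(s"PageRank took $seconds s")
    }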
Hi,
Spark can run on top of HDFS.
Spark talks about RDDs, which do not need replication because the
partitions can be rebuilt with the help of lineage. But HDFS inherently has
replication. How do these two concepts go together?
Thank You
ication.
>
>
> On Mon, Aug 4, 2014 at 5:51 PM, Deep Pradhan
> wrote:
>
>> Hi,
>> Spark can run on top of HDFS.
>> While Spark talks about the RDDs which do not need replication because
>> the partitions can be built with the help of lineage. But, HDFS inherent
Hi,
I am using a single node Spark cluster on HDFS. While going through
the SparkPageRank.scala code, I came across the following line:
*val lines = ctx.textFile(args(0), 1)*
where args(0) is the path of the input file on HDFS, and the second
argument is the minimum split of Hadoop R
I am getting the following error while doing SPARK_HADOOP_VERSION=2.3.0
sbt/sbt package
java.io.IOException: Cannot run program
"/home/deep/spark-1.0.0/usr/lib/jvm/java-7-oracle/bin/javac": error=2, No
such file or directory
...
[error] (core/compile:compile) java.io.IOException: Cannot ru
rt
> JAVA_HOME=usr/lib/jvm/java-7-oracle'
>
>
> On Fri, Aug 15, 2014 at 10:09 PM, Deep Pradhan
> wrote:
>
>> I am getting the following error while doing SPARK_HADOOP_VERSION=2.3.0
>> sbt/sbt package
>>
>> java.io.IOException: Cannot run program
>&
Hi,
I am just playing around with the codes in Spark.
I am printing out some statements in the codes given in Spark so as to see
how they look.
Every time I change or add something to the code, I have to run the command
*SPARK_HADOOP_VERSION=2.3.0 sbt/sbt assembly*
which is tiresome at times.
Is there
Hi,
I was going through the SparkPageRank code and want to see the intermediate
steps, like the RDDs formed in the intermediate steps.
Here is a part of the code along with the lines that I added in order to
print the RDDs.
I want to print the "*parts*" in the code (denoted by the comment in Bold
l
parameter pf.parts.collect().foreach(println) *
On Sun, Aug 24, 2014 at 8:27 PM, Jörn Franke wrote:
> Hi,
>
> What kind of error do you receive?
>
> Best regards,
>
> Jörn
> On 24 August 2014 at 08:29, "Deep Pradhan" wrote:
>
> Hi,
>> I was going t
Hi,
I have an input file of a graph in the format
When I use sc.textFile, it turns the entire text file into an RDD of lines.
How can I transform the file into key-value pairs and then eventually into
a paired RDD?
Thank You
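A minimal sketch, assuming sc is the SparkContext (as in the Spark shell) and each line holds a node id and one neighbour id separated by whitespace (the exact input format is not shown above, so the split is an assumption):

    import org.apache.spark.SparkContext._   // pair-RDD functions on Spark 1.x

    // Each line is assumed to look like: "<nodeId> <neighbourId>"
    val lines = sc.textFile("hdfs:///path/to/graph.txt")   // placeholder path
    val pairs = lines.map { line =>
      val fields = line.split("\\s+")
      (fields(0), fields(1))                               // one key-value pair per line
    }
    // pairs is a pair RDD, so PairRDDFunctions such as groupByKey are available:
    val adjacency = pairs.groupByKey()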
println(parts(0)) does not solve the problem. It does not work
On Mon, Aug 25, 2014 at 1:30 PM, Sean Owen wrote:
> On Mon, Aug 25, 2014 at 7:18 AM, Deep Pradhan
> wrote:
> > When I add
> >
> > parts(0).collect().foreach(println)
> >
> > parts(1).collect().
I have the following code
*val nodes = lines.map(s =>{val fields = s.split("\\s+")
(fields(0),fields(1))}).distinct().groupByKey().cache()*
and when I print out the nodes RDD I get the following
*(4,ArrayBuffer(1))(2,ArrayBuffer(1))(3,ArrayBuffer(1))(1,ArrayBuffer(3, 2,
Hi,
I have an RDD of key-value pairs. Now I want to find the "key" whose
"values" contain the largest number of elements. How should I do that?
Basically, I want to select the key for which the number of items in the
values is the largest.
Thank You
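A minimal sketch, assuming the values are already grouped per key (for example the output of groupByKey); the key and value types are placeholders:

    import org.apache.spark.SparkContext._   // pair-RDD functions on Spark 1.x
    import org.apache.spark.rdd.RDD

    // grouped: RDD[(String, Iterable[Int])] -- hypothetical key and value types
    def keyWithMostValues(grouped: RDD[(String, Iterable[Int])]): String =
      grouped
        .mapValues(_.size)                               // key -> number of values
        .reduce((a, b) => if (a._2 >= b._2) a else b)    // keep the pair with the larger count
        ._1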
Hi,
I have the following ArrayBuffer:
*ArrayBuffer(5,3,1,4)*
Now, I want to get the number of elements in this ArrayBuffer and also the
first element of the ArrayBuffer. I used .length and .size, but they
return 1 instead of 4.
I also used .head and .last to get the first and the last
> scala> val a = ArrayBuffer(5,3,1,4)
> a: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(5, 3, 1, 4)
>
> scala> a.head
> res2: Int = 5
>
> scala> a.tail
> res3: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(3, 1, 4)
>
> scala> a.length
> res4: Int = 4
>
>
Hi,
I have the following ArrayBuffer
*ArrayBuffer(5,3,1,4)*
Now, I want to iterate over the ArrayBuffer.
What is the way to do it?
Thank You
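A minimal sketch of the usual Scala collection idioms; nothing Spark-specific is needed for this:

    import scala.collection.mutable.ArrayBuffer

    val buf = ArrayBuffer(5, 3, 1, 4)

    buf.foreach(println)          // side-effecting loop over the elements
    for (x <- buf) println(x)     // equivalent for-comprehension style
    val doubled = buf.map(_ * 2)  // build a new collection from the old one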
Hi,
Does Spark support recursive calls?
Hi,
I have an input file which consists of
I have created an RDD consisting of key-value pairs where the key is the node
id and the values are the children of that node.
Now I want to associate a byte with each node. For that I have created a
byte array.
Every time I print out the key-value pair in th
Hi,
I have an array of bytes and I have filled the array with 0 in all the
positions.
*var Array = Array.fill[Byte](10)(0)*
Now, if certain conditions are satisfied, I want to change some elements of
the array to 1 instead of 0. If I run,
*if (Array.apply(index)==0) Array.apply(index) = 1*
it
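In Scala, arr(index) = value is the assignment form (it desugars to arr.update(index, value)), while apply only reads; a minimal sketch with hypothetical names:

    val flags = Array.fill[Byte](10)(0)   // ten positions, all 0

    val index = 3                         // hypothetical index
    if (flags(index) == 0) {
      flags(index) = 1                    // same as flags.update(index, 1)
    }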
Hi,
I have "s" as an Iterable of String.
I also have "arr" as an array of bytes. I want to set the 's' position of
the array 'arr' to 1.
In short, I want to do
arr(s) = 1 // algorithmic notation
I tried the above but I am getting type mismatch error
How should I do this?
Thank You
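If the strings in 's' are textual node ids, they need to be converted to integer indices before they can index into the byte array; a minimal sketch under that assumption, with hypothetical contents:

    val arr = Array.fill[Byte](10)(0)
    val s: Iterable[String] = Seq("2", "5", "7")   // hypothetical contents

    // Convert each string to an Int index, then mark that position.
    s.foreach { idStr =>
      arr(idStr.toInt) = 1
    }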
I want to create temporary variables in Spark code.
Can I do this?
for (i <- num)
{
val temp = ..
{
do something
}
temp.unpersist()
}
Thank You
Best Regards
>
> On Thu, Sep 11, 2014 at 3:26 PM, Deep Pradhan
> wrote:
>
>> I want to create temporary variables in Spark code.
>> Can I do this?
>>
>> for (i <- num)
>> {
>> val temp = ..
>>{
>>do something
>>}
>> temp.unpersist()
>> }
>>
>> Thank You
>>
>
>
There is one thing that I am confused about.
Spark's code is implemented in Scala. Now, can we run any
Scala code on the Spark framework? What will be the difference in the
execution of the Scala code on normal systems and on Spark?
The reason for my question is the following:
I had
e abstractions introduced by Spark.
>
> An Int is just a Scala Int. You can't call unpersist on Int in Scala, and
> that doesn't change in Spark.
>
> On Fri, Sep 12, 2014 at 12:33 PM, Deep Pradhan
> wrote:
>
>> There is one thing that I am confused about.
>> Spar
ark is an application as far as
> scala is concerned - there is no compilation (except of course, the scala,
> JIT compilation etc).
>
> On Fri, Sep 12, 2014 at 8:04 PM, Deep Pradhan
> wrote:
>
>> I know that unpersist is a method on RDD.
>> But my confusion is that, w
2/technical-sessions/presentation/zaharia
>
> On Sat, Sep 13, 2014 at 12:06 AM, Deep Pradhan
> wrote:
>
>> Take for example this:
>> I have declared one queue *val queue = Queue.empty[Int]*, which is a
>> pure scala line in the program. I actually want the queue to be a
)*
*val rootNode = nodeSizeTuple.top(1)(Ordering.by(f => f._2))*
The nodeSizeTuple is an RDD, but rootNode is an array. Here I have used only
RDD operations, but I am getting an array.
What about this case?
On Sat, Sep 13, 2014 at 11:45 AM, Deep Pradhan
wrote:
> Is it always true that whenever we apply op
Hi,
We all know that RDDs are immutable.
There are not enough operations to achieve anything and everything on
RDDs.
Take, for example, this:
I want an Array of Bytes filled with zeros, which should change during the
program. Some elements of that Array should change to 1.
If I make an RDD with
Hi,
I want to make the following change to the RDD (create a new RDD from the
existing one to reflect some transformation):
In an RDD of key-value pairs, I want to get the keys for which the values
are 1.
How do I do this using map()?
Thank You
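This is really a filter followed by taking the keys, rather than a plain map; a minimal sketch with placeholder types (the extra import is needed for the pair-RDD helpers on Spark 1.x):

    import org.apache.spark.SparkContext._   // pair-RDD functions on Spark 1.x
    import org.apache.spark.rdd.RDD

    // pairs: RDD[(String, Int)] -- hypothetical types
    def keysWithValueOne(pairs: RDD[(String, Int)]): RDD[String] =
      pairs.filter { case (_, v) => v == 1 }.keys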
Hi,
Is it always possible to get one RDD from another?
For example, if I do a *top(K)(Ordering)*, I get an Int, right? (In my
example the type is Int.) I do not get an RDD.
Can anyone explain this to me?
Thank You
Can we iterate over an RDD of Iterable[String]? How do we do that?
The entire Iterable[String] seems to be a single element of the RDD.
Thank You
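Each Iterable[String] is indeed a single element of the RDD; flatMap flattens the inner collections so that every String becomes its own element, while foreach can walk each inner collection in place on the executors. A minimal sketch with hypothetical names:

    import org.apache.spark.rdd.RDD

    // groups: RDD[Iterable[String]] -- hypothetical input
    def flatten(groups: RDD[Iterable[String]]): RDD[String] =
      groups.flatMap(identity)                         // one String per element

    def printAll(groups: RDD[Iterable[String]]): Unit =
      groups.foreach(group => group.foreach(println))  // runs on the executors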
Hi,
Can Spark achieve whatever GraphX can?
Keeping aside the performance comparison between Spark and GraphX, if I
want to implement any graph algorithm and I do not want to use GraphX, can
I get the work done with Spark?
Thank You
Hi,
The collect method returns an Array. If I have a huge set of data and I do
something like the following:
*val rdd2 = rdd1.mapValues(v => 0).collect* // where rdd1 is some key-value
pair RDD
As per my understanding, this will return an Array[(String, Int)], and if my
data is huge this will return
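If the result only needs to be walked on the driver, toLocalIterator pulls it one partition at a time instead of materializing the whole Array at once; and if the next step is another Spark operation, skipping collect entirely and staying with the RDD is better still. A minimal sketch with hypothetical names:

    import org.apache.spark.SparkContext._   // pair-RDD functions on Spark 1.x
    import org.apache.spark.rdd.RDD

    // rdd1: RDD[(String, Int)] -- hypothetical key-value RDD
    def zeroValues(rdd1: RDD[(String, Int)]): Unit = {
      val rdd2 = rdd1.mapValues(_ => 0)      // stays distributed, no Array on the driver

      // If the driver really has to see the data, stream it partition by partition:
      rdd2.toLocalIterator.foreach(println)
    }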
Has anyone implemented Queues using RDDs?
Thank You
Hi,
Can we pass RDD to functions?
Like, can we do the following?
*def func (temp: RDD[String]):RDD[String] = {*
*//body of the function*
*}*
Thank You
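Yes; an RDD is an ordinary Scala value, so it can be passed to and returned from functions once RDD is imported. A minimal sketch that fills in a hypothetical body:

    import org.apache.spark.rdd.RDD

    def func(temp: RDD[String]): RDD[String] = {
      // Hypothetical body: keep only non-empty lines and upper-case them.
      temp.filter(_.nonEmpty).map(_.toUpperCase)
    }

    // Usage: val cleaned = func(sc.textFile("hdfs:///path/to/input"))  // placeholder path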
Hi,
I am using Spark 1.0.0 and Scala 2.10.3.
I want to use toLocalIterator in my code, but the Spark shell says
*not found: value toLocalIterator*
I also did import org.apache.spark.rdd, but even after this the shell says
*object toLocalIterator is not a member of package org.apache.spark.rdd*
; method of an existing RDD if you have one.
>
> - Patrick
>
> On Thu, Nov 13, 2014 at 10:21 PM, Deep Pradhan
> wrote:
> > Hi,
> >
> > I am using Spark 1.0.0 and Scala 2.10.3.
> >
> > I want to use toLocalIterator in a code but the spark shell tells
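As the reply above says, toLocalIterator is a method on an RDD instance, not a standalone value or a member of the org.apache.spark.rdd package; a minimal sketch, assuming sc is the SparkContext from the shell:

    val rdd = sc.parallelize(1 to 10)
    rdd.toLocalIterator.foreach(println)  // streams the elements to the driver, one partition at a time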
How to create an empty RDD in Spark?
Thank You
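A minimal sketch of one way that works on any 1.x release, assuming sc is the SparkContext; parallelizing an empty collection gives an empty RDD of a concrete element type (some versions also provide a dedicated sc.emptyRDD helper, so check the API docs of your release):

    // An empty RDD of Strings, built from an empty local collection.
    val empty = sc.parallelize(Seq.empty[String])
    println(empty.count())   // 0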
. Is there another way to do this?
Thank you
On Fri, Nov 14, 2014 at 3:39 PM, Deep Pradhan
wrote:
> How to create an empty RDD in Spark?
>
> Thank You
>
Hi,
Is there any way to know which of my functions performs better in Spark? In
other words, say I have achieved the same thing using two different
implementations. How do I judge which implementation is better than
the other? Is processing time the only metric that we can use to claim the
goodnes
Hi,
I was going through the GraphX section of the Spark API at
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.lib.ShortestPaths$
Here, I find the word "landmark". Can anyone explain to me what "landmark"
means? Is it a simple English word or does it mean somethi
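For context on how that parameter is used: the landmarks are the target vertex ids that shortest-path distances are computed to, and each vertex ends up carrying a map from landmark id to distance. A minimal sketch, assuming sc is the SparkContext and a GraphX version that ships org.apache.spark.graphx.lib.ShortestPaths; the path and vertex ids are placeholders:

    import org.apache.spark.graphx.GraphLoader
    import org.apache.spark.graphx.lib.ShortestPaths

    val graph = GraphLoader.edgeListFile(sc, "hdfs:///path/to/edges.txt")  // placeholder path
    val landmarks = Seq(1L, 4L)                   // hypothetical landmark vertex ids
    val result = ShortestPaths.run(graph, landmarks)

    // Each vertex now carries a Map[VertexId, Int] of distances to the landmarks.
    result.vertices.collect().foreach(println)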
Hi,
I just ran the PageRank code in GraphX with some sample data. What I am
seeing is that the total rank changes drastically if I change the number of
iterations from 10 to 100. Why is that so?
Thank You
Hi,
I am using Spark-1.0.0. There are two GraphX directories that I can see here:
1. spark-1.0.0/examples/src/main/scala/org/apache/spark/examples/graphx,
which contains LiveJournalPageRank.scala
2. spark-1.0.0/graphx/src/main/scala/org/apache/spark/graphx/lib, which
contains Analy
So "landmark" can contain just one vertex right?
Which algorithm has been used to compute the shortest path?
Thank You
On Tue, Nov 18, 2014 at 2:53 PM, Ankur Dave wrote:
> At 2014-11-17 14:47:50 +0530, Deep Pradhan
> wrote:
> > I was going through the graphx secti
There are no vertices with zero outdegree.
The total rank for the graph with numIter = 10 is 4.99, and for the graph
with numIter = 100 it is 5.99.
I do not know why there is so much variation.
On Tue, Nov 18, 2014 at 3:22 PM, Ankur Dave wrote:
> At 2014-11-18 12:02:52 +0530, Deep Pradhan
> wrote:
>
Does Bellman-Ford give the best solution?
On Tue, Nov 18, 2014 at 3:27 PM, Ankur Dave wrote:
> At 2014-11-18 14:59:20 +0530, Deep Pradhan
> wrote:
> > So "landmark" can contain just one vertex right?
>
> Right.
>
> > Which algorithm has been used t
=EdgePartition2D*
Now, how do I run the LiveJournalPageRank.scala that is there in 1?
On Tue, Nov 18, 2014 at 2:51 PM, Deep Pradhan
wrote:
> Hi,
> I am using Spark-1.0.0. There are two GraphX directories that I can see
> here
>
> 1. spark-1.0.0/examples/src/main/scala/org/apache/spark/