Hi,

I am trying a transformation by calling a Scala method that returns MutableList[AvroObject]:

def processRecords(id: String, list1: Iterable[(String, GenericRecord)]):
    scala.collection.mutable.MutableList[AvroObject]

Hence, the output of the transformation is RDD[MutableList[AvroObject]].
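A common follow-up to this situation is flattening the nested result so downstream operations see individual records rather than lists. Spark's RDDs provide flatMap for exactly this. As a hedged illustration, here is the same flattening idea in plain Java streams so it runs outside Spark; the String elements and sample data below are made up and merely stand in for AvroObject values returned by processRecords:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlattenSketch {
    public static void main(String[] args) {
        // Each inner list stands in for one MutableList[AvroObject]
        // produced by processRecords for a single key (sample data).
        List<List<String>> perKeyResults = List.of(
                List.of("rec1", "rec2"),
                List.of("rec3"));

        // flatMap collapses a collection-of-collections into a flat
        // collection; Spark's RDD/JavaRDD API offers the same operation,
        // turning RDD[MutableList[AvroObject]] into RDD[AvroObject].
        List<String> flattened = perKeyResults.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());

        System.out.println(flattened); // prints [rec1, rec2, rec3]
    }
}
```

The same shape applies on the cluster: calling flatMap on the transformed RDD yields one element per AvroObject instead of one element per list.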
Hi All,

I am trying to run the Kafka Word Count program; please find the link below:

https://github.com/apache/spark/blob/master/examples/scala-2.10/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java

I have set the Spark master to setMaster("local[*]") and I have sta
append spark to my log.text file.

The Spark program gives the output as

spark 1

which should be spark 3. So how do I handle this in Spark code?

Thanks and regards
Shweta Jadhav
-Sean Owen wrote: -
To: Jadhav Shweta
From: Sean Owen
Date: 02/02/2015 04:13PM
Subject: Re: Java Kafka Word Count Issue

es with the previous running count to get the new count
Thanks and regards
Shweta Jadhav
-VISHNU SUBRAMANIAN wrote: -
To: Jadhav Shweta
From: VISHNU SUBRAMANIAN
Date: 02/02/2015 04:39PM
Cc: "user@spark.apache.org"
Subject: Re: Java Kafka Word Count Issue
You can use updateStateByKey.
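updateStateByKey is the Spark Streaming operation for keeping a running count across batches: for each key, the new per-batch values are combined with the previously stored state. As a sketch of that update logic only, here is the merge expressed in plain Java with a HashMap so it runs outside Spark; the three micro-batches below are made-up sample data mirroring the "spark appears once per batch" scenario from the question:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RunningCountSketch {
    // Mimics the update function passed to updateStateByKey:
    // each key's new per-batch count is added to its running count.
    static void updateState(Map<String, Integer> state, Map<String, Integer> batch) {
        batch.forEach((word, count) -> state.merge(word, count, Integer::sum));
    }

    public static void main(String[] args) {
        Map<String, Integer> runningCounts = new HashMap<>();

        // Three micro-batches, each seeing the word "spark" once.
        List<Map<String, Integer>> batches = List.of(
                Map.of("spark", 1),
                Map.of("spark", 1),
                Map.of("spark", 1));

        for (Map<String, Integer> batch : batches) {
            updateState(runningCounts, batch);
        }

        // Without state, each batch alone reports "spark 1";
        // carrying the running count forward yields "spark 3".
        System.out.println("spark " + runningCounts.get("spark")); // prints spark 3
    }
}
```

In the actual streaming job, updateStateByKey takes a function of this shape (new values plus optional previous state) and requires checkpointing to be enabled so the state survives failures.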
Hi,

I am running a streaming word count program on a Spark standalone-mode cluster
with four machines.

public final class JavaKafkaStreamingWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");
    static transient Configuration conf;
    private JavaKa