Hi,
We use EC2 to run batch Spark jobs that filter and process our data, and
sometimes we need to replace a host or deploy a new fleet. Since we run the
driver in cluster mode, losing the host it runs on is detrimental. We also
use some native code to make sure our table is modified by only […]ing
information.
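If the failure mode is the driver dying with its host: in Spark standalone
(or Mesos) cluster mode the driver can be restarted automatically with
--supervise, and on YARN spark.yarn.maxAppAttempts plays a similar role.
A minimal sketch of the submit command, with the master URL, main class,
and jar path as placeholders:

  spark-submit \
    --master spark://master-host:7077 \
    --deploy-mode cluster \
    --supervise \
    --class com.example.BatchJob \
    s3://example-bucket/jobs/batch-job.jar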
>
> Thanks,
> Wenchen
>
Thanks Andrew!
It seems there is a drastic change in 3.0, going through it.
-Aakash
On Tue, Dec 17, 2019 at 11:01 AM Andrew Melo wrote:
> Hi Aakash
>
Hi Spark dev folks,
First of all, kudos on the new Data Source V2: the API looks simple and
makes it easy to develop and use a new data source.
In my current work, I am trying to implement a new Data Source V2 writer
with Spark 2.3, and I was wondering how I will get the info about partition
by c[…]
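For readers of the archive, a minimal sketch of the Spark 2.3 V2 write
path. In 2.3, createWriter() receives the schema, save mode, and options
but not the partitionBy() columns, so one workaround is to pass them in
as a writer option; the "partitionColumns" key and all Example* names are
placeholders of this sketch, not Spark API:

  import java.util.Optional

  import org.apache.spark.sql.{Row, SaveMode}
  import org.apache.spark.sql.sources.v2.{DataSourceOptions, DataSourceV2, WriteSupport}
  import org.apache.spark.sql.sources.v2.writer._
  import org.apache.spark.sql.types.StructType

  // Entry point used by df.write.format("com.example.ExampleSink").
  class ExampleSink extends DataSourceV2 with WriteSupport {
    override def createWriter(jobId: String, schema: StructType, mode: SaveMode,
        options: DataSourceOptions): Optional[DataSourceWriter] = {
      // Assumed convention: .option("partitionColumns", "date,country").
      val partitionCols = options.get("partitionColumns").orElse("")
        .split(",").filter(_.nonEmpty).toSeq
      Optional.of(new ExampleWriter(schema, partitionCols))
    }
  }

  class ExampleWriter(schema: StructType, partitionCols: Seq[String])
      extends DataSourceWriter {
    override def createWriterFactory(): DataWriterFactory[Row] = new ExampleWriterFactory
    override def commit(messages: Array[WriterCommitMessage]): Unit = { /* finalize */ }
    override def abort(messages: Array[WriterCommitMessage]): Unit = { /* clean up */ }
  }

  class ExampleWriterFactory extends DataWriterFactory[Row] {
    override def createDataWriter(partitionId: Int, attemptNumber: Int): DataWriter[Row] =
      new ExampleDataWriter
  }

  class ExampleDataWriter extends DataWriter[Row] {
    override def write(row: Row): Unit = { /* buffer or write out the row */ }
    override def commit(): WriterCommitMessage = new WriterCommitMessage {}
    override def abort(): Unit = ()
  }

In the DSv2 redesign that shipped with Spark 3.0, a connector's Table can
declare its partitioning explicitly (Table.partitioning()), which is
presumably part of the "drastic change" mentioned above.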
[…] slim at this point; it's a year out of date and wouldn't be a drop-in
dependency change.

On Tue, Nov 15, 2016 at 5:50 PM, aakash aakash wrote:
>
>> You can use the 0.8 artifact to consume from a 0.9 broker
>
> We ar[…]
[…]development at this point.
>
> You can use the 0.8 artifact to consume from a 0.9 broker
>
> Where are you reading documentation indicating that the direct stream
> only runs on the driver? It runs consumers on the worker nodes.
>
> On Tue, Nov 15, 2016 at 10:58 AM, […]
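For what it's worth, the worker-side consumption described above is easy
to see in the 0.10 integration. A minimal sketch, assuming a 0.10+ broker;
the broker address, group id, and topic name are placeholders:

  import org.apache.kafka.common.serialization.StringDeserializer
  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

  object DirectStreamExample {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("kafka-0-10-direct-stream")
      val ssc = new StreamingContext(conf, Seconds(5))

      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "broker1:9092",      // placeholder
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "example-group",              // placeholder
        "auto.offset.reset" -> "latest",
        "enable.auto.commit" -> (false: java.lang.Boolean)
      )

      // The Kafka consumers run inside the executors, not on the driver;
      // PreferConsistent spreads topic partitions evenly across them.
      val stream = KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Subscribe[String, String](Seq("example-topic"), kafkaParams)
      )

      stream.foreachRDD { rdd =>
        rdd.foreach(record => println(s"${record.key}: ${record.value}"))
      }

      ssc.start()
      ssc.awaitTermination()
    }
  }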
Re-posting it to the dev group.
Thanks and Regards,
Aakash
-- Forwarded message --
From: aakash aakash
Date: Mon, Nov 14, 2016 at 4:10 PM
Subject: using Spark Streaming with Kafka 0.9/0.10
To: user-subscr...@spark.apache.org
Hi,
I am planning to use Spark Streaming to consume […]
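The forwarded question is cut off above, but for anyone following the
thread, the choice between integrations comes down to one build dependency.
A minimal sbt sketch (the 2.0.2 version is an assumption for this 2016-era
thread):

  // build.sbt: pick exactly one Kafka integration artifact.
  // Per the discussion above, the 0.8 artifact can consume from a 0.9
  // broker; the 0.10 artifact requires 0.10+ brokers.
  libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-streaming"            % "2.0.2" % "provided",
    "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.0.2"
  )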