You can't set it to less than 1. Just set it to max int if that's really what you want to do.
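The thread doesn't name the exact configuration key, so as a sketch only, assuming the setting in question is spark.streaming.kafka.maxRetries (the direct stream's driver-side retry count):

import org.apache.spark.SparkConf

// Assumption: spark.streaming.kafka.maxRetries is the retry setting under
// discussion. It can't go below 1, so "effectively infinite" means
// Int.MaxValue rather than -1.
val conf = new SparkConf()
  .setAppName("DirectKafkaApp")
  .set("spark.streaming.kafka.maxRetries", Int.MaxValue.toString)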
On Mon, Aug 31, 2015 at 6:00 AM, Shushant Arora wrote:
Say my cluster intermittently takes a long time to rebalance for some reason. To handle that, can I have infinite retries instead of killing the app? What should the value of retries be? Will -1 work, or something else?
On Thu, Aug 27, 2015 at 6:46 PM, Cody Koeninger wrote:
Map is lazy. You need an actual action, or nothing will happen. Use
foreachPartition, or do an empty foreach after the map.
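For example, a minimal sketch of the foreachPartition approach (the JDBC calls and the table are assumptions standing in for whatever DB layer is actually in use):

import java.sql.{Connection, DriverManager}

rdd.foreachPartition { partitionOfRecords =>
  // one connection and one transaction per partition
  val conn: Connection = DriverManager.getConnection("jdbc:...") // placeholder URL
  conn.setAutoCommit(false)
  val stmt = conn.prepareStatement("INSERT INTO results VALUES (?)") // hypothetical table
  partitionOfRecords.foreach { record =>
    stmt.setString(1, record.toString)
    stmt.addBatch()
  }
  stmt.executeBatch()
  conn.commit() // commit once per partition; foreachPartition is an action, so this runs
  conn.close()
}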
On Thu, Aug 27, 2015 at 8:53 AM, Ahmed Nawar wrote:
Dears,

I need to commit a DB transaction for each partition, not for each row. The code below didn't work for me.
rdd.mapPartitions(partitionOfRecords => {
  DBConnectionInit()
  val results = partitionOfRecords.map(..)
  DBConnection.commit()
})
Best regards,
Ahmed Atef Nawwar
Data Management
Your Kafka broker died or you otherwise had a rebalance. Normally Spark's retries take care of that. Is there something going on with your Kafka installation such that rebalance is taking especially long?
Yes, increasing backoff / max number of retries will "help", but it's better to figure out what's actually going wrong with your Kafka installation.
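If you do want to turn those knobs, a sketch of where they live (the broker list is a placeholder, and refresh.leader.backoff.ms is the Kafka 0.8 consumer setting for backing off between leader lookups; exact keys depend on your versions):

val kafkaParams = Map(
  "metadata.broker.list" -> "broker1:9092,broker2:9092", // placeholder brokers
  // wait longer between attempts to find a new leader after a rebalance
  "refresh.leader.backoff.ms" -> "2000"
)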