ould run?
Or maybe compaction doesn't delete old removed keys for some reason?
Thank you for your attention.
Cheers
Iain Cundy
Hi Vinti
I don’t program in Scala, but I think you’ve changed the meaning of the current
variable – look again at what is state and what is new data.
Assuming it works like the Java API, to use this function to maintain state you
must call State.update, while you can return anything, not just t
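A minimal sketch of that point against the Java API, assuming a
JavaPairDStream<String, Integer> named pairStream and a simple running count as
the state (not the original poster's code):

  import org.apache.spark.api.java.Optional;
  import org.apache.spark.api.java.function.Function3;
  import org.apache.spark.streaming.State;
  import org.apache.spark.streaming.StateSpec;
  import scala.Tuple2;

  // Mapping function keeping a running count per key. The state is only
  // persisted by calling state.update; the return value is just the record
  // emitted downstream and can be any type.
  Function3<String, Optional<Integer>, State<Integer>, Tuple2<String, Integer>> mappingFunc =
      (key, value, state) -> {
          int sum = value.orElse(0) + (state.exists() ? state.get() : 0);
          state.update(sum);              // keep the new state for this key
          return new Tuple2<>(key, sum);  // emitted record, not the state itself
      };

  pairStream.mapWithState(StateSpec.function(mappingFunc));

The returned value and the stored state are independent, so returning something
does not update the state; only State.update does.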
Hi Manas
I saw a very similar problem while using mapWithState – a timeout on
BlockManager block removal leading to a stall.
In my case it only occurred when there was a big backlog of micro-batches,
combined with a shortage of memory. The adding and removing of blocks between
new and old tasks was inte
Hi Abhi
The concept is what you want – if you set the StateSpec timeout to a Duration of
10 minutes then any keys not seen for more than 10 minutes will be deleted.
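As a rough sketch of that setting with the Java API (mappingFunc is a
hypothetical mapping function and pairStream an assumed JavaPairDStream, as in
the earlier sketch):

  import org.apache.spark.streaming.Durations;
  import org.apache.spark.streaming.StateSpec;

  // Keys whose state has not been updated for 10 minutes become eligible for
  // removal; the mapping function is invoked one last time for such a key
  // with state.isTimingOut() returning true.
  pairStream.mapWithState(
      StateSpec.function(mappingFunc)
               .timeout(Durations.minutes(10)));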
However you did say “exactly” and specifically mention “removed from memory”,
in which case you may be interested in the much more compl
er memory, which you can monitor in the task data in the
ApplicationMaster web UI.
Cheers
Iain
From: Ofir Kerker [mailto:ofir.ker...@gmail.com]
Sent: 07 April 2016 17:06
To: Iain Cundy; user@spark.apache.org
Subject: Re: mapWithState not compacting removed state
Hi Iain,
Did you manage to solve
Hi All
I'm trying to move from mapWithState to Structured Streaming in v2.2.1, but I've
run into a problem.
To convert Kafka data with a binary (protobuf) value to SQL, I'm taking the
dataset from readStream and doing
Dataset s = dataset.selectExpr("timestamp", "CAST(key as string)",
"ETBi