If "auto.create.topics.enable" is set to true in your configurations , any
producer/consumer or fetch request will create the topic again. Set it to
false and delete the topic.
-- Surendra Manchikanti
On Sat, Dec 10, 2016 at 10:59 AM, Todd Palino wrote:
> Are you running something else besides
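For reference, a minimal sketch of the broker-side change described above (topic name and host are placeholders; note that on 0.10.x brokers delete.topic.enable defaults to false, so deletion is a no-op unless it is enabled):

    # server.properties on each broker
    auto.create.topics.enable=false
    delete.topic.enable=true

    # then delete the topic (0.10.x-era tooling talks to ZooKeeper)
    bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic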
Windows are created on demand, i.e., each time a new record arrives and
there is no window yet for it, a new window will get created.
Windows accept data until their retention time (which you can
configure via .until()) has passed. Thus, you will have many windows
open in parallel.
If you r
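For reference, a minimal sketch of a windowed aggregation with until() against the 0.10.1-era DSL (topic name, store name, window size, and retention are assumptions, not taken from this thread):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.Windowed;

    KStreamBuilder builder = new KStreamBuilder();
    KStream<String, String> input = builder.stream("input-topic");

    // A 1-minute window is created on demand when the first record for it
    // arrives; until() keeps the window's state around for 1 hour so late
    // (out-of-order) records can still be added to it.
    KTable<Windowed<String>, Long> counts = input
        .groupByKey()
        .count(TimeWindows.of(TimeUnit.MINUTES.toMillis(1))
                          .until(TimeUnit.HOURS.toMillis(1)),
               "minute_agg_store");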
After an hour: it briefly popped up with 1 instance 'applied' to all 10
partitions... then it went back to rebalancing for 10-15 minutes... followed
by a different instance on all partitions... and then more rebalancing...
At no point (yet) have I seen the work get truly 'balanced' between all 5
instances.
I changed 'num.standby.replicas' to '2'.
I started one instance and it immediately showed up in the
'kafka-consumer-groups .. --describe' listing.
So I started a second... and it quickly displaced the first... which never
came back.
Started a third... same effect. The second goes away, never to return.
I've read this and still have more questions than answers. If my data skips
about (timewise), what determines when a given window will start / stop
accepting new data? What if I'm reading data from some time ago?
On Sun, Dec 11, 2016 at 2:22 PM, Matthias J. Sax
wrote:
> Please have a look here:
>
I moved the state folder to a separate drive and linked out to it.
I'll try your suggestion and point to it directly.
On Sun, Dec 11, 2016 at 2:20 PM, Matthias J. Sax
wrote:
> I am not sure, but this might be related with your state directory.
>
> You use default directory that is located in /tmp --
I get this one quite a bit. It kills my app after a short time of running.
Driving me nuts.
On Sun, Dec 11, 2016 at 2:17 PM, Matthias J. Sax
wrote:
> Not sure about this one.
>
> Can you describe what you do exactly? Can you reproduce the issue? We
> definitely want to investigate this.
>
> -Matthias
Yes - but not 100% repro. I seem to have several issues with startup /
rebalance
On Sun, Dec 11, 2016 at 2:16 PM, Matthias J. Sax
wrote:
> Hi,
>
> this might be a recently discovered bug. Does it happen when you
> stop/restart your application?
>
>
> -Matthias
>
> On 12/10/16 1:42 PM, Jon Yeargers
Not sure.
How big is your state? On rebalance, state stores might move from one
machine to another. To recreate the store on the new machine, the
underlying changelog topic must be read. This can take some time -- an
hour seems quite long though...
To avoid long state-recreation periods, Kafka Streams supports standby
replicas (see num.standby.replicas).
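For reference, a hedged sketch of enabling standby replicas via StreamsConfig (application id, broker address, and the replica count are examples only):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "MinuteAgg");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // Keep a warm copy of each state store on another instance so a
    // rebalance does not have to replay the whole changelog topic.
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);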
Please have a look here:
http://docs.confluent.io/current/streams/developer-guide.html#windowing-a-stream
If you have further questions, just follow up :)
-Matthias
On 12/10/16 6:11 PM, Jon Yeargers wrote:
> I've added the 'until()' clause to some aggregation steps and it's working
> wonders fo
I am not sure, but this might be related to your state directory.
You use the default directory, which is located in /tmp -- could it be that
/tmp gets cleaned up and thus you lose files/directories?
Try to reconfigure your state directory via StreamsConfig:
http://docs.confluent.io/current/streams/d
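For reference, a minimal sketch of moving the state directory out of /tmp via StreamsConfig (the path is only an example):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // Put RocksDB state somewhere that OS cleanup jobs will not delete.
    props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");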
Not sure about this one.
Can you describe what you do exactly? Can you reproduce the issue? We
definitely want to investigate this.
-Matthias
On 12/10/16 4:17 PM, Jon Yeargers wrote:
> (Am reporting these as I have moved to 0.10.1.0-cp2)
>
> ERROR o.a.k.c.c.i.ConsumerCoordinator - User provided
Hi,
this might be a recently discovered bug. Does it happen when you
stop/restart your application?
-Matthias
On 12/10/16 1:42 PM, Jon Yeargers wrote:
> This came up a few times today:
>
> 2016-12-10 18:45:52,637 [StreamThread-1] ERROR
> o.a.k.s.p.internals.StreamThread - stream-thread [Stream
Hi Rob,
Do you have any further information you can provide? Logs etc?
Have you configured max.poll.interval.ms?
Thanks,
Damian
On Sun, 11 Dec 2016 at 20:30 Robert Conrad wrote:
> Hi All,
>
> I have a relatively complex streaming application that seems to struggle
> terribly with rebalance iss
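For reference, a hedged sketch of setting max.poll.interval.ms in the client Properties, as Damian suggests checking (the value is only an example):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    Properties props = new Properties();
    // Give each instance more time between poll() calls (e.g. while it
    // restores state) before the coordinator evicts it from the group
    // and triggers yet another rebalance. 300000 ms = 5 minutes.
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);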
Hi All,
I have a relatively complex streaming application that seems to struggle
terribly with rebalance issues while under load. Does anyone have any tips
for investigating what is triggering these frequent rebalances or
particular settings I could experiment with to try to eliminate them?
Origi
I don't know about speeding up rebalancing, and an hour seems to suggest
something is wrong with ZooKeeper or your whole setup maybe. If it
becomes an unsolvable issue for you, you could try
https://github.com/gerritjvv/kafka-fast which uses a different model and
doesn't need balancing or rebalancing.
Is there some way to 'help it along'? It's taking an hour or more from when
I start my app to when anything actually gets consumed.
Plenty of CPU (and IOWait) during this time so I know it's doing
_something_...
Seeing this appearing somewhat frequently -
org.apache.kafka.streams.errors.ProcessorStateException: Error opening
store minute_agg_stream-201612100812 at location
/tmp/kafka-streams/MinuteAgg/1_9/minute_agg_stream/minute_agg_stream-201612100812
at
org.apache.kafka.streams.state.internals