did the same restoration process; after that process, thread B
continues to process data and update the state store, while at the same
time writing more messages to the changelog (so its log end offset has
incremented).
5. After a while A resumes from the long GC, not knowing it has actually been
kicked out, and then hits the exception below (a sketch of the check follows the stack trace):
java.lang.IllegalStateException: task [0_6] Log end offset of
RtDetailBreakoutProcessor-table_stream-changelog-6 should not change while
restoring: old end offset 26883455, current offset 26883467
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState
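For context, a minimal, self-contained sketch of the invariant behind this exception. It is illustrative only, not the actual ProcessorStateManager code, and readChangelogEndOffset() / replayChangelogIntoStore() are hypothetical stand-ins: the changelog end offset observed when restoration starts must still hold while the local store is being rebuilt.

// Illustrative sketch only -- not the real Kafka Streams internals.
// readChangelogEndOffset() and replayChangelogIntoStore() are hypothetical
// stand-ins for the consumer end-offset lookup and the restore loop.
final class RestoreInvariantSketch {
    static long readChangelogEndOffset() { return 26883455L; } // e.g. via a consumer end-offset lookup

    static void replayChangelogIntoStore() { /* apply changelog records to the local state store */ }

    public static void main(String[] args) {
        long endOffsetAtStart = readChangelogEndOffset(); // captured when restoration begins
        replayChangelogIntoStore();
        long endOffsetNow = readChangelogEndOffset();     // re-checked afterwards
        if (endOffsetNow != endOffsetAtStart) {
            // Another writer (e.g. a second thread that took over the task) appended
            // to the changelog in the meantime, so the restore is no longer consistent.
            throw new IllegalStateException("Log end offset should not change while restoring: "
                    + "old end offset " + endOffsetAtStart + ", current offset " + endOffsetNow);
        }
    }
}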
User provided listener
org.apache.kafka.streams.processor.internals.StreamThread$1
for group test failed on partition assignment
java.lang.IllegalStateException: Log end offset should not change while
restoring
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState
StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG
It is the only cache parameter. Do you use code auto-completion? It's also
in the docs:
http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
-Matthias
On 12/12/16 8:22 PM, Jon Yeargers wrote:
> What is the specific cache config setting?
On Mon, Dec 12, 2016 at 1:49 PM, Matthias J. Sax wrote:
We discovered a few more bugs, and a bug-fix release 0.10.1.1 is already
planned.
The voting for it has started, and it should get released in the next few
weeks.
If your issue is related to this caching problem, disabling the cache
via StreamsConfig should fix the problem for now. Just set the cache
size to zero.
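For reference, a minimal sketch of that workaround; the cache setting is the parameter named earlier in the thread, while the application id and bootstrap servers are placeholder values:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

// Minimal sketch: disable the Streams record cache by setting its size to zero.
// Application id and bootstrap servers are placeholders for your own setup.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);        // the workaround above
StreamsConfig config = new StreamsConfig(props);
// pass props (or config) to your KafkaStreams instance as usual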
I'm seeing this error occur more frequently of late. I ran across this
thread:
https://groups.google.com/forum/#!topic/confluent-platform/AH5QClSNZBw
The implication from the thread is that a fix is available. Where can I get
it?
Hi team,
I was seeing an issue where a mirror maker attempted to commit an offset
for a partition that was ahead of the log end offset while the leader of
the partition was being restarted. In theory only committed messages can
be consumed by a consumer, which means the messages received
Bytes: 1 bytes; RequestInfo: [test,0] -> PartitionFetchInfo(0,1048576)
(kafka.server.KafkaApis)
Mon Nov 23 15:06:51 UTC 2015: kafka.common.KafkaException: Should not set log
end offset on partition [test,0]'s local replica 1
Mon Nov 23 15:06:51 UTC 2015: at
kafka.cluster.Replica.logEndOffset_$eq(Replica.scala:52)
Replying 3 months later... Sounds like
http://search-hadoop.com/m/uyzND1YlKxr5XZpK , Joe. No fix yet, as far as I
know.
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
On Mon, Jun 8, 2015 at 6:33 PM, joe smith wrote:
Hi,
We have noticed an issue. We retrieve the
"kafka.log":type="Log",name="--LogEndOffset" MBean's "Value" attribute
and use it to calculate the lag from the consumer's offset.
After we did a partition re-assignment, where partition leaders were changed,
some of the new leaders' LogEndOffset values were not updated.
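As an aside, the lag calculation described above can be reproduced with a small JMX client. A minimal sketch, assuming a broker exposing JMX on port 9999 and the old 0.8.x-style MBean name; the exact name layout differs between Kafka versions, and the host, topic, partition, and consumer offset below are placeholders:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogEndOffsetLag {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX endpoint of one broker; adjust host/port to your deployment.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-1:9999/jmxrmi");
        MBeanServerConnection mbsc = JMXConnectorFactory.connect(url).getMBeanServerConnection();

        // 0.8.x-style MBean name; newer brokers expose
        // kafka.log:type=Log,name=LogEndOffset,topic=...,partition=... instead.
        ObjectName leoBean = new ObjectName(
                "\"kafka.log\":type=\"Log\",name=\"mytopic-0-LogEndOffset\"");
        long logEndOffset = ((Number) mbsc.getAttribute(leoBean, "Value")).longValue();

        long consumerOffset = 42L; // placeholder: read from wherever the consumer stores its offsets
        System.out.println("lag = " + (logEndOffset - consumerOffset));
    }
}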
> > https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
> > But we are running into ClosedChannelException for some of the topics. We
> > use Kafka for offset storage and version 0.8.2.1.
> > What is the ideal way to compute the topic log end offset?
--
Regards
Vamsi Subhash
> ...the end of a partition (consumed
> all messages) when using the high-level consumer?"
> http://search-hadoop.com/m/uyzND1Eb3e42NMCWl
>
> -James
>
> On May 10, 2015, at 11:48 PM, Achanta Vamsi Subhash <
> achanta.va...@flipkart.com> wrote:
Hi,
What is the best way for finding out the log end offset for a topic?
Currently I am using the SimpleConsumer getLastOffset logic mentioned in:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
But we are running into ClosedChannelException for some of the topics.
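For readers on newer clients than the 0.8.2.1 discussed in this thread, the Java consumer gained an endOffsets() call in 0.10.1 that returns the log end offset per partition without the manual leader lookup that trips up SimpleConsumer. A minimal sketch; the topic name and bootstrap servers are placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class TopicEndOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            consumer.partitionsFor("mytopic").forEach( // placeholder topic name
                    info -> partitions.add(new TopicPartition(info.topic(), info.partition())));
            // endOffsets() reports the offset of the next record to be written,
            // i.e. the log end offset of each partition.
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
            endOffsets.forEach((tp, offset) -> System.out.println(tp + " -> " + offset));
        }
    }
}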
> ...[test3,2] -> PartitionFetchInfo(0,1048576), [test3,19] -> PartitionFetchInfo(0,1048576),
> [test3,25] -> PartitionFetchInfo(0,1048576), [test3,20] -> PartitionFetchInfo(0,1048576),
> [test3,14] -> PartitionFetchInfo(0,1048576), [test2,0] -> ...
> ...[test3,16] -> PartitionFetchInfo(0,1048576), [test3,1] -> PartitionFetchInfo(0,1048576),
> [test3,10] -> PartitionFetchInfo(0,1048576), [test3,3] -> PartitionFetchInfo(0,1048576),
> [test3,4] -> PartitionFetchInfo(0,1048576), ...
I found some logs like this before everything started to go wrong:
...
[2014-12-02 07:08:11,722] WARN Partition [test3,13] on broker 2: No
checkpointed highwatermark is found for partition [test3,13]
(kafka.cluster.Partition)
[2014-12-02 07:08:11,722] WARN Partition [test3,7] on broker 2: No
checkpointed highwatermark is found for partition [test3,7]
(kafka.cluster.Partition)
... -> PartitionFetchInfo(0,1048576)
(kafka.server.KafkaApis)
Dec 2 07:40:17 ubuntu supervisord: kafka-broker
kafka.common.KafkaException: Should not set log end offset on partition
[test3,22]'s local replica 4
Dec 2 07:40:17 ubuntu supervisord: kafka-broker #011at
kafka.cluster.Replica.