them, a task gets assigned multiple
> partitions (from different topics).
>
>
> -Matthias
>
> On 3/9/18 2:42 PM, Stas Chizhov wrote:
> >> Also note, that the processing order might slightly differ if you
> > process the same data twice
> >
> > Is th
" against different
> processing orders (ie, if there are multiple input partitions, you might
> get data first for partition 0 and there for partition 1 or the other
> way round -- the order per partitions is guaranteed to be in offset order).
>
>
> -Matthias
>
>
>
>
Hi,
We have a Mesos-based infrastructure for running our Java-based microservices
and we'd like to use it for deploying connectors as well (with the benefit of
reusing the deployment-specific knowledge we already have, isolating the load,
and in general pretty much the same reasons Kafka Streams was designed
ks for this feature. If
> there is a JIRA, maybe somebody picks it up :)
>
>
> -Matthias
>
> On 3/3/18 6:51 AM, Stas Chizhov wrote:
> > Hi,
> >
> > There seems to be no way to commit custom metadata along with offsets
> from
> > within Kafka Streams.
> > Are there any plans to expose this functionality or have I missed
> something?
> >
> > Best regards,
> > Stanislav.
> >
>
>
Hi,
There seems to be no way to commit custom metadata along with offsets from
within Kafka Streams.
Are there any plans to expose this functionality or have I missed something?
Best regards,
Stanislav.
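For reference, the plain Java consumer already supports this through
OffsetAndMetadata, which is what Streams does not currently expose. A minimal
sketch; the topic, offset, and metadata string are made up:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CommitWithMetadata {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "metadata-demo");           // hypothetical group
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                // The metadata string is stored in __consumer_offsets next to
                // the offset and comes back via consumer.committed(partition).
                consumer.commitSync(Collections.singletonMap(
                    new TopicPartition("my-topic", 0),
                    new OffsetAndMetadata(42L, "my custom metadata")));
            }
        }
    }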
Hi, it looks like https://issues.apache.org/jira/browse/KAFKA-5970. Try
restarting broker 1.
Best regards,
Stanislav.
2017-11-10 14:00 GMT+01:00 Vitaliy Semochkin :
> Hi,
>
> I have a cluster with 3 brokers (0.11).
> When I create a topic with min.insync.replicas=2 and replication-factor 2
> I se
Hi,
I would like to mirror a topic with Avro messages from a production cluster to
a staging cluster, each cluster having its own schema registry. I would like
the messages being copied to use schemas from the target cluster's registry,
with those schemas copied from the source registry if needed.
I guess I could do it wit
Hi,
You can get lag as a metric here:
https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#metrics()
BR,
Stas.
2017-10-13 2:08 GMT+02:00 Stephen Powis :
> So I have the same use case as the original poster and had the same issue
> with the older 0.10.x cl
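For completeness, a sketch of reading the lag metric mentioned above from an
already-configured consumer. It assumes the metric name "records-lag-max"
(the fetch-manager metric) and uses Metric.value(), the accessor in clients
of that era:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;

    static void printMaxLag(KafkaConsumer<?, ?> consumer) {
        for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
            // Max record lag across the partitions this consumer is fetching.
            if ("records-lag-max".equals(e.getKey().name())) {
                System.out.println(e.getKey().group() + "/" + e.getKey().name()
                    + " = " + e.getValue().value());
            }
        }
    }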
r?
>
>
> Guozhang
>
> On Fri, Oct 6, 2017 at 1:10 PM, Stas Chizhov wrote:
>
> > Thank you!
> >
> > I guess eventually consistent reads might be a reasonable trade-off if you
> > gain the ability to serve reads without downtime in some cases.
> >
the way standby replicas are just extra consumers/processors of input
> >> topics? Or is there some custom protocol for syncing the state?
>
> We use a second consumer that reads the changelog topic (that is written
> by the active store) to update the hot standby.
>
>
> -Matthias
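For context, hot standbys are enabled purely via configuration; a minimal
sketch, with placeholder application id and servers:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    // One standby task per store: it replays the store's changelog topic
    // to maintain the shadow copy described above.
    props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);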
02]).
> When crashing other brokers there is nothing special happening: lag grows a
> little bit, but nothing crazy (e.g. thousands, not millions).
>
> Does that sound suspicious?
>
> On Fri, Oct 6, 2017 at 9:23 PM, Stas Chizhov wrote:
>
> > Ted: when choosing earliest/la
ity at some point though -- but there are no
> concrete plans atm. Contributions are always welcome of course :)
>
>
> -Matthias
>
> On 10/6/17 4:18 AM, Stas Chizhov wrote:
> > Hi
> >
> > Is there a way to serve read requests from standby replicas?
> >
ka, it commits offsets "manually" for us after the
> > event handler completed. So it's kind of automatic once there is a
> > constant stream of events (no idle time, which is true for us). Though
> > it's not what the pure kafka-client calls "automatic"
tup then?
>
> On Fri, Oct 6, 2017 at 6:58 PM, Ted Yu wrote:
>
> > Stas:
> > bq. using anything but none is not really an option
> >
> > If you have time, can you explain a bit more ?
> >
> > Thanks
> >
> > On Fri, Oct 6, 2017 at 8:55 AM, Sta
If you set auto.offset.reset to none, the next time it happens you will be in
a much better position to find out what happened. Also, in general, given the
current semantics of the offset reset policy, IMO using anything but none is
not really an option unless it is OK for the consumer to lose some data
(latest) or repro
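To illustrate what "none" buys you: a fragment, assuming a subscribed
consumer named consumer and its Properties in props:

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.NoOffsetForPartitionException;
    import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;

    props.put("auto.offset.reset", "none"); // fail loudly instead of jumping

    try {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
        // ... process records ...
    } catch (NoOffsetForPartitionException e) {
        // No committed offset exists: choose a start position explicitly
        // (seekToBeginning/seekToEnd) or alert, rather than a silent reset.
    } catch (OffsetOutOfRangeException e) {
        // The committed offset is gone (e.g. retention deleted it): this is
        // exactly the data-loss/reprocessing case that "earliest"/"latest"
        // would have hidden.
    }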
Hi
Is there a way to serve read requests from standby replicas?
StreamsMetadata does not seem to provide standby endpoints, as far as I
can see.
Thank you,
Stas
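For reference, a sketch of the metadata lookup in question, with a made-up
store name (streams is a running KafkaStreams instance). Only hosts with the
active task come back, which is the gap described above:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.state.StreamsMetadata;

    static void printStoreHosts(KafkaStreams streams) {
        for (StreamsMetadata md : streams.allMetadataForStore("counts-store")) {
            // Only hosts running the *active* task for the store show up
            // here; hosts holding standby replicas are not listed.
            System.out.println(md.hostInfo().host() + ":" + md.hostInfo().port());
        }
    }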
I would set it to Integer.MAX_VALUE
2017-10-05 19:29 GMT+02:00 Dmitriy Vsekhvalnov :
> I see, but producer.retries is set to 10 by default.
>
> What value would you recommend to survive random broker crashes?
>
> On Thu, Oct 5, 2017 at 8:24 PM, Stas Chizhov wrote:
>
> >
I don't see a retries property in the StreamsConfig
> class.
>
> On Thu, Oct 5, 2017 at 7:55 PM, Stas Chizhov wrote:
>
> > Hi
> >
> > Have you set replication.factor and retries properties?
> >
> > BR
> >
> > On Thu, 5 Oct 2017 at 18:45, Dmitriy Vsekhvalnov wrote:
Hi
Have you set replication.factor and retries properties?
BR
On Thu, 5 Oct 2017 at 18:45, Dmitriy Vsekhvalnov wrote:
> Hi all,
>
> we were testing Kafka cluster outages by randomly crashing broker nodes (1
> of 3, for instance) while still keeping a majority of replicas available.
>
> From time to time
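Putting the two suggestions in this thread together, a sketch of the relevant
Streams settings (producerPrefix routes a setting to the embedded producer;
the values follow the advice above):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // Internal topics (changelogs, repartition topics) survive a broker loss:
    props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
    // Keep retrying through a broker crash, per the suggestion above:
    props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG),
              Integer.MAX_VALUE);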
fix is needed for the 0.11.0 branch as well.
> >
> > Ismael
> >
> > On Mon, Oct 2, 2017 at 11:28 AM, Stas Chizhov
> wrote:
> >
> > > Hi,
> > >
> > > We run 0.11.0.1 and there was a problem with one ReplicationFetcher on one
> > of
> &g
Hi,
We run 0.11.0.1 and there was a problem with one ReplicationFetcher on one of
the brokers: it experienced an out-of-order sequence problem for one
topic/partition and was stopped. It stayed stopped over the weekend. During
this time log cleanup was working, and by now it has cleaned up all the data
i
That makes sense. Can you create a JIRA for this? Thanks.
>
> -Matthias
>
> On 9/27/17 2:54 PM, Stas Chizhov wrote:
> > Thanks, that comment actually made its way to the documentation already.
> > Apparently none of that was related. It was a leak - I was not closin
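The message above is cut off before naming what was left unclosed, so this is
only a guess: a frequent leak of exactly this shape in Streams apps is an
unclosed store iterator. A hedged illustration:

    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    static void dumpStore(ReadOnlyKeyValueStore<String, Long> store) {
        // KeyValueIterator is Closeable: without close(), the underlying
        // RocksDB iterator pins native memory until a container limit
        // eventually kills the process.
        try (KeyValueIterator<String, Long> it = store.all()) {
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }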
m.atlassian.jira.
> plugin.system.issuetabpanels:comment-tabpanel#comment-15984467
>
> On Wed, Sep 27, 2017 at 12:44 PM, Stas Chizhov wrote:
>
> > Hi,
> >
> > I am running a simple kafka streams app (0.11.0.1) that counts messages
> per
> > hour per partition
Hi,
I am running a simple kafka streams app (0.11.0.1) that counts messages per
hour per partition. The app runs in a docker container with a memory limit
set, which is always reached by the app within a few minutes, and then the
container is killed. After running it with various numbers of instances,
dif
Hi,
We are running the Confluent S3 connector (3.2.0) and we observed a sink task
not being able to commit offsets after a rebalance for about a week. It spits
"WorkerSinkTask:337 - Ignoring invalid task provided offset -- partition
not assigned" every time a new file was written to S3. Eventually, after 7
da
Thanks!
2017-09-20 11:37 GMT+02:00 Stas Chizhov :
> Hi!
>
> I am wondering if there are broker/client metrics for:
> - client version (to keep track of clients that needs an upgrade)
> - committed offsets (to detect situations when commits fail systematically
> with everything else being ok)
Hi!
I am wondering if there are broker/client metrics for:
- client version (to keep track of clients that needs an upgrade)
- committed offsets (to detect situations when commits fail systematically
with everything else being ok)
Thank you,
Stanislav.
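For the second bullet, one workaround (not a built-in metric) is a small
watcher; a sketch with a made-up topic/partition, assuming the watcher
consumer is configured with the same group.id as the group being checked:

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    static long commitLag(KafkaConsumer<?, ?> watcher) {
        TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical
        OffsetAndMetadata committed = watcher.committed(tp);
        long end = watcher.endOffsets(Collections.singleton(tp)).get(tp);
        // If this keeps growing while consumption looks healthy, commits are
        // failing systematically - the situation described above.
        return committed == null ? end : end - committed.offset();
    }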
Hi,
I have to process a topic with a few thousand messages and a dozen partitions
from the very beginning. The topic is manually populated before consumption.
In this setup, a consumer consuming from several partitions at the same time
tends to consume the assigned partitions sequentially: first all
mess
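One plausible explanation, offered as an assumption: with the default ~1 MB
max.partition.fetch.bytes, a single fetch can swallow an entire small
partition, which looks like sequential consumption. Shrinking the fetch knobs
makes poll() interleave partitions more finely (the values are illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    Properties props = new Properties();
    // Default is ~1 MB per partition per fetch: enough to drain a small
    // partition in one go. Smaller chunks force interleaving across partitions.
    props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 16 * 1024);
    // Cap the batch handed back by each poll() as well:
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);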
Hi,
We've written a short command that reads the offsets for the consumer group
we want to copy and commits them for a new group. That way you can inspect
the "__consumer_offsets" topic and make sure everything is correct before you
start consuming messages.
BR
Stanislav.
2017-04-25 22:02 GMT+02:0
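The command itself isn't shown; a sketch of how such an offset copy can look
with the plain Java client (server, group, and topic names are placeholders):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.TopicPartition;

    public class CopyGroupOffsets {
        public static void main(String[] args) {
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            try (KafkaConsumer<byte[], byte[]> src = consumerFor("old-group")) {
                for (PartitionInfo p : src.partitionsFor("my-topic")) {
                    TopicPartition tp = new TopicPartition("my-topic", p.partition());
                    OffsetAndMetadata om = src.committed(tp);
                    if (om != null) offsets.put(tp, om); // skip never-committed partitions
                }
            }
            // Run this while the new group has no active members.
            try (KafkaConsumer<byte[], byte[]> dst = consumerFor("new-group")) {
                dst.commitSync(offsets); // same positions, new group id
            }
        }

        private static KafkaConsumer<byte[], byte[]> consumerFor(String groupId) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092"); // placeholder
            p.put("group.id", groupId);
            p.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            p.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            return new KafkaConsumer<>(p);
        }
    }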
Yes that should work. Thanks a lot!
2017-04-25 21:44 GMT+02:00 Gwen Shapira :
> We added a Byte Converter which essentially does no conversion. Is this
> what you are looking for?
>
> https://issues.apache.org/jira/browse/KAFKA-4783
>
> On Tue, Apr 25, 2017 at 11:54 AM, S
Hi,
I have a Kafka topic with Avro messages + schema registry, which is being
backed up into S3 as a set of Avro files. I need to be able to restore a
subset of those files into a new topic, in the original format, with schemas
published to a schema registry. Am I right that at the moment there is