all the related/superseded-by/etc. JIRA issues, but it's
not clear if it's actually solved in the current ZK version. Any info about how
you've done Zookeeper cluster migrations in the past? We're running version
3.3.5.
Thanks,
Marcos Juarez
is the algorithm/pattern used to
decide the consumer offset partition, and is this something we can change
or influence?
Thanks,
Marcos Juarez
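For reference, if this is about the Kafka-based offsets storage, the partition of the
internal __consumer_offsets topic that holds a group's committed offsets is, as far as
I can tell, essentially abs(groupId.hashCode()) modulo the offsets topic partition count.
A minimal sketch of that mapping in Java, assuming the default of 50 offsets-topic
partitions (offsets.topic.num.partitions); the method name is made up for illustration:

  // Which __consumer_offsets partition a group's offsets land in (sketch).
  // numPartitions is the broker's offsets.topic.num.partitions (default 50).
  static int offsetsPartitionFor(String groupId, int numPartitions) {
      int hash = groupId.hashCode();
      // Guard against Integer.MIN_VALUE, whose absolute value would overflow.
      int positive = (hash == Integer.MIN_VALUE) ? 0 : Math.abs(hash);
      return positive % numPartitions;
  }

The practical upshot is that the only real levers are the group id itself and the
offsets topic partition count, which is fixed once that topic has been created.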
s.
Anyway, thought I'd post a follow-up to the original question, since it
might help somebody else in the future.
Marcos
On Mon, Oct 2, 2017 at 6:57 PM, Marcos Juarez wrote:
> I apologize for sending this to dev. Reposting to the Users mailing list.
>
> -- Forwarded mess
get consumer offsets
from Kafka?
Thanks,
Marcos Juarez
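One way to read a group's committed offsets programmatically is the Java AdminClient
(available in newer client releases); a rough sketch, with the bootstrap server and
group id as placeholders:

  import java.util.Map;
  import java.util.Properties;
  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.clients.consumer.OffsetAndMetadata;
  import org.apache.kafka.common.TopicPartition;

  public class ShowGroupOffsets {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          try (AdminClient admin = AdminClient.create(props)) {
              // Committed offset for every partition the group has offsets for.
              Map<TopicPartition, OffsetAndMetadata> offsets =
                  admin.listConsumerGroupOffsets("my-group") // placeholder group id
                       .partitionsToOffsetAndMetadata()
                       .get();
              offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
          }
      }
  }

The kafka-consumer-groups tool that ships with the brokers exposes the same information
from the command line.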
ore stable.
Thanks,
Marcos Juarez
On Mon, May 16, 2016 at 12:28 PM, Liquan Pei wrote:
> Hi Matteo,
>
> There was a bug in 0.9.1 such that task.close() can be invoked in both
> the Worker thread and the Herder thread. There can be a race condition where
> consumer.close() is invok
right now to mitigate the issue is adding some context to
consumers (they all live within a single app), so that we'll be able to
pause consumption if lag becomes too high, and let the other consumers
catch up.
Any thoughts/suggestions on that?
Thanks,
Marcos Juarez
On Sun, Jan 24, 20
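On the pause-if-lagging idea above: the consumer API exposes pause() and resume() on
assigned partitions, which is one way to build that kind of back-off. A rough sketch
(newer Java client shown; shouldBackOff() stands in for whatever app-level signal
decides this consumer should yield, and all names are placeholders):

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import java.util.Set;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.TopicPartition;

  public class PausingConsumer {
      // Placeholder for the application "context" that decides whether this
      // consumer should temporarily stop pulling records so others can catch up.
      static boolean shouldBackOff() { return false; }

      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "low-priority-group");      // placeholder
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList("some-topic"));      // placeholder
              while (true) {
                  Set<TopicPartition> assigned = consumer.assignment();
                  if (shouldBackOff()) {
                      consumer.pause(assigned);   // keep polling, but receive no records
                  } else {
                      consumer.resume(assigned);  // no-op when nothing is paused
                  }
                  ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                  records.forEach(record -> { /* process */ });
              }
          }
      }
  }

Pausing still lets poll() run, so the consumer keeps its group membership instead of
triggering a rebalance while it is backing off.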
ead is what introduced the
deadlock problem.
That same broker is still in the deadlock scenario; we haven't restarted
it, so let me know if you'd like more info/log/stats from the system before
we restart it.
Thanks,
Marcos Juarez
>
> On Thu, Nov 3, 2016 at 10:02 AM, Marcos Juarez
> wrote:
>
> > We're running into a recurrent deadlock issue in both our production
> and
> > staging clusters, both using the latest 0.10.1 release.
te, but it should still reduce the likelihood.
> 3. Out of curiosity, what is the size of your cluster and how many
> consumers do you have in your cluster?
>
> Thanks!
> Jason
>
> On Thu, Nov 3, 2016 at 1:32 PM, Marcos Juarez wrote:
>
> > Just to expand on Lawrence
nto the 0.10.1 branch and you can build from there if you
> need something in a hurry.
>
> Thanks,
> Jason
>
> On Fri, Nov 4, 2016 at 9:48 AM, Marcos Juarez wrote:
>
> > Jason,
> >
> > Thanks for that link. It does appear to be a very similar issue, if not
t deadlock segment, and concatenated them all
together in the attached text file.
Do you think this is something I should add to the KAFKA-3994 ticket? Or is
the information in that ticket enough for now?
Thanks,
Marcos
On Fri, Nov 4, 2016 at 2:05 PM, Marcos Juarez wrote:
> That's g
icket.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Nov 7, 2016 at 9:47 AM, Marcos Juarez wrote:
>
> > We ran into this issue several more times over the weekend. Basically,
> > FDs are exhausted so fast now that we can't even get to the server in time,
> t
messages in the console consumer with the option *--property
print.key=false*. However, we can't figure out a way to turn off key
deserialization (if that is what is causing this) on the Kafka
Connect/connector side.
We're using Kafka 1.1.1, and all the packages are Confluent platform 4.1.2.
Any help would be appreciated.
Thanks,
Marcos Juarez
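For anyone hitting the same thing later: the usual knob on the Connect side is the key
converter, which can be set per connector or in the worker config. A sketch, assuming
the keys never actually need to be interpreted downstream:

  # connector (or worker) properties - illustrative only
  key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
  # or, if the keys are plain strings:
  # key.converter=org.apache.kafka.connect.storage.StringConverter

With ByteArrayConverter the keys are passed through untouched, so no key
deserialization is attempted.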
s somewhat
sensitive to lag, so we'd like to keep the rebalance time to a minimum.
With that context, which Kafka configs should we focus on, on the consumer
side (and maybe the broker side?) that would enable us to minimize the time
spent on the rebalance?
Thanks,
Marcos Juarez
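For later readers, the consumer settings that most directly bound how long a rebalance
can take are the session timeout, the heartbeat interval, and the max poll interval; a
sketch with purely illustrative values:

  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;

  // Fragment only - illustrative values, not recommendations.
  Properties props = new Properties();
  // How quickly a dead or hung member is detected and evicted from the group:
  props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");
  // Heartbeats should be a small fraction of the session timeout:
  props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
  // A rebalance only completes once every member re-joins on its next poll(),
  // so this also caps how long a rebalance can stall behind a busy member:
  props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "60000");
  // Smaller batches keep the gap between poll() calls (and re-join latency) short:
  props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");

On the broker side, group.initial.rebalance.delay.ms only affects the first rebalance
of an empty group, so it usually isn't the issue for an established group.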
We're doing some testing on Kafka 1.1 brokers in AWS EC2. Specifically,
we're cleanly shutting brokers down for 5 mins or more, and then restarting
them, while producing and consuming from the cluster all the time. In
theory, this should be relatively seamless to both producers and consumers.
Howe
Are there any concerns/issues we should be aware of when
downgrading the cluster like that?
Thanks,
Marcos Juarez
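On the clean-restart testing above: what the clients do during the window matters as
much as the broker side, and the producer settings below are usually what decide
whether a leadership move during a controlled shutdown is visible to callers. A sketch
with illustrative values:

  import java.util.Properties;
  import org.apache.kafka.clients.producer.ProducerConfig;

  // Fragment only - illustrative values.
  Properties props = new Properties();
  // Retry sends that fail while leadership is moving to another broker:
  props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
  props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "100");
  // Wait for all in-sync replicas so a restarting broker never holds the only copy:
  props.put(ProducerConfig.ACKS_CONFIG, "all");
  // Keep ordering while retrying:
  props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");

Consumers generally refresh metadata and retry on their own, though a group coordinator
move can still cause a brief pause.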
On Mon, Nov 7, 2016 at 5:47 PM, Marcos Juarez wrote:
> Thanks Becket.
>
> I was working on that today. I have a working jar, created from the
> 0.10.1.0 branch, an
ld be
> fine.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
> On Fri, Nov 11, 2016 at 9:51 AM, Marcos Juarez wrote:
>
> > Becket/Jason,
> >
> > We deployed a jar with the base 0.10.1.0 release plus the KAFKA-3994
> patch,
> > but we're seeing the
lease?
Thanks for your help!
Marcos
On Fri, Nov 11, 2016 at 11:56 AM, Marcos Juarez wrote:
> Thanks Becket,
>
> We should get a full thread dump the next time, so I'll send it as soon as
> that happens.
>
> Marcos
>
> On Fri, Nov 11, 2016 at 11:27 AM, Becket Qin wrote:
https://issues.apache.org/jira/browse/KAFKA-4674
Jun Ma, what exactly did you do to failover the controller to a new broker?
If that works for you, I'd like to try it in our staging clusters.
Thanks,
Marcos Juarez
On Wed, Mar 22, 2017 at 11:55 AM, Jun MA wrote:
> I have similar issue with our cluster. We don’t kn
Ali,
I don't know of proper benchmarks out there, but I've done some work in
this area, when trying to determine what hardware to get for particular use
cases. My answers are in-line:
On Mon, Apr 10, 2017 at 7:05 PM, Ali Nazemian wrote:
> Hi all,
>
> I was wondering if there is any benchmark o
ould be better for
> a real-time application with a hard SLA?
>
> Regards,
> Ali
>
> On Thu, Apr 13, 2017 at 10:57 AM, Marcos Juarez wrote:
>
> > Ali,
> >
> > I don't know of proper benchmarks out there, but I've done some work in
> > this area
use the lower level Streams API for that, can you point me to a
starting point? I've been reading docs and javadocs for a while now, and
I'm not sure where I would add/configure this.
Thanks!
Marcos Juarez
cally move leadership of a
topic/partition to another replica in the ISR? I'd like to understand how
exactly Kafka does this, to see if we can provision an instance type for
those test Kafka clusters that can handle the load without moving
leadership around.
Thanks,
Marcos Juarez
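Not a direct answer to the question above, but for completeness: when a leader drops
out of the ISR the controller elects a new leader from the remaining in-sync replicas,
and leadership can also be moved back to the preferred replica on demand. A sketch of
triggering a preferred-replica election with the Java AdminClient (an API added in
later client releases; topic, partition and broker address are placeholders):

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.common.ElectionType;
  import org.apache.kafka.common.TopicPartition;

  public class ElectPreferredLeader {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          try (AdminClient admin = AdminClient.create(props)) {
              // Ask the controller to hand leadership back to the preferred
              // (first-listed) replica of the given partition.
              admin.electLeaders(ElectionType.PREFERRED,
                      Collections.singleton(new TopicPartition("my-topic", 0)))   // placeholder
                   .partitions()
                   .get();
          }
      }
  }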
s indeed a problem with a brief network partition, should I have
seen that "broker failure callback" message somewhere in the logs? And
does that mean that Kafka can't withstand network partitions at all, and
shouldn't be used on unreliable cloud infrastructure?
Thanks for yo
Thank you Joel, will go through those docs and make sure our settings are
appropriate on these instances.
Marcos Juarez
On Tue, Aug 5, 2014 at 5:58 PM, Joel Koshy wrote:
> The session expirations (in the log you pasted) lead to the broker
> losing its registration from zookeeper
am of realtime data being sent to them.
Is there a way to throttle Kafka replication between nodes, so that instead
of it going full blast, it will replicate at a fixed rate in megabytes or
activities/batches per second? Or is this perhaps planned for a future
release, maybe 0.9?
Thanks,
Marcos Juarez
Thanks for your response Jun.
JIRA has been filed (see link below). Please let me know if I should add
more details/context:
https://issues.apache.org/jira/browse/KAFKA-1464
Thanks,
Marcos Juarez
On Wed, May 21, 2014 at 8:40 AM, Jun Rao wrote:
> We don't have such throttling r