Hi,
I am one of the maintainers of prometheus-kafka-consumer-group-exporter[1],
which exports consumer group offsets and lag to Prometheus. The way we
currently scrape this information is by periodically executing
`kafka-consumer-groups.sh --describe` for each group and parsing the output.
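For illustration, the parsing step could be sketched in Python roughly like below. The column layout (GROUP, TOPIC, PARTITION, CURRENT-OFFSET, LOG-END-OFFSET, LAG, OWNER) is an assumption based on what the 0.9-era tool printed and varies between Kafka versions, and the sample output is made up:

```python
def parse_describe(output: str):
    """Parse `kafka-consumer-groups.sh --describe` output into row dicts.

    Assumes whitespace-separated columns in the order
    GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER;
    adjust to whatever your Kafka version actually prints.
    """
    rows = []
    for line in output.strip().splitlines():
        parts = line.split()
        if not parts or parts[0] == "GROUP":  # skip header/blank lines
            continue
        group, topic, partition, current, log_end, lag = parts[:6]
        rows.append({
            "group": group,
            "topic": topic,
            "partition": int(partition),
            "current_offset": int(current),
            "log_end_offset": int(log_end),
            "lag": int(lag),
        })
    return rows

# Made-up sample of what the tool might print:
sample = """\
GROUP    TOPIC   PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  OWNER
mygroup  events  0          100             105             5    consumer-1
mygroup  events  1          200             200             0    consumer-2
"""
print(parse_describe(sample))
```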
Recently
n only lose one node, just like with a three-node
cluster.
Cheers,
Jens
--
Jens Rantil
Backend engineer
Tink AB
Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se
Facebook <https://www.facebook.com/#!/tink.se> Linkedin
<http://www.linkedin.com/company/2735919?t
e goal shouldn't be moving towards consul. It should just
> be
> > flexible enough for users to pick any distributed coordinated system.
> > On Mon, Sep 19, 2016 2:23 AM, Jens Rantil jens.ran...@tink.se
> --
>
>
> Jennifer Fountain
> DevOPS
>
--
Jens Rantil
Backend Developer @ Tink
Tink AB, Wallingatan 5, 111 60 Stockholm, Sweden
For urgent matters you can reach me at +46-708-84 18 32.
> >> partitions or topics in Kafka and we’re unsure about this approach if we
> >> need to triple our event stream.
> >>
> >> We’re currently looking at 10,000 event streams (or topics) but we don’t
> >> want to be spinning up additional brokers just so we can ad
> balancing for better throughput), so if test publishes 40k messages only,
> only two partitions will actually get the data.
>
> Kind regards,
> Stevo Slavic.
>
> On Sun, Sep 11, 2016, 22:49 Jens Rantil wrote:
>
> > Hi,
> >
> > We have a partition which has many m
lag: 2 },
{ partition: 1, consumers: "192.168.1.2", lag: 2 },
{ partition: 2, consumers: "192.168.1.3", lag: 0 },
]
Clearly, it would be more optimal if "192.168.1.3" also took care of
partition 1.
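To make the skew measurable, here is a small Python sketch that sums lag per consumer from a snapshot like the one above. Note the owner of partition 0 is cut off above, so assigning it to "192.168.1.2" is my assumption:

```python
from collections import defaultdict

# Hypothetical snapshot matching the listing above; partition 0's owner
# is an assumption since it was cut off.
assignment = [
    {"partition": 0, "consumers": "192.168.1.2", "lag": 2},
    {"partition": 1, "consumers": "192.168.1.2", "lag": 2},
    {"partition": 2, "consumers": "192.168.1.3", "lag": 0},
]

def lag_per_consumer(rows):
    """Total lag per consumer; a big spread suggests a skewed assignment."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["consumers"]] += row["lag"]
    return dict(totals)

print(lag_per_consumer(assignment))  # {'192.168.1.2': 4, '192.168.1.3': 0}
```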
Cheers,
Jens
t day of our logs. If I reduce
topic retention to 3 days and brokers purge old logs, will consumer groups
automagically start consuming from the "new beginning" (that is, new
smallest offset)? This would save us some processing time...
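For what it's worth, my understanding is that this hinges on the committed offset falling out of range once the old segments are purged; when that happens, `auto.offset.reset` decides where the group resumes:

```
# If the committed offset has been purged, the consumer's position is
# out of range and this setting decides where it resumes:
auto.offset.reset=earliest   # resume from the new smallest offset
```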
Thanks,
Jens
> > > > Snehalata
> > > >
> > > > - Original Message -
> > > > From: "Mudit Kumar" >
> > > > To: users@kafka.apache.org
> > > > Sent: Tuesday, May 24, 2016 3:53:26 PM
> > > > Subject: Re: Kafka e
>
> Kind regards,
>
>
>
> Jahn Roux
se let
> me know if anyone knows.
>
>
> --
> *Regards,*
> *Ravi*
>
> upper limit? Is there any other setting I am missing here?
> I am myself controlling the message size so it's not that some bigger
> messages are coming through, each message must be around 200-300 bytes
> only.
>
> Due the large number of messages it is polling, the inner p
custom partitioner.
> I'd like to know how it was used to solve such data skew.
> We can compute some statistics on key distribution offline and use it in
> the partitioner.
> Is that a good idea? Or is it way too much logic for a partitioner?
> Anything else to consider?
Hi,
When I added a replicated broker to a cluster, will it first stream
historical logs from the master? Or will it simply starts storing new
messages from producers?
Thanks,
Jens
Kafka?
> >>
> >> Thanks,
> >> Dave
> >>
que
>- Ensure only one consumer for each topic is running to avoid
>re-balancing
>
>
>
> --
> -Richard L. Burton III
> @rburton
>
't think it is explained in the
> > JavaDoc
> >
> >
> >
> http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
> >
> > Thanks
> >
>
cause other nodes to fill up. Has that ever happened?
> Does Kafka have a contingency plan for such a scenario?
>
> Thank you so much for your insight and all of your hard work!
>
> Lawrence
>
Martin Kleppmann I would expect that someone
> had actually implemented some of the ideas they've been pushing. I'd also
> like to know what sort of problems Kafka would pose for long-term storage –
> would I need special storage nodes, or would replication be sufficient to
> ensur
Just making it more explicit: AFAIK, all Kafka consumers I've seen load
the incoming messages into memory. Unless you make it possible to stream them
to disk or something, you need to make sure your consumers have the available
memory.
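As a rough bound, with the new consumer each poll can pull up to `max.partition.fetch.bytes` per assigned partition into memory, so that setting times the number of partitions a consumer owns is the knob to look at (the value below is only an example):

```
max.partition.fetch.bytes=1048576   # 1 MiB per partition per fetch
```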
Cheers,
Jens
On Fri, Mar 4, 2016 at 6:07 PM Cees de Groot wrote
e from a producer must be
> consumed by those kinds of consumers.
>
> Any advice and help would be really appreciated.
>
> Thanks in advance!
>
> Best regards
>
> Kim
>
> file modification stamps). This to me would indicate the above comment
> assertion is incorrect; we have encountered a non-ISR leader elected even
> though it is configured not to do so.
>
> Any ideas on how to work around this?
>
> Thank you,
>
> Tony Sparks
>
> Best Regards
> Munir Khan
>
>
ow it has something to do with the
> brokers, I would like to know what has happened and what is the best way to
> fix it?
>
> TIA
>
partitionsFor. If it can
> return partitioninfo it is considered live. Is this a good approach?
>
Hi,
I suggest you run a micro benchmark and test it for your use case. It should
be pretty straightforward.
Cheers,
Jens
–
Sent from Mailbox
On Thu, Feb 11, 2016 at 4:24 PM, yazgoo wrote:
> Hi everyone,
> I have multiple disks on my broker.
> Do you know if there's a noticeable over
for consumer/ producer side, I'm responsible for
> message broker and after 2 years I will have to prove that message exists
> on MessageBroker and I can prove that using e.g. logs from this time. ( log
> should look like this : Message ID (from message key ) , timestamp )
>
>
Hi again,
A somewhat related question is how the heartbeat interval and session
timeout relate to the poll timeout. Must the poll timeout always be lower
than the heartbeat interval?
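For context, this is how I understand the two consumer settings relate (example values; the usual rule of thumb is to keep the heartbeat interval at no more than a third of the session timeout):

```
session.timeout.ms=30000     # how long the coordinator waits before
                             # declaring the consumer dead
heartbeat.interval.ms=10000  # how often heartbeats are sent; keep well
                             # below session.timeout.ms
```

The poll timeout itself only bounds how long a single poll() call blocks; in the 0.9 consumer what matters is that poll() is called often enough that heartbeats keep flowing.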
Cheers,
Jens
On Monday, February 8, 2016, Jens Rantil wrote:
> Hi,
>
> I am trying to wra
nd a second heartbeat.
- Why can't session timeout simply be based on heartbeat interval?
Could anyone clarify this a bit? Also, if you are writing a new consumer,
what is your reasoning when setting these two values?
Thanks,
Jens
ggestions?
>
> Thanks and Regards,
> Joe
>
and a topic, when using the new Java consumer, how
can I figure out which partition the key will be written to? If not
possible, I will file a JIRA.
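Not an answer from the API itself, but the Java client's default partitioner is deterministic: it murmur2-hashes the serialized key and takes the result modulo the partition count. A Python re-implementation of that logic follows as a sketch; verify it against your client version before relying on it:

```python
def _u32(x: int) -> int:
    """Wrap to unsigned 32-bit, mimicking Java int overflow."""
    return x & 0xFFFFFFFF

def murmur2(data: bytes) -> int:
    """Port of the murmur2 variant used by the Java client's
    default partitioner (seed 0x9747b28c)."""
    m, r = 0x5BD1E995, 24
    h = _u32(0x9747B28C ^ len(data))
    n4 = len(data) // 4
    for i in range(n4):
        k = int.from_bytes(data[i * 4:i * 4 + 4], "little")
        k = _u32(k * m)
        k ^= k >> r
        k = _u32(k * m)
        h = _u32(h * m)
        h ^= k
    base, rem = n4 * 4, len(data) % 4
    if rem == 3:                    # Java switch falls through 3 -> 2 -> 1
        h ^= data[base + 2] << 16
    if rem >= 2:
        h ^= data[base + 1] << 8
    if rem >= 1:
        h ^= data[base]
        h = _u32(h * m)
    h ^= h >> 13
    h = _u32(h * m)
    h ^= h >> 15
    return h

def partition_for(key: bytes, num_partitions: int) -> int:
    """partition = toPositive(murmur2(keyBytes)) % numPartitions"""
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions

print(partition_for(b"some-key", 3))
```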
Thanks,
Jens
ition keys
occasionally make the Kafka cluster unbalanced etc.
On a larger perspective, maybe it would be nice if a consumer group would
occasionally rebalance consumers based on lag.
Cheers,
Jens
n the size of the topic and your replication factor.
> >
> >-- Jim
> >
> >On 1/20/16, 10:37 AM, "Josh Wo" wrote:
> >
> >>Hi Jens,
> >>I got your point but some of our use case cannot just rely on TTL. We try
> >>to have long expiry f
Hi Josh,
Kafka will/can expire message logs after a certain TTL. Can't you simply rely
on expiration for key rotation? That is, you start to produce messages with a
different key while your consumer temporarily handles the overlap of keys for
the duration of the TTL.
Just an idea,
Jens
s to start with, later scaling it up to 10
> such devices.
>
> How should I model my topic? Should I create one topic per device?
>
> Thanks and Regards,
> Joe
>
> On Tue, Jan 19, 2016 at 4:58 PM, Jens Rantil wrote:
>
> > Hi Joe,
> >
> > I think you
n each node?
--
Jens Rantil
Backend engineer
Tink AB
Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se
Facebook <https://www.facebook.com/#!/tink.se> Linkedin
<http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo&trkInfo=VSRPsearchId%3A105
Let's start with a simple proposal: How about having a single topic with a
single partition and you replicate it to two other brokers? You can
increase the number of partitions in the future and you have your
"guaranteed" (whatever that means) replication.
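To make the proposal concrete, roughly like this (flag names depend on your Kafka version; newer tooling takes --bootstrap-server instead of --zookeeper, and the topic name and hosts are made up):

```shell
# One partition, replicated across three brokers in total:
kafka-topics.sh --zookeeper zk:2181 --create \
  --topic events --partitions 1 --replication-factor 3

# Partitions can later be increased (never decreased):
kafka-topics.sh --zookeeper zk:2181 --alter \
  --topic events --partitions 4
```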
Cheers,
Jens
Hi,
You are correct. The others will remain idle. This is why you generally want to
have at least the same number of partitions as consumers.
Cheers,
Jens
On Sat, Jan 16, 2016 at 2:34 AM, Jason J. W. Williams
wrote:
> Hi,
> I'm trying to make sure I understand
GB. We chose this configuration
> because our Hadoop cluster has that config and can easily handle that
> amount of data.
> 2. Having a bigger number of brokers but smaller broker config.
>
> I was hopping that somebody with more experience in using Kafka can advice
> on this.
>
Hi,
Why don't your consumers instead subscribe to a single topic used to broadcast
to all of them? That way your consumers and producer will be much simpler.
Cheers,
Jens
On Fri, Dec 18, 2015 at 4:16 PM, Abel . wrote:
> Hi,
> I have this scenario where I need
ume will not
> read messages from first offset.
>
> Is there any way to reset kafka offset in zookeeper?
>
> Thanks,
> Akhilesh
>
Hi,
In which part of the world?
Cheers,
Jens
On Thu, Dec 17, 2015 at 8:23 AM, prabhu v
wrote:
> Hi,
> Can anyone provide me the link for the KAFKA USER Group meetings which
> happened on Jun. 14, 2012 and June 3, 2014??
> Link provided in the below wiki page is
> > while (running) {
> > ConsumerRecords records = consumer.poll(1000);
> > Future future = executor.submit(new Processor(records));
> > while (!complete(future, heartbeatIntervalMs, TimeUnit.MILLISECONDS))
> > consumer.ping();
> > consumer.commitSyn
essages that I then collect on my first
`consumer.poll(0);` call? Since `consumer.poll(0);` then would return a
non-empty list, I would essentially be ignoring messages? Or is the pause()
call both 1) making sure consumer#poll never returns anything _and_ 2)
pausing the background fetcher?
Cheers,
he high level consumer API? I mean, it sounds
like it should gracefully handle slow consumption of varying size. I might
be wrong.
Thanks,
Jens
Hi again,
For the record I filed an issue about this here:
https://issues.apache.org/jira/browse/KAFKA-2986
Cheers,
Jens
On Fri, Dec 11, 2015 at 7:56 PM, Jens Rantil wrote:
> Hi,
> We've been experimenting a little with running Kafka internally
ad tool for our use case?
Thanks and have a nice weekend,
Jens