> https://github.com/dropwizard/metrics/blob/master/metrics-core/src/main/java/com/codahale/metrics/Meter.java#L36
> Not sure if it was mentioned in this (or some recent) thread, but ... porting
> common reporters (as listed in KAFKA-1930), and (2) see if the current
> histogram support is good enough for measuring things like request time.
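For a sense of what porting a reporter would involve, here is a minimal sketch
against the client-side MetricsReporter interface in a recent clients jar; the
class name and println body are illustrative only, not an actual KAFKA-1930
reporter:

import java.util.List;
import java.util.Map;

import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

// Illustrative reporter that just prints metric registrations. A real port
// (e.g. a CSV or Graphite reporter) would push values out on a schedule.
public class LoggingMetricsReporter implements MetricsReporter {

    @Override
    public void configure(Map<String, ?> configs) {
        // pick up reporter-specific settings here
    }

    @Override
    public void init(List<KafkaMetric> metrics) {
        // called once with the metrics that already exist when the reporter is registered
        for (KafkaMetric metric : metrics)
            metricChange(metric);
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        // called whenever a metric is added or updated
        System.out.println(metric.metricName() + " registered");
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
        // called when a metric is removed
    }

    @Override
    public void close() {
        // flush/stop any background reporting thread here
    }
}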
... <aaurad...@linkedin.com.invalid> wrote:

> If we do plan to use the network code in client, I think that is a good
> reason in favor of migration. It will be unnecessary to have metrics from ...
... to start monitoring these new metrics anyway.

I also agree with Jay that in multi-tenant clusters people care about
detailed statistics for their own application over global numbers.

Based on the arguments so far, I'm +1 for migrating to KM.

Thanks,
Aditya
________________________________________
From: Jun Rao [j...@confluent.io]
Sent: Sunday, March 29, 2015 9:44 AM
To: dev@kafka.apache.org
Subject: Re: Metrics package discussion

There is another thing to consider. We plan to reuse the client components
on the server side over time. For example, as part of the security work, we
are looking into replacing the server side network code with the client
network code (KAFKA-...)
... array would get copied twice, which is pretty bad if we called each
metric on every MBean.

Another point Joel mentioned is that codahale metrics are harder to write
tests against because we cannot pass in a Clock.

IMO, if a library is preventing us from adding a ... we should replace it.
It might be short-term pain, but in the long run we will have more useful
graphs. What do people think? I can start a vote thread on this once we
have a couple more opinions.

Thanks,
Aditya
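For reference, the client metrics package takes a Time in its constructor,
which is what makes it testable with a fake clock. A minimal sketch, assuming
a recent org.apache.kafka.common.metrics API and the MockTime test utility
(names and numbers are illustrative):

import java.util.ArrayList;
import java.util.Collections;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.MetricConfig;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.MetricsReporter;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;
import org.apache.kafka.common.utils.MockTime;

public class RateWithFakeClock {
    public static void main(String[] args) {
        // MockTime is the clients' test fake clock; any Time implementation works here.
        MockTime time = new MockTime();
        Metrics metrics = new Metrics(new MetricConfig(), new ArrayList<MetricsReporter>(), time);

        Sensor sensor = metrics.sensor("bytes-in");
        MetricName rateName = new MetricName("bytes-in-rate", "demo",
                "illustrative rate metric", Collections.<String, String>emptyMap());
        sensor.add(rateName, new Rate());

        sensor.record(1000.0);
        time.sleep(10_000);          // advance the fake clock; no real waiting
        sensor.record(1000.0);

        // Because the clock is injected, the measured rate is deterministic in a test.
        System.out.println("rate = " + metrics.metric(rateName).metricValue());
        metrics.close();
    }
}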
________________________________________
From: Jay Kreps [jay.kr...@gmail.com]
Sent: Thursday, March 26, 2015 2:29 PM
To: dev@kafka.apache.org
Subject: Re: Metrics package discussion
Yeah that is a good summary.
The reason we don't use histograms heavily in the server is because of the
memory issues. We originally did use histograms for everything, then we ran
into ...
... the server, but they are both pulled in as dependencies anyway. Using this
for metrics we can quota on may not be a bad place to start.

Thanks,
Aditya
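A minimal sketch of what quota-ing on a sensor looks like in the client
metrics package, assuming a recent org.apache.kafka.common.metrics API; the
sensor name and the 1 MB/s bound are made up for illustration:

import java.util.Collections;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.MetricConfig;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Quota;
import org.apache.kafka.common.metrics.QuotaViolationException;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class QuotaSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor sensor = metrics.sensor("produce-byte-rate");

        // Attach a Rate stat whose config carries an upper-bound quota
        // of 1 MB/s (an illustrative bound).
        MetricName byteRate = new MetricName("byte-rate", "quota-demo",
                "bytes per second for this client", Collections.<String, String>emptyMap());
        sensor.add(byteRate, new Rate(),
                new MetricConfig().quota(Quota.upperBound(1024 * 1024)));

        try {
            // Record a burst large enough that the measured rate should
            // exceed the bound over the sampling window.
            sensor.record(1024.0 * 1024 * 1024);
        } catch (QuotaViolationException e) {
            // record() throws when a quota on the sensor is violated; this is
            // the hook a broker could use to decide to throttle a client.
            System.out.println("quota violated: " + e.getMessage());
        }
        metrics.close();
    }
}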
________________________________________
From: Jay Kreps [jay.kr...@gmail.com]
Sent: Wednesday, March 25, 2015 11:08 PM
To: dev@kafka.apache.org
Subject: Re: Metrics package discussion
Here was my understanding of the issue last time.
The yammer metrics use a random sample of requests to estimate the
histogram. This allocates a fairly large array of longs (their values are
longs rather than floats). A reasonable sample might be 8k entries which
would give about 64KB per histogram ...
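For concreteness, a back-of-the-envelope sketch of that footprint against the
codahale API; the 8192 sample size is the figure quoted above, not a library
default:

import com.codahale.metrics.Histogram;
import com.codahale.metrics.UniformReservoir;

public class HistogramFootprint {
    public static void main(String[] args) {
        int sampleSize = 8192;                        // hypothetical reservoir size from above
        long bytes = (long) sampleSize * Long.BYTES;  // the reservoir stores long samples
        System.out.println(bytes / 1024 + " KB per histogram");  // 8192 * 8 = 64 KB

        // A uniform (random-sample) reservoir of that size; each histogram
        // keeps its own sample array for the lifetime of the metric.
        Histogram histogram = new Histogram(new UniformReservoir(sampleSize));
        histogram.update(42);

        // getSnapshot() copies the sample array, so polling every histogram
        // attribute through its MBean repeats that copy on each read.
        long max = histogram.getSnapshot().getMax();
        System.out.println("max = " + max);
    }
}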
Aditya,
If we are doing a deep dive, one of the things to investigate would be
memory/GC performance. IIRC, when I was looking into codahale at LinkedIn,
I remember it having quite a few memory management and GC issues while
using histograms. In comparison, histograms in the new metrics package
are ...
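For comparison, a minimal sketch of the bucketed, fixed-size histogram support
in the new package (org.apache.kafka.common.metrics.stats.Percentiles); the
buffer size, value range, and metric names are illustrative:

import java.util.Collections;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Percentile;
import org.apache.kafka.common.metrics.stats.Percentiles;
import org.apache.kafka.common.metrics.stats.Percentiles.BucketSizing;

public class RequestTimeHistogram {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        Sensor sensor = metrics.sensor("request-time");

        MetricName p50 = new MetricName("request-time-p50", "demo",
                "median request time in ms", Collections.<String, String>emptyMap());
        MetricName p99 = new MetricName("request-time-p99", "demo",
                "99th percentile request time in ms", Collections.<String, String>emptyMap());

        // A histogram backed by a fixed buffer (4 KB here, illustrative),
        // covering 0 to 5000 ms with linearly sized buckets.
        sensor.add(new Percentiles(4096, 0.0, 5000.0, BucketSizing.LINEAR,
                new Percentile(p50, 50),
                new Percentile(p99, 99)));

        for (int i = 1; i <= 1000; i++)
            sensor.record(i % 500);   // fake request times

        System.out.println("p99 = " + metrics.metric(p99).metricValue());
        metrics.close();
    }
}

The bucket array is allocated once per histogram, so the memory footprint
stays constant regardless of how many values are recorded.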
Hey everyone,

Picking up this discussion after yesterday's KIP hangout. For anyone who did
not join the meeting, we have 2 different metrics packages being used by the
clients (custom package) and the server (codahale). We are discussing whether
to migrate the server to the new package.

What in ...