The issue might be due to
https://unix.stackexchange.com/questions/343353/ps-only-prints-up-to-4096-characters-of-any-processs-command-line
I guess the issue appears with Kafka versions > 0.10.0.
More details:
https://github.com/apache/kafka/pull/2515
Regards,
Ravi
On Tue, May 9, 2017 at 12:01 PM, Ved
Is it possible to restrict Kafka consumers from consuming from a given
Kafka cluster?
--
*Regards,*
*Ravi*
./bin/kafka-consumer-groups.sh --group batchprocessord_zero
--bootstrap-server kafka-1-evilcorp.com:9092 --new-consumer --describe
Running the above ConsumerGroup command will describe the consumer for all
the topics it is listening to.
Is there any workaround to get *only topic-level detail*?
--
*
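(A hedged workaround, not from the thread: the describe output is
line-oriented, so it can simply be filtered down to one topic with grep;
"mytopic" is a placeholder topic name, and the header line is kept because it
contains the word TOPIC.)

./bin/kafka-consumer-groups.sh --group batchprocessord_zero \
  --bootstrap-server kafka-1-evilcorp.com:9092 --new-consumer --describe \
  | grep -E 'TOPIC|mytopic'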
I used *./gradlew jarAll* but the Scala libs are still missing from the jar.
It should be something very simple that I might be missing. Please let me
know if anyone knows the fix.
--
*Regards,*
*Ravi*
I was writing a Kafka consumer and I have a query related to consumer
processes.
I have a consumer with groupId="testGroupId", and using the same groupId I
consume from multiple topics, say "topic1" and "topic2".
Also, assume "topic1" is already created on broker whereas "topic2" is not
yet created.
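(The preview cuts off before the actual question; if the concern is that
"topic2" does not exist yet, a hedged sketch is to create it explicitly
rather than relying on the broker's auto.create.topics.enable setting. The
ZooKeeper address, partition count, and replication factor are placeholders.)

./bin/kafka-topics.sh --create --zookeeper zk-host:2181 \
  --topic topic2 --partitions 1 --replication-factor 1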
Never mind. I found the issue.
Thanks,
Ravi
On Fri, Oct 10, 2014 at 11:47 AM, ravi singh wrote:
> In ProducerPerformance class we use CSVMetricsReporter for metrics
> reporting.
> Which I think is actually started with the help of below function:
> KafkaMetricsReporter.st
In the ProducerPerformance class we use CSVMetricsReporter for metrics
reporting.
I think it is actually started with the help of the below function:
KafkaMetricsReporter.startReporters(verifiableProps)
Similarly I wrote my own producer and I have a custom implementation of
KafkaMetricsReporter.
But to
I have a few questions regarding the Kafka consumer.
In the Kafka properties we only mention the ZooKeeper IPs to connect to.
But I assume the consumer also connects to the Kafka brokers for actually
consuming the messages.
We have a firewall enabled on the ports, so in order to connect from my consumer
I need to op
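(A hedged aside: with the old high-level consumer the client talks to
ZooKeeper for group membership and offsets, and to every broker for the
actual fetches, so both need to be reachable through the firewall. Default
ports are assumed below; the hostnames are placeholders.)

nc -zv zk-host 2181       # ZooKeeper
nc -zv broker-host 9092   # each broker's listener port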
Even though I am able to ping the broker machine from my producer
machine, the producer is throwing the below exception while connecting to
the broker.
I wanted to increase the timeout for the producer but couldn't find any
parameter for that in Kafka 0.8.
Any idea what's wrong here?
[2014-10-08 09:29:47,762] ER
Kafka 0.7 has the following property for the producer:
connect.timeout.ms (default: 5000): the maximum time spent by
kafka.producer.SyncProducer trying to connect to the Kafka broker. Once it
elapses, the producer throws an ERROR and stops.
But when I checked the Kafka 0.8 config, I couldn't find any such property.
Is it
The MaxLag MBean is only valid for an active consumer. So while the
consumer is actively running, it should be accurate.
On Fri, Oct 3, 2014 at 4:21 AM, Shah, Devang1 wrote:
> Hi,
>
> Referring to http://kafka.apache.org/documentation.html#java
>
>
> Number of messages the consumer lags behind t
It is available with the Kafka package containing the source code. Download
the package, build it, and run the above command.
Regards,
Ravi
On Wed, Oct 1, 2014 at 7:55 PM, Sa Li wrote:
> Hi, All
>
> I built a 3-node Kafka cluster. I want to run a performance test, and I found
> someone posted the following th
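(A hedged sketch of "download, build, run": jarAll is the 0.8-era Gradle
target mentioned elsewhere in this archive, and the bundled perf script wraps
the ProducerPerformance tool. The exact command being quoted is not
recoverable from the truncated preview, and the flags below are the 0.8-era
ones, so they may differ by version.)

./gradlew jarAll
./bin/kafka-producer-perf-test.sh --broker-list localhost:9092 \
  --topics perf-test --messages 100000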
using a replication factor of two? As
> Steven said, the replicas also consume from the leader. So it's your
> consumer, plus the replica.
>
> On Thu, Sep 25, 2014 at 10:04:29PM -0700, ravi singh wrote:
> > Thanks Steven. That answers the difference in Bytes in and bytes Out per
&
can't see your graph, but your replication factor is 2, so replication
> traffic can be the explanation. Basically, BytesOut will be 2x of BytesIn.
>
> On Thu, Sep 25, 2014 at 6:19 PM, ravi singh wrote:
>
> > I have set up my Kafka broker with a single producer and consumer. Wh
I have set up my Kafka broker with a single producer and consumer. When I
am plotting the graph for all-topic bytes in/out per sec, I could see that
the value of BytesOutPerSec is more than BytesInPerSec.
Is this correct? I confirmed that my consumer is consuming the messages
only once. What could be
These are certain MBeans which will be available at the consumer end.
Regards,
Ravi
On Mon, Jul 28, 2014 at 3:28 PM, 이재익 wrote:
> Hello,
>
>
> I have the same problem. Do you have any update for 2)?
>
> I'm using kafka 0.8.1.1 and replication factor is 2.
>
>
>
> In my case, I installed "Kafka G
If the offset is not being committed, it means that when you stop that
> consumer and another one picks up the partition, it will start from the
> last committed offset, which means it's going to duplicate a lot of
> messages.
>
> -Todd
>
>
> On 7/26/14, 8:03 AM, "ravi singh" wrote:
The MaxLag MBean is defined as "Number of messages the consumer lags
behind the producer". Now when I read the MBean value it gives me the count
as 0 (and occasionally some value like 130 or 340).
ConsumerFetcherManager.test-consumer-group-MaxLag count = 0
But when I use the kafka.tools.Co
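(The preview cuts off at "kafka.tools.Co", so the tool being compared against
is not recoverable here; purely as a hedged illustration, lag can also be
cross-checked with the 0.8-era offset checker, with the group name and
ZooKeeper address as placeholders.)

./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zkconnect localhost:2181 --group test-consumer-group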
hence do not really know the number of raw messages out.
> It can be roughly estimated as 'BytesOut/MessageSize'.
>
>
> On Fri, Jul 11, 2014 at 10:28 AM, ravi singh wrote:
>
> > Thanks Guozhang!!
> > Just checked it after publishing few messages , "
"MessagesInPerSec" also exists per-topic, but the sensor will only be
> created when there are messages produced to the brokers with this topic
> though.
>
>
> On Fri, Jul 11, 2014 at 9:20 AM, ravi singh wrote:
>
> > Couldn't find it in the documentation though.
> >
There is indeed a per-topic metric for bytes in and bytes out. Should be
> named as [TopicName]BytesInPerSec, etc.
>
> Server does monitor bytes out rate whenever it sends any responses to
> consumer/producer clients.
>
> Guozhang
>
>
>
>
> On Fri, Jul 11, 2014 at 3:38 AM, ra
Hi Kafka Users,
Is it possible to monitor the number of messages in and out per topic
on the broker side?
There is an MBean for "AllTopicsMessagesInPerSec" but I couldn't find
anything on a per-topic basis.
Also, what does the "Byte out rate" MBean indicate? Because as per my
understanding only t
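(A hedged sketch: with JMX enabled on the broker, e.g. JMX_PORT=9999 when
starting it, the bundled JmxTool can poll a per-topic MBean. The object-name
pattern below follows the 0.8.1-era naming and changes in later versions;
"mytopic" and "broker-host" are placeholders.)

./bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi \
  --object-name 'kafka.server:type="BrokerTopicMetrics",name="mytopic-BytesInPerSec"'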
are deleted by the broker after the specified limit
+ Broker health: memory usage
Regards,
Ravi
On Tue, Jun 24, 2014 at 11:11 AM, Neha Narkhede
wrote:
> What kind of broker metrics are you trying to push to this centralized
> logging framework?
>
> Thanks,
> Neha
> On Jun 23, 2014 8:
the JMS
> protocol and hence provide all sorts of hooks and plugins on the brokers at
> the cost of performance.
>
> Could you elaborate more on your use case? There is probably another way to
> model your application using Kafka.
>
> Thanks,
> Neha
>
>
> On S
I want to push some vital broker stats into a different logging system.
Can I read the broker-specific data from ZooKeeper?
--
*Regards,*
*Ravi*
How do I intercept Kafka broker operations so that features such as
security, logging, etc. can be implemented as a pluggable filter? For
example, we have a "BrokerFilter" class in ActiveMQ. Is there anything
similar in Kafka?
--
*Regards,*
*Ravi*