Thanks Steve. So I shouldn't worry about increasing the RAM at this moment.
Also, do you think the system resources are OK to handle the workload?
3 servers, each with a 2-core processor and 8 GB RAM. The retention policy is
set to 168 hours.
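For reference, a quick way to confirm what retention the brokers and topics are actually using; a hedged sketch in which the broker address, topic name, and config file path are placeholders, and older releases take --zookeeper where newer ones take --bootstrap-server:

  # Broker-wide default, set in server.properties, e.g. log.retention.hours=168
  grep -E 'log\.retention\.(hours|ms|bytes)' /etc/kafka/server.properties

  # Per-topic overrides, if any
  kafka-configs.sh --bootstrap-server broker1:9092 \
    --entity-type topics --entity-name my-topic --describe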
Thanks,
Parth Gandhi
DevOps
On Tue, Jun 18, 2019 at 12:
400+ partitions spread
across these servers.
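To see how those 400+ partitions are spread across the three brokers, the topics tool can list leaders and replicas per partition; a rough sketch, with the broker address as a placeholder (older releases take --zookeeper instead of --bootstrap-server):

  # Every topic/partition with its leader and replica brokers
  kafka-topics.sh --bootstrap-server broker1:9092 --describe

  # Rough count of partition leaders per broker id
  kafka-topics.sh --bootstrap-server broker1:9092 --describe \
    | grep -o 'Leader: [0-9]*' | sort | uniq -c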
Thanks,
Parth Gandhi
DevOps
do we
ensure that all servers are running smoothly? Do I need to reassign the
partitions (kafka-reassign-partitions cmd)?
[image: image.png]
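If the partitions do turn out to be unevenly placed, the usual reassignment flow looks roughly like this; a hedged sketch in which topics.json, the broker ids, and the ZooKeeper address are placeholders (newer releases take --bootstrap-server instead of --zookeeper):

  # topics.json names the topics to move, e.g.
  #   {"topics": [{"topic": "my-topic"}], "version": 1}

  # 1. Generate a proposed layout across brokers 1,2,3; copy the
  #    "Proposed partition reassignment configuration" JSON it prints
  #    into proposed.json
  kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --topics-to-move-json-file topics.json \
    --broker-list "1,2,3" --generate

  # 2. Apply the plan
  kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --reassignment-json-file proposed.json --execute

  # 3. Re-run until every partition reports completed
  kafka-reassign-partitions.sh --zookeeper zk1:2181 \
    --reassignment-json-file proposed.json --verify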
Thanks,
Parth Gandhi
DevOps
Parth Gandhi
DevOps
On Thu, Feb 21, 2019 at 1:02 PM Parth Gandhi <
parth.gan...@excellenceinfonet.com> wrote:
> Hi,
>
> We have been running Kafka for quite some time now and have come across
> an issue where the consumers are not reporting to the consumer group. This
> is h
from
the consumer server on port 9092.
[image: image.png]
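A first step for a group that seems to have gone quiet is to ask the brokers what they know about it; a short sketch, with the broker address and group name as placeholders:

  # Members, partition assignments, current offsets and lag for the group
  kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
    --describe --group my-consumer-group

  # All groups the brokers currently know about
  kafka-consumer-groups.sh --bootstrap-server broker1:9092 --list

If the group shows no active members even while the consumers are running, the problem is usually on the consumer side (group.id, bootstrap.servers, or connectivity) rather than on the broker.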
Thanks,
Parth Gandhi
DevOps
Thanks Robin. Can you send a reference to it? Also, can it be used for log
files stored locally (not in a DB)?
Thanks,
Parth Gandhi
DevOps
On Thu, Jan 10, 2019 at 9:17 AM Robin Moffatt wrote:
> You can use kafkacat to examine the timestamp (and other metadata). Here's
> an example of
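For anyone reading the archive, a kafkacat invocation along these lines prints the per-message timestamp and metadata; a hedged sketch, not necessarily the exact example from the reply, with broker and topic names as placeholders:

  # Consume 5 messages and print timestamp, partition, offset, key and value
  kafkacat -b broker1:9092 -t my-topic -C -c 5 \
    -f 'ts: %T  partition: %p  offset: %o  key: %k  value: %s\n'

Note that kafkacat talks to the brokers rather than reading segment files from disk.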
Hi,
Does Kafka record the timestamp of the incoming message in its data log? I
checked one of the partition logs and I can see the messages without any
timestamp. Also, there are a few special characters in the message log. Is
that normal?
Here is a sample log: pastebin.com/hStyCW13
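The on-disk segment is a binary format, which is where the "special characters" come from; the dump-log tool decodes it and shows the stored timestamp of each record, if one was set. A hedged sketch, with the segment path as a placeholder (on releases before 2.0 the same tool is run as kafka-run-class.sh kafka.tools.DumpLogSegments):

  # Decode a log segment and print record metadata plus payloads
  kafka-dump-log.sh --print-data-log \
    --files /var/kafka-logs/my-topic-0/00000000000000000000.log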
Thanks,
Parth
Team,
We want to build a scalable Kafka system for pub/sub messaging and want to
run the consumers (500+) on Docker. We want the system to scale the consumers
up based on the message inflow. However, in Kafka this triggers a rebalance,
and we fear loss of messages.
What is the best practice/way to achieve this?
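One common pattern is to drive the scale-up/scale-down decision from consumer lag and let the orchestrator add or remove containers when it crosses a threshold; a hedged sketch, with the broker address and group name as placeholders and the scaling action itself left to whatever runs the containers. Note that within one group a partition is consumed by at most one consumer, so the partition count is the practical upper bound on useful consumers:

  # Per-partition lag for the group; the LAG column is the scaling signal
  kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
    --describe --group my-consumer-group

  # Total lag as one number for a scaling script
  # (NOTE: the LAG column position differs between Kafka versions; adjust $6)
  kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
    --describe --group my-consumer-group \
    | awk 'NR > 1 && $6 ~ /^[0-9]+$/ { sum += $6 } END { print sum }'

On the message-loss worry: a rebalance by itself does not drop messages; the records stay in the log, and duplicates or gaps come from when offsets are committed relative to processing.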
? I randomly checked a few logs and did not see it. Below is a screen
capture of one of my partitions. What are these NULL values?
[image: image.png]
Thanks,
Parth Gandhi
DevOps
rebalancing will occur automatically?
Thanks,
Parth Gandhi
DevOps
On Thu, Dec 6, 2018 at 4:56 AM Manikumar wrote:
> Hi Parth,
>
> We need to pass JVM system properties to rotate GC logs. These props were
> added in the Kafka 0.11 release.
> For releases before 0.11, we need to add these JV
a Kafka
cluster with 3 brokers and will have to set this up on all three servers. I
am new to Java, and any help on this is highly appreciated.
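For those older brokers, the rotation flags can be passed to the broker JVM by hand, for example via KAFKA_OPTS, which kafka-run-class.sh appends to the java command line; a hedged sketch for a Java 8 JVM, with the log path and sizes as placeholders (the equivalent flags can also be added to the GC options block in bin/kafka-run-class.sh). This would be repeated on each of the three servers:

  # Export before starting the broker, then start it as usual
  export KAFKA_OPTS="-Xloggc:/var/log/kafka/kafkaServer-gc.log \
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 \
    -XX:GCLogFileSize=100M"
  bin/kafka-server-start.sh config/server.properties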
Thanks,
Parth Gandhi
DevOps
Hi Team,
We have implemented Kafka and would like to know if there is a way to
remove Kafka messages stuck in a queue and commit the offset using the Kafka
command line, or does it have to be done using the consumer API?
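If "remove" really means "skip past", the group's committed offsets can be moved from the command line with the reset-offsets option of the consumer-groups tool (available from roughly 0.11/1.0 onward); a hedged sketch with broker, group, and topic names as placeholders. The group has to be stopped while the reset runs:

  # Preview the new offsets without changing anything
  kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
    --group my-consumer-group --topic my-topic \
    --reset-offsets --to-latest --dry-run

  # Apply it; the group resumes from the log end and skips the stuck messages
  kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
    --group my-consumer-group --topic my-topic \
    --reset-offsets --to-latest --execute

If the records themselves have to go, kafka-delete-records.sh can truncate a partition up to a given offset; otherwise they simply age out under the retention policy.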
Thanks,
Parth Gandhi
DevOps