Hi
What would be an ideal block size for a disk that has kafka logs and why?
I’m tempted to use a very high value like 64k to enhance sequential reads,
but I have no idea whether it will actually help.
(I’m also using XFS as my filesystem.)
Thanks for the help!
Stephane
Hello Jon,
It is hard to tell, since I cannot see how your Aggregate() function is
implemented.
Note that the deserializer of transactionSerde is used in both `aggregate`
and `KStreamBuilder.stream`, while the serializer of transactionSerde is
only used in `aggregate`, so if you suspec
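
A minimal sketch of the serde usage described above, against the 0.10.x
KStreamBuilder API; the topic name, store name, and the String stand-in for
the custom transactionSerde are placeholders, not taken from this thread:

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class AggregateSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        // Stand-in for the custom transactionSerde discussed in the thread.
        final Serde<String> transactionSerde = Serdes.String();

        // builder.stream() only ever deserializes incoming records, so only
        // transactionSerde's deserializer runs here.
        KStream<String, String> transactions =
                builder.stream(Serdes.String(), transactionSerde, "transactions");

        // The aggregation may repartition its input; that internal topic is
        // written with transactionSerde's serializer and read back with its
        // deserializer, so both halves are exercised here.
        KTable<String, Long> counts = transactions
                .groupByKey(Serdes.String(), transactionSerde)
                .aggregate(
                        () -> 0L,                       // initializer
                        (key, txn, total) -> total + 1, // aggregator
                        Serdes.Long(),                  // serde for the aggregate value
                        "txn-counts");
    }
}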
Hi Ofir,
The TimestampExtractor config expects only the class name; an instance
will be created via reflection, and users cannot pass an instance in
directly via the config.
One way to fix this is to let TimestampExtractor extend Configurable, so
that the created instance will auto trigger a us
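
A minimal sketch of what the reflection path implies, against the 0.10.1
Streams API; the extractor class and its wall-clock fallback are invented
for the example:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Must have a public no-arg constructor: Streams instantiates the class
// named in the config via reflection, so no instance can be passed in.
public class WallClockFallbackExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record) {
        long ts = record.timestamp();
        // Fall back to wall-clock time when the record has no valid timestamp.
        return ts >= 0 ? ts : System.currentTimeMillis();
    }

    // Usage: the config takes the class name, not an instance.
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                WallClockFallbackExtractor.class.getName());
    }
}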
Hello,
Using Kafka 0.8.2.1 with the reactive-kafka wrapper around the Java client,
we observe that some consumers in a group get stuck every once in a while.
One characteristic of these stuck consumers is that (once restarted) they
are attached to partitions with expired offsets, i.e. I see the
fol
Thanks!! Does the upgrade help?
On 29 December 2016 at 21:38, Tony Liu wrote:
> hi,
>
> you are hitting this issue:
> https://issues.apache.org/jira/browse/KAFKA-4477
>
> On Wed, Dec 28, 2016 at 3:43 PM, Alessandro De Maria <
> alessandro.dema...@gmail.com> wrote:
>
> > Hello,
> >
> > I would
hi,
you are hitting this issue:
https://issues.apache.org/jira/browse/KAFKA-4477
On Wed, Dec 28, 2016 at 3:43 PM, Alessandro De Maria <
alessandro.dema...@gmail.com> wrote:
> Hello,
>
> I would like to get some help/advice on some issues I am having with my
> kafka cluster.
>
> I am running kaf
Hi,
I just found a reported issue,
https://issues.apache.org/jira/browse/KAFKA-4477; hopefully it's useful
for you.
On Thu, Dec 29, 2016 at 12:08 PM, Tony Liu wrote:
> Hi Thomas or Anyone,
>
> I also encountered the same issue you reported; the only workaround
> is to restart the broke
Hi Thomas or Anyone,
I also encountered the same issue you reported; the only workaround is to
restart the broken node. I have not found the root cause yet, so I wonder
whether you have made any progress on solving it?
i.e. at the beginning, I thought this iss
The best you can do to ensure ordering today is to set:
acks = all
retries = Integer.MAX_VALUE
max.block.ms = Long.MAX_VALUE
max.in.flight.requests.per.connection = 1
This ensures there's only one outstanding produce request (batch of
messages) at a time, and it will be retried indefinitely on retria
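
As a concrete illustration, those settings translate into producer
properties like the following (broker address and serializers are
placeholders, not from the original post):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderedProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // The four settings from the post above.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, Long.MAX_VALUE);
        // One in-flight request per connection, so a retried batch cannot be
        // reordered behind a newer one.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}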
Angular is a frontend tool and Kafka is almost always used on the backend
behind some application layer, so integration would be pretty uncommon. If
you did want to write to or read from Kafka directly from your frontend app,
you'd probably want to use a REST proxy (e.g. http://docs.confluent.io/3.1.
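
A rough sketch of producing through the Confluent REST Proxy from any HTTP
client; the host, port, topic, and the v2 content type are assumptions here,
so check them against your proxy version:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestProxyProduceSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical proxy address and topic name.
        URL url = new URL("http://localhost:8082/topics/test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        // v2 embedded-JSON format; older proxies use a v1 content type.
        conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v2+json");
        conn.setDoOutput(true);

        String body = "{\"records\":[{\"value\":{\"greeting\":\"hello\"}}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}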
Hi Felipe,
It looks like the fetch response may, in some cases, contain a null
ByteBuffer for a partition instead of the expected empty byte buffer. This
code changed a lot in trunk so it may have already been fixed. Any chance
you could test trunk to see if the problem persists? In any case, plea
Hi Dongjin,
1. I'm not familiar with `KafkaCSVMetricsReporter`, but it seems to delete
and recreate the csv dir before starting `CsvReporter`, which then creates
the csv files. Assuming the permissions are correct, I'm not sure why it
would fail to create the files. It may be worth filing a JIRA (and p
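
For reference, the broker-side settings that activate this reporter look
roughly like this (property names recalled from the 0.x docs, so verify
against your version):

kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
kafka.csv.metrics.reporter.enabled=true
kafka.csv.metrics.dir=/tmp/kafka_metrics
kafka.metrics.polling.interval.secs=10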