Hi Guozhang,
Thanks for the reply.
By taking a lot of time I meant that I see a log message `Restoring state
from changelog topics`, followed by just some Kafka consumer logs like
`Discovered coordinator`. Looking at this, I assumed that the Stream
threads are waiting for the state stores to be restored.
Can you share more info on this?
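For reference, from Kafka 1.0 onward restoration progress can be made
visible by attaching a StateRestoreListener before start(); a minimal
sketch (the class name is mine):

```java
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateRestoreListener;

// Attach with streams.setGlobalStateRestoreListener(new LoggingRestoreListener())
// before streams.start() to see how far each store's restore has progressed.
public class LoggingRestoreListener implements StateRestoreListener {
    @Override
    public void onRestoreStart(TopicPartition tp, String store,
                               long startOffset, long endOffset) {
        System.out.printf("Restoring %s %s: offsets %d..%d%n",
                store, tp, startOffset, endOffset);
    }

    @Override
    public void onBatchRestored(TopicPartition tp, String store,
                                long batchEndOffset, long numRestored) {
        System.out.printf("  %s %s: +%d records, now at offset %d%n",
                store, tp, numRestored, batchEndOffset);
    }

    @Override
    public void onRestoreEnd(TopicPartition tp, String store, long totalRestored) {
        System.out.printf("Finished restoring %s %s: %d records%n",
                store, tp, totalRestored);
    }
}
```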
As per the RocksDB doc (
https://github.com/facebook/rocksdb/wiki/basic-operations):
"Within a single process, the same rocksdb::DB object may be safely shared
by multiple concurrent threads. I.e., different threads may write into or
fetch iterators or call Get on the same database without any external
synchronization."
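A small illustration of that guarantee through the Java binding (a
sketch, assuming a recent rocksdbjni artifact; the path and key names
are placeholders):

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class SharedDbDemo {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, "/tmp/shared-db")) {
            // One DB handle, two threads, no external locking: RocksDB
            // performs the required synchronization internally.
            Thread writer = new Thread(() -> {
                try {
                    for (int i = 0; i < 1000; i++) {
                        db.put(("k" + i).getBytes(), ("v" + i).getBytes());
                    }
                } catch (RocksDBException e) {
                    throw new RuntimeException(e);
                }
            });
            Thread reader = new Thread(() -> {
                try {
                    for (int i = 0; i < 1000; i++) {
                        db.get(("k" + i).getBytes()); // null until written
                    }
                } catch (RocksDBException e) {
                    throw new RuntimeException(e);
                }
            });
            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }
}
```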
Hi Anish,
1. The committed offsets are for the messages that have been through the
entire pipeline. Plus, when we are committing, we make sure all state store
caches are flushed, so there should be no messages that are "in the middle
of the topology". If there is a failure before the commit, then those
messages are simply re-processed from the last committed offsets
(at-least-once semantics).
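For context, that commit-and-flush cycle is driven by commit.interval.ms;
a minimal sketch of tightening it (the application id and broker address
are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class CommitConfig {
    public static Properties props() {
        Properties p = new Properties();
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");          // placeholder
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // placeholder
        // Commit (and flush the state store caches) every 10s instead of
        // the 30s at-least-once default, shrinking the window of records
        // that get re-processed after a failure.
        p.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10_000);
        return p;
    }
}
```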
Hello,
We have a short-term need to mirror a topic from a non-secure
Kafka 0.8.2.1 cluster to a secure Kafka 0.10.2.0 cluster. Just wondering
if anyone has done this successfully. My initial attempt failed with
auth/Kerberos errors. Please advise.
Thanks,
Rob
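For what it's worth, the security settings would belong on the producer
side, since the destination cluster is the secure one; a sketch of a
MirrorMaker producer.config, assuming SASL_PLAINTEXT with Kerberos (host
names are placeholders):

```
# producer.config for kafka-mirror-maker.sh, pointing at the secure
# 0.10.2.0 destination cluster
bootstrap.servers=secure-broker:9092
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
```

The Kerberos principal and keytab go in a JAAS file passed via
-Djava.security.auth.login.config (e.g. through KAFKA_OPTS). One caution:
Kafka clients are not forward-compatible with older brokers, so a
0.10.2-era consumer cannot fetch from a 0.8.2.1 broker; the consuming
side of the mirror has to stay on 0.8-era client code, which is likely
where the mixed-version pain comes from.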
If you've noticed the default values of the above configurations, they are
Long.MAX_VALUE. This is set to discourage users from editing /
re-configuring them. These configurations control when messages are
flushed from the page cache to the disk (fsync). Kafka delegates the task
of flushing messages to disk to the underlying operating system, and
relies on replication rather than fsync for durability.
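Concretely, the shipped server.properties only carries commented-out
examples of these settings; uncommenting them would force periodic fsyncs:

```
# Shipped server.properties leaves both settings commented out, so the
# effective defaults (Long.MAX_VALUE) apply and Kafka never forces fsync:
#log.flush.interval.messages=10000
#log.flush.interval.ms=1000
```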
During my Kafka installation, I ran into some questions about a few of the
parameter configurations.
I see that log.flush.interval.messages and log.flush.interval.ms are
commented out in the default Kafka server.properties file. I read two
conflicting statements about these parameters. In one place, I read...
Sameer, the log you attached doesn't contain the logs *before* the
exception happened.
On Tue, 15 Aug 2017 at 06:13 Sameer Kumar wrote:
> I have added an attachment containing the complete trace in my initial mail.
>
> On Mon, Aug 14, 2017 at 9:47 PM, Damian Guy wrote:
>
> > Do you have the logs leading up to the exception?
Hello
I've been asked to upgrade Kafka (kafka_2.10-0.8.2.0) and ZooKeeper
(3.4.8). Are Kafka 0.11 and ZooKeeper 3.4.10 compatible? Are there any
gotchas?
Thanks
Carmen
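For the Kafka side, the documented rolling-upgrade path pins the protocol
and message-format versions first; a sketch of the relevant
server.properties lines (per the 0.11 upgrade notes):

```
# Step 1: before swapping in the 0.11 binaries, pin on every broker:
inter.broker.protocol.version=0.8.2
log.message.format.version=0.8.2

# Step 2: once ALL brokers run 0.11, bump the protocol and do one more
# rolling restart:
#inter.broker.protocol.version=0.11.0

# Step 3: only after all clients are upgraded, bump the message format:
#log.message.format.version=0.11.0
```

As far as I know, Kafka 0.11 itself ships with ZooKeeper 3.4.10, so that
pairing should be fine, but do check the upgrade notes for your exact
versions.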
Hi,
We use Kafka with Spark Streaming, where we consume the messages every few
minutes.
Can we also use Kafka to store messages for a longer duration, on the
order of hours? The use case is a Spark batch job that runs every few
hours and consumes messages from a Kafka topic.
Is it advisable to use Kafka for such a use case?
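Retention is a per-topic setting, so keeping hours (or days) of data is a
normal configuration rather than a special use of Kafka; a sketch using
the 0.11 AdminClient (the broker address and topic name are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ExtendRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Keep 48h of data so a batch job running every few hours
            // always finds everything since its last run.
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "events"); // placeholder
            ConfigEntry retention = new ConfigEntry("retention.ms",
                    Long.toString(TimeUnit.HOURS.toMillis(48)));
            admin.alterConfigs(Collections.singletonMap(topic,
                    new Config(Collections.singletonList(retention)))).all().get();
        }
    }
}
```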
Hi Guozhang,
Thanks for your swift feedback.
Using your "Pipe App" example might actually be a neat work-around.
I'll see if I can work out a simple prototype for this on our platform.
The only downside is that it will double the message load on the
platform (from source-topics to processing-topics).
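For the record, such a pipe topology is only a few lines; a sketch with
the 1.0+ StreamsBuilder API (topic names are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class PipeApp {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Copy every record from the source topic into a dedicated
        // processing topic; the downside noted above is that each record
        // now exists on both topics.
        builder.stream("source-topic").to("processing-topic"); // placeholders

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pipe-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}
```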
Got some info on this; I think this is mostly controlled by acks.
The default value of "1" requires an explicit acknowledgement only from
the partition leader that the write succeeded. The strongest guarantee
that Kafka provides is with "acks=all", which guarantees that not only did
the partition leader acknowledge the write, but every in-sync replica did
as well.
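A minimal producer sketch with the stronger setting (the broker address
and topic are placeholders); note that acks=all is only meaningful
together with min.insync.replicas on the topic or broker:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "all": the leader waits until the full in-sync replica set has
        // acknowledged the write before acking the producer.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value")); // placeholder topic
        }
    }
}
```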