Hello
I am quite new to Kafka and come from a JMS/messaging background. Reading
through the documentation, I gather that with partitions and consumer groups,
Kafka achieves both P2P and pub/sub semantics. I have a few questions on
partitions, though; I was wondering if someone could kindly point me in the right direction.
Hi Josh,
1. I don't know for sure (haven't seen the code that does it), but it's
probably the most "even" split possible for a given number of brokers and
partitions. So for 8 partitions and 3 brokers it would be [3, 3, 2].
2. See "num.partitions" in the broker config. BTW, only a producer can create
topics.
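Just to illustrate the arithmetic of that "most even" split (a rough sketch, not
Kafka's actual assignment code): spreading P partitions round-robin over B
brokers leaves the per-broker counts differing by at most one.

  def split(partitions: Int, brokers: Int): Seq[Int] =
    Seq.tabulate(brokers) { b =>
      // each broker gets the base share, and the first (partitions % brokers)
      // brokers get one extra
      partitions / brokers + (if (b < partitions % brokers) 1 else 0)
    }

  split(8, 3)  // Seq(3, 3, 2)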
Thanks, Matthias!
On Thu, Oct 5, 2017 at 12:16 AM, Matthias J. Sax
wrote:
> That is hard to do...
>
> Just deleting the topic might result in data loss, if not all data was
> processed by the application yet (note, that repartitioning topics are
> also kind of a buffer between subtopologies).
>
I think you can do this now by using a custom partitioner, no?
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/producer/Partitioner.html
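A minimal sketch of what implementing it could look like (the class name and the
"tenant:id" key scheme are made up for illustration; Utils.murmur2/toPositive is
the same hashing the default partitioner uses):

  import java.util.{Map => JMap}

  import org.apache.kafka.clients.producer.Partitioner
  import org.apache.kafka.common.Cluster
  import org.apache.kafka.common.utils.Utils

  // Route all records for one tenant to the same partition.
  class TenantPartitioner extends Partitioner {
    override def configure(configs: JMap[String, _]): Unit = ()

    override def partition(topic: String, key: Any, keyBytes: Array[Byte],
                           value: Any, valueBytes: Array[Byte],
                           cluster: Cluster): Int = {
      val numPartitions = cluster.partitionsForTopic(topic).size
      val tenant = key.asInstanceOf[String].split(":", 2)(0)
      Utils.toPositive(Utils.murmur2(tenant.getBytes("UTF-8"))) % numPartitions
    }

    override def close(): Unit = ()
  }

You register it on the producer with the partitioner.class config.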
-Jay
On Mon, Oct 2, 2017 at 6:29 AM Michal Michalski
wrote:
> Hi,
>
> TL;DR: I'd love to be able to make log compaction more "granular" than j
Not sure if I have missed it in the Streams developer docs, but is there a
mechanism to age data off the state store, similar to a key-based TTL? It
looks like RocksDB has TTL built in, so would I pass that via some store
configuration?
Kris
Hi all,
we were testing Kafka cluster outages by randomly crashing broker nodes (1
of 3, for instance) while still keeping a majority of replicas available.
From time to time our Kafka Streams app crashes with an exception:
[ERROR] [StreamThread-1]
[org.apache.kafka.streams.processor.internals.StreamThread
Looking at streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java:
    throw new UnsupportedOperationException("Change log is not supported for store " + this.name + " since it is TTL based.");
    // TODO: support TTL with change log?
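The store configuration hook that does exist, rocksdb.config.setter, only exposes
the RocksDB Options, so it can tune the store but cannot switch it to RocksDB's
TtlDB. A sketch (class name made up):

  import java.util.{Map => JMap}

  import org.apache.kafka.streams.StreamsConfig
  import org.apache.kafka.streams.state.RocksDBConfigSetter
  import org.rocksdb.Options

  class CustomRocksDBConfig extends RocksDBConfigSetter {
    override def setConfig(storeName: String, options: Options,
                           configs: JMap[String, AnyRef]): Unit = {
      // Tunes the underlying RocksDB instance; no TTL switch is available here.
      options.setMaxWriteBufferNumber(3)
    }
  }

  // registered via:
  // props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, classOf[CustomRocksDBConfig])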
Hi
Have you set replication.factor and retries properties?
BR
On Thu, Oct 5, 2017 at 18:45, Dmitriy Vsekhvalnov wrote:
> Hi all,
>
> we were testing Kafka cluster outages by randomly crashing broker nodes (1
> of 3, for instance) while still keeping a majority of replicas available.
>
> From time to time
Past thread related to TTL:
http://search-hadoop.com/m/Kafka/uyzND1RLg4VOJ84U?subj=Re+Streams+TTLCacheStore
On Thu, Oct 5, 2017 at 9:54 AM, Ted Yu wrote:
> Looking at
> streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java
> :
>
> throw new Unsupporte
replication.factor is set to match the source topics (3 in our case).
What do you mean by retries? I don't see a retries property in the
StreamsConfig class.
On Thu, Oct 5, 2017 at 7:55 PM, Stas Chizhov wrote:
> Hi
>
> Have you set replication.factor and retries properties?
>
> BR
>
> On Thu, Oct 5, 2017 at 1
Thank you, Michal.
That answers all my questions, many thanks.
Josh
On Thu, Oct 5, 2017 at 1:21 PM, Michal Michalski <
michal.michal...@zalando.ie> wrote:
> Hi Josh,
>
> 1. I don't know for sure (haven't seen the code that does it), but it's
> probably the most "even" split possible for a given n
It is a producer config:
https://docs.confluent.io/current/streams/developer-guide.html#non-streams-configuration-parameters
2017-10-05 19:12 GMT+02:00 Dmitriy Vsekhvalnov :
> replication.factor set to match source topics. (3 in our case).
>
> What do you mean by retries? I don't see a retries prop
I see, but producer.retries is set to 10 by default.
What value would you recommend to survive random broker crashes?
On Thu, Oct 5, 2017 at 8:24 PM, Stas Chizhov wrote:
> It is a producer config:
> https://docs.confluent.io/current/streams/developer-
> guide.html#non-streams-configuration-paramet
I would set it to Integer.MAX_VALUE
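Something along these lines (application id and broker address are placeholders):

  import java.util.Properties

  import org.apache.kafka.clients.producer.ProducerConfig
  import org.apache.kafka.streams.StreamsConfig

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092")
  // internal changelog/repartition topics get 3 replicas, like the source topics
  props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, "3")
  // passed through to the embedded producer: keep retrying through broker crashes
  props.put(ProducerConfig.RETRIES_CONFIG, Int.MaxValue.toString)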
2017-10-05 19:29 GMT+02:00 Dmitriy Vsekhvalnov :
> I see, but producer.retries is set to 10 by default.
>
> What value would you recommend to survive random broker crashes?
>
> On Thu, Oct 5, 2017 at 8:24 PM, Stas Chizhov wrote:
>
> > It is a producer config:
> >
Have you had a chance to help me with this issue?
Thanks
Saravanan
From: "Kannappan, Saravanan (Contractor)"
Date: Thursday, September 21, 2017 at 7:11 PM
To: "users@kafka.apache.org"
Subject: Kafka Mirror Maker Automation
Team,
Please help me set up the Kafka MirrorMaker automation f
OK, we can try that.
Are there any other settings we should try?
On Thu, Oct 5, 2017 at 20:42 Stas Chizhov wrote:
> I would set it to Integer.MAX_VALUE
>
> 2017-10-05 19:29 GMT+02:00 Dmitriy Vsekhvalnov :
>
> > I see, but producer.retries is set to 10 by default.
> >
> > What value would you recommend to survive
Hi,
The documentation on MirrorMaker is available on the website:
https://kafka.apache.org/documentation/#basic_ops_mirror_maker
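For reference, the basic invocation from that page looks like this (the two
.properties files and the topic list are whatever you choose):

  bin/kafka-mirror-maker.sh --consumer.config consumer.properties \
      --producer.config producer.properties --whitelist my-topic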
Did you have a specific question or are you having a specific problem? If
so, do post the details.
Cheers,
Tom
On 5 October 2017 at 19:03, Kannappan, Saravanan (Cont
Typically you don't want replication throttling enabled all the time, because if
a broker drops out of the ISR for whatever reason, catch-up will be impeded.
Having said that, this may not be an issue if the throttle is quite mild
and your max write rate is well below the network limit, but it is
saf
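For reference, a throttle is applied (and must later be removed) with
kafka-configs.sh, along these lines (broker id, ZooKeeper address and the
10 MB/s rate are placeholders):

  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type brokers --entity-name 0 \
      --add-config 'leader.replication.throttled.rate=10000000,follower.replication.throttled.rate=10000000'

  # remove it again once the cluster has caught up
  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type brokers --entity-name 0 \
      --delete-config 'leader.replication.throttled.rate,follower.replication.throttled.rate'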
Michal,
You mentioned topics are only dynamically created with producers. Does that
mean if a consumer starts on a non-existent topic, it throws an error?
Kind regards
Meeraj
On Thu, Oct 5, 2017 at 9:20 PM, Josh Maidana wrote:
> Thank you, Michal.
>
> That answers all my questions, many thanks
Hello
We are integrating Kafka with an Akka system written in Scala. Is there a
Scala API available for Kafka? Is the best option to use Akka Streams Kafka?
--
Kind regards
*Josh Meraj Maidana*
Hello,
Apologies if this is not the right forum for this question. With the
Akka consumer code below, does it start the number of consumers in
parallel, as specified by the argument of the mapAsync call?
def consume() = {
val consumer = Consumer.committableSource(consumerSettings,
Subscript
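For context, a rough sketch of the full pattern that snippet is following
(consumerSettings as above; the topic name and process function are
placeholders). Note that committableSource is a single Kafka consumer:
mapAsync(4) does not start four consumers, it only allows up to four process()
futures to run concurrently within that one stream, with results emitted in order.

  import scala.concurrent.Future

  import akka.actor.ActorSystem
  import akka.kafka.Subscriptions
  import akka.kafka.scaladsl.Consumer
  import akka.stream.ActorMaterializer
  import akka.stream.scaladsl.Sink

  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  import system.dispatcher

  def process(value: Array[Byte]): Future[Unit] = Future.successful(()) // placeholder

  Consumer.committableSource(consumerSettings, Subscriptions.topics("my-topic"))
    .mapAsync(4) { msg =>
      process(msg.record.value).map(_ => msg.committableOffset)
    }
    .mapAsync(1)(_.commitScaladsl())
    .runWith(Sink.ignore)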