Hello Experts, We want to distribute data across partitions in a Kafka
cluster.
Option 1: Use a null partition key, which can distribute data across
partitions.
Option 2: Choose a key (random UUID?), which can help distribute data
70-80%.
I have seen the below side effect on the Confluence page about se
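As a rough illustration of Option 2: hashing random UUID keys spreads records quite evenly across partitions. The sketch below is only an approximation of the producer's behaviour (the Java client hashes keys with murmur2; MD5 stands in for it here just to show the spread):

```python
import hashlib
import uuid

NUM_PARTITIONS = 6

def partition_for(key: bytes, num_partitions: int) -> int:
    # Hash the key bytes and take the result modulo the partition count.
    # The real Java producer uses murmur2; MD5 is a stand-in here.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

counts = [0] * NUM_PARTITIONS
for _ in range(10_000):
    key = str(uuid.uuid4()).encode("utf-8")
    counts[partition_for(key, NUM_PARTITIONS)] += 1

# Every partition receives a roughly equal share of the 10,000 records.
print(counts)
```

With a well-mixed hash the spread is close to uniform, which is why a random-UUID key gets most of the way to even distribution.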
Hello Senthil,
In our case we use NULL as the message key to achieve even distribution in
the producer.
With that we were able to achieve very even distribution.
Our Kafka client version is 0.10.1.0 and Kafka broker version is 1.1
Thanks,
Gaurav
On Wed, Aug 29, 2018 at 9:15 AM, SenthilKumar K
Thanks Gaurav. Did you notice the side effect mentioned on this page:
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyisdatanotevenlydistributedamongpartitionswhenapartitioningkeyisnotspecified
?
--Senthil
On Wed, Aug 29, 2018 at 2:02 PM Gaurav Bajaj wrote:
> Hello Senthil,
>
> In o
Hi, I would like to try a setup of SASL/OAUTHBEARER where:
1. a Kafka client obtains an OAUTHBEARER token from an authorization server
2. the Kafka client sends the OAUTHBEARER token to Kafka
3. Kafka validates the OAUTHBEARER token and authenticates the user
Is anybody aware if thi
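That flow matches what KIP-255 added in Kafka 2.0: the default (unsecured) OAUTHBEARER implementation is replaced with custom login and validator callback handlers on the client and broker sides. A minimal client-side configuration sketch — the `com.example` class name is a placeholder for your own handler that fetches the token from the authorization server:

```properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
# Custom handler that obtains the token from your authorization server
sasl.login.callback.handler.class=com.example.OAuthTokenLoginCallbackHandler
```

On the broker, a corresponding `listener.name.<listener>.oauthbearer.sasl.server.callback.handler.class` would point at your token-validating handler.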
Hi,
I’m using the Kafka lib with version 2.11_1.0.1.
I use the KafkaServer.scala class to programmatically create a Kafka instance
and connect it to a programmatically created Zookeeper instance. It has the
following properties:
"host.name", "127.0.0.1"
"port", "0"
"zookeeper.connect", "127.0.0.1
Why can't we override the DefaultPartitioner and simply override the
partition() method, such that it redistributes to all partitions in
round-robin fashion?
A round-robin partitioner and the StickyAssignor (consumer side) should work
nicely for any publish-subscribe system.
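The core of such an override is just a counter taken modulo the partition count. A real implementation would implement Kafka's Partitioner interface in Java and read the partition count from cluster metadata; this sketch shows only the assignment logic:

```python
import itertools
import threading

class RoundRobinPartitioner:
    """Counter-modulo core of a round-robin partition() override.

    Sketch only: a real partitioner implements Kafka's Partitioner
    interface and gets the partition count from cluster metadata.
    """

    def __init__(self) -> None:
        self._counter = itertools.count()
        self._lock = threading.Lock()  # mirrors AtomicInteger in the Java client

    def partition(self, num_partitions: int) -> int:
        # Ignore the message key entirely and cycle through partitions.
        with self._lock:
            return next(self._counter) % num_partitions

p = RoundRobinPartitioner()
assignments = [p.partition(3) for _ in range(6)]
print(assignments)  # [0, 1, 2, 0, 1, 2]
```

Because the key is ignored, this gives perfectly even distribution, at the cost of losing key-based ordering guarantees.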
On Wed, 29 Aug 2018 at 09:39,
Hello,
Could I get permission to create KIP on cwiki? my username(email) is
manme...@gmail.com
Thanks,
Can you extend auto.commit.interval.ms to 5000 and retry? Also, why
is your port set to 0?
Regards,
On Wed, 29 Aug 2018 at 14:25, Cristian Petroaca
wrote:
> Hi,
>
> I’m using the Kafka lib with version 2.11_1.0.1.
> I use the KafkaServer.scala class to programmatically create a Kafka
> in
Port = 0 means Kafka will start listening on a random port which I need.
I tried it with 5000 but I get the same result.
On 29/08/2018, 16:46, "M. Manna" wrote:
Can you extend auto.commit.interval.ms to 5000 and retry? Also, why
is your port set to 0?
Regards,
On
So have you tried binding it to 9092 rather than randomising it, and see if
that makes any difference?
On Wed, 29 Aug 2018 at 15:41, Cristian Petroaca
wrote:
> Port = 0 means Kafka will start listening on a random port which I need.
> I tried it with 5000 but I get the same result.
>
>
> On 29/0
Tried it, same problem with 9092.
By the way, the same consumer works with a remote 1.0.1 Kafka broker with the
same config.
There don't seem to be any networking issues with the embedded one, since the
consumer successfully sends FindCoordinator messages to it and the broker
responds with Coo
Does the topic exist in both your programmatic broker and remote broker?
Also, are the topic settings the same for partitions and replication factor?
GROUP_COORDINATOR_NOT_AVAILABLE is enforced as of 0.11.x if the
auto-created topic partition/replication-factor setup doesn't match
with server's confi
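For a single embedded broker, the internal offsets topic is a common culprit: its default replication factor of 3 cannot be satisfied by one broker, so the group coordinator never becomes available. A sketch of the broker settings worth checking (values assume a one-broker test setup):

```properties
# Internal consumer-offsets topic must be creatable on a lone broker
offsets.topic.replication.factor=1
# Transactions state topic (0.11+) has the same constraint
transaction.state.log.replication.factor=1
```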
Hello,
I saw your username is already granted on wiki space.
Guozhang
On Wed, Aug 29, 2018 at 6:38 AM, M. Manna wrote:
> Hello,
>
> Could I get permission to create KIP on cwiki? my username(email) is
> manme...@gmail.com
>
> Thanks,
>
--
-- Guozhang
Satarupa,
In my experience, Kafka has a 10k partition limit per topic. I don't think
you are going to be able to get 1 million partitions to work on a single
topic. A consumer would need to subscribe to listen to a Kafka topic. You
will probably need to have multiple consumer groups or multiple top
Satarupa, it sounds like you are conflating some concepts here. Some
clarifying points:
- Only one consumer in a consumer group receives any given record from a
topic. So in your scenario of 1 million consumers, they could not be
members of the same group. You'd need 1 million consumer "groups" to
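The "one consumer per record within a group" rule follows from partition assignment: each partition is owned by exactly one member of the group. A simplified range-style assignor illustrates this (the real assignors also handle multiple topics and rebalancing):

```python
def assign_partitions(partitions, consumers):
    """Range-style assignment: every partition goes to exactly one consumer."""
    assignment = {c: [] for c in consumers}
    per_consumer, extra = divmod(len(partitions), len(consumers))
    start = 0
    for i, consumer in enumerate(sorted(consumers)):
        # Earlier consumers absorb the remainder, as the range assignor does.
        count = per_consumer + (1 if i < extra else 0)
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment

result = assign_partitions(list(range(6)), ["c1", "c2", "c3"])
print(result)  # {'c1': [0, 1], 'c2': [2, 3], 'c3': [4, 5]}
```

Since no partition appears under two consumers, a record is delivered to only one member of the group; broadcasting to 1 million consumers therefore requires 1 million groups, not one big group.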
Thanks, just realised that. Will work on it.
On Wed, 29 Aug 2018 at 17:23, Guozhang Wang wrote:
> Hello,
>
> I saw your username is already granted on wiki space.
>
>
> Guozhang
>
> On Wed, Aug 29, 2018 at 6:38 AM, M. Manna wrote:
>
> > Hello,
> >
> > Could I get permission to create KIP on cwi
Hi Ryanne,
Thank you so much for detailed explanation.
Here are a couple more asks:
1) Here the consumers are not short-lived, but we want to listen to the message and
become idle. Is there a way to notify the Kafka server that the message has reached
the consumer? So that post which server does not pr
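Kafka's closest equivalent to a delivery acknowledgement is the consumer committing its offset: the broker then treats everything up to that offset as consumed by that group. A consumer-config sketch for switching from automatic to explicit commits (the group id is an example value; the application then calls commitSync() after processing each record):

```properties
# Disable background auto-commit; commit explicitly after processing
enable.auto.commit=false
# Group whose consumed position the commits record
group.id=my-consumer-group
```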
Hi all,
I'm testing a case where the producer has security settings that don't match the
broker: the Kafka broker is set up with SASL_SSL and sasl.mechanism = GSSAPI, but
the producer client is set to PLAINTEXT. Kafka is v1.0. Instead of throwing an
exception when I call send(), every send() returns with about a 1 min dela
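The roughly one-minute delay lines up with the producer's max.block.ms, which defaults to 60000 ms: send() blocks waiting for topic metadata that never arrives because the security handshake fails at the transport level without surfacing an error. Lowering it makes the misconfiguration fail fast (the value below is just an example):

```properties
# Producer-side: give up after 5 s instead of blocking ~60 s for metadata
max.block.ms=5000
```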
Satarupa,
Glad I could help, and thanks for the additional context. It doesn't sound
like this use-case requires real-time notification. Why not just poll a web
service periodically, say every 5 minutes?
If you need something more real-time, I'd suggest using a more traditional
publish-subscribe
Hi Ryanne,
Yes, using a WebSocket/REST API call is another way to achieve this, as you
mentioned.
But we wanted to check if Kafka could also be considered here.
We wanted a PUSH model where, the moment a message is published, the consumer
listens for it, and as Kafka can do load balancing automatically, w