Hi,
I have a use case where I want to schedule processing of events in the
future. I am not really sure if this is a proper use of a stream processing
application, but I was looking at KTable and the Kafka Streams API to see
if this was possible.
So far the pattern I have is:
FEED -> changelog stream
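Picking up the pattern above: one possible shape for "process this event later" in Kafka Streams is the Processor API with a state store that a wall-clock punctuator drains. This is only a sketch under stated assumptions: the store name "pending-events", the "<dueAtEpochMs>|<payload>" value format, the output key "due", and the class itself are illustrative, not from the thread, and the store still has to be registered with the topology (with a Long key serde).

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class DeferredEventProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;
    private KeyValueStore<Long, String> pending; // due time (epoch ms) -> payload

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context;
        this.pending = context.getStateStore("pending-events");
        // Every 30s of wall-clock time, emit everything whose due time has passed.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, now -> {
            List<KeyValue<Long, String>> due = new ArrayList<>();
            try (KeyValueIterator<Long, String> it = pending.range(0L, now)) {
                it.forEachRemaining(due::add);
            }
            for (KeyValue<Long, String> kv : due) {
                context.forward(new Record<>("due", kv.value, now)); // output key is illustrative
                pending.delete(kv.key);
            }
        });
    }

    @Override
    public void process(Record<String, String> record) {
        // Assumes values look like "<dueAtEpochMs>|<payload>"; a real design would
        // also handle due-time collisions (two events with the same timestamp).
        String value = record.value();
        long dueAt = Long.parseLong(value.substring(0, value.indexOf('|')));
        pending.put(dueAt, value.substring(value.indexOf('|') + 1));
    }
}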
Hi,
I have added the following configuration properties to a standalone Connect
server that is running a sink connector, but they are not getting picked up
by Connect:
max.partition.fetch.bytes=8192
max.message.bytes=8192
consumer.max.partition.fetch.bytes=8192
consumer.max.message.bytes=8192
In
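A hedged note on the snippet above: a likely cause is where the settings live. In Kafka Connect, consumer overrides for sink connectors are read from the worker properties file (e.g. connect-standalone.properties) with the consumer. prefix, not from the connector's own properties file; and max.message.bytes is a topic/broker-level setting, so the consumer ignores it on the client side. Assuming a standalone worker, the one line that should take effect is:
consumer.max.partition.fetch.bytes=8192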
Hi Shri,
I suggest you have a look at the protocol documentation and more
specifically at this section:
http://kafka.apache.org/protocol.html#protocol_partitioning
Regarding your question:
The consumer follows the same logic as the producer to get started. It
sends a Metadata request to the bootstrap brokers.
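To make that concrete, a minimal sketch (broker and topic names are hypothetical): partitionsFor on the Java consumer triggers exactly this Metadata round trip and reports, per partition, the leader the client will fetch from.

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public class MetadataLookup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // any reachable bootstrap broker
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // This issues the Metadata request described above; no group.id is needed.
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) { // topic name hypothetical
                System.out.println("partition " + p.partition() + " -> leader " + p.leader());
            }
        }
    }
}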
Hi,
Is there a way to configure a Kafka connector (or a Kafka Connect
server/cluster) to:
1. Receive a maximum of 1MB of data per second from a source Kafka
cluster, or
2. Receive a maximum of 1000 records per second from a source Kafka
cluster, or
3. Receive a maximum of 1MB of d
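Not from the thread, but for context: the closest built-in mechanism I know of is broker-side client quotas, which cap bytes per second for a given client.id; as far as I know there is no records-per-second quota. A sketch with kafka-configs (the entity name is hypothetical, and the Connect consumer would need a matching client.id, which varies by Connect version):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'consumer_byte_rate=1048576' \
  --entity-type clients --entity-name my-connect-consumer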
Yes, as per my understanding, for now the only way to secure ZK is SASL.
Once https://issues.apache.org/jira/browse/ZOOKEEPER-2125 is released, ZK
could also be secured using SSL.
Also remember this is not tied to any OS user: any user on the network who
can connect to the ZK host:port will be able to modify the ACLs.
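For reference, and hedged (the path is illustrative): the broker side of the SASL-secured ZK setup usually comes down to JAAS plus one flag, after which Kafka creates its znodes with ACLs:
zookeeper.set.acl=true
in server.properties, with the JAAS file passed via -Djava.security.auth.login.config=/path/to/jaas.conf. For an existing cluster, the zookeeper-security-migration tool can retrofit ACLs onto existing znodes.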
Hi
As I understand it, the producer in Kafka connects to the broker list first
to fetch the metadata. The producer then uses that data to connect directly
to the leader of the partition that it's trying to publish to.
From my understanding of the Kafka protocol and following other threads,
the consumer will be doing
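A small sketch of the producer half of this (broker and topic names are hypothetical): the client is given only the bootstrap list, and the metadata fetch plus routing to the partition leader happens inside the producer.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BootstrapDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // used only for the initial Metadata request
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The client fetches metadata and routes this record to the partition leader itself.
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // topic name hypothetical
        }
    }
}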
>> Current kafka-acls.sh script directly contacts zookeeper to create ACLs.
>> Any OS user who has access to zookeeper can create ACLs for any Kafka
>> principal.
Thanks for that point. Appreciate it.
On Thu, Aug 31, 2017 at 8:29 AM, Manikumar
wrote:
> There is no correlation between OS user and Kafka Principal/Username.
Ian,
You can try with the `toString` function (assuming you're on the older
version) of KafkaStreams to print the constructed topology and check
whether multiple repartition topics are created.
From your code snippet, it is a bit hard to tell, since I do not know
if repeatedInputStream
is already from
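A minimal sketch of that suggestion (topic name borrowed from later in the thread, the rest hypothetical); on newer APIs Topology#describe() gives the same information as the older KafkaStreams#toString():

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class TopologyDump {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("base-topic")
               .selectKey((k, v) -> v) // changing the key forces a repartition downstream
               .groupByKey()
               .count();
        Topology topology = builder.build();
        // Any ...-repartition topics show up in this printout.
        System.out.println(topology.describe());
    }
}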
There is no correlation between OS user and Kafka Principal/Username.
Here, user name refers to the principal associated with the Kafka
communication channel (Kerberos principal, SASL/PLAIN username, SCRAM
username, SSL certificate).
Current kafka-acls.sh script directly contacts zookeeper to create ACLs.
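For illustration (names are hypothetical): the principal in an ACL is exactly this channel-level identity, e.g. a SASL username or an SSL DN, never an OS account:
kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --operation Read --topic test-topic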
Right, I am. Just to be clear, I am using the kafka-acls script to
define/remove ACLs as a non-super user and it works just fine. I had
expected it to work only for super users and not for regular users
('nex37045' is a normal user).
[nex37045@or1010051029033 ~]$ kafka-acls --authorizer
kafka.security.
Looks like you are already using the SASL/PLAIN mechanism. Kafka supports
the SASL authentication framework.
Kafka SASL supports the GSSAPI (Kerberos), PLAIN, and SCRAM mechanisms.
You can also enable SSL encryption:
http://kafka.apache.org/documentation.html#security
On Thu, Aug 31, 2017 at 7:28 PM, Manoj Mur
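For reference, the client side of such a setup usually comes down to two or three properties; the values below are just one possible combination, not the poster's actual config:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="alice" password="alice-secret";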
Thanks Manikumar. I am testing the setup documented here:
https://developer.ibm.com/opentech/2017/05/31/kafka-acls-in-practice/
(SASL_PLAINTEXT).
I haven't set up any authentication for the tests. Thinking about it,
authentication is a must-have for authorization (so Kafka knows who's
making resou
Have you tried increasing max.in.flight.requests.per.connection? I wonder if
that would be similar to you having multiple producers.
Dave
Sent using OWA for iPhone
From: Sunny Kim
Sent: Wednesday, August 30, 2017 4:55:02 PM
To: users@kafka.apache.org
Su
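To make that suggestion concrete, a hedged sketch (broker address hypothetical): the setting lives on the producer and defaults to 5; raising it keeps more requests in flight per broker connection, at the cost of possible reordering on retries when it is greater than 1.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class InFlightDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // Default is 5; a higher value behaves a bit like running several producers in parallel.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "10");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}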
Kafka brokers only. Clients were Java clients that used the same client
version as the broker.
On Thu, Aug 31, 2017 at 5:43 AM, Saravanan Tirugnanum
wrote:
> Thank you Raghav. Was it that you upgraded the Kafka broker, the clients, or both?
>
> Regards
> Saravanan
>
> On Wednesday, August 30, 2017 at 6:
Thank you Raghav. Was it that you upgraded the Kafka broker, the clients, or both?
Regards
Saravanan
On Wednesday, August 30, 2017 at 6:31:34 PM UTC-5, Raghav wrote:
>
> I was never able to debug this exception. I, unfortunately, moved to
> Apache Kafka 0.10.2.1 from Confluent 3.2.1 and this issue went away.
Hi Guozhang,
Looking at this again, I'm a little confused: I'm not using any maps, and
as far as I know the selectKey is already causing a repartition:
"base-topic-KSTREAM-KEY-SELECT-03-repartition".
Is using the same KTable in multiple different leftJoins going to be an
issue?
Hi,
My connector JAR file contains a log4j.properties inside the root package,
but when I start Kafka Connect I get the following error message:
log4j:WARN No appenders could be found for logger
(com.example.ExampleConnector).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
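A hedged note on the snippet above (the path is illustrative): as far as I know, Connect does not pick up a log4j.properties from inside a plugin JAR; the worker's log4j configuration comes from the file the launcher scripts point at through KAFKA_LOG4J_OPTS, e.g.:
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/connect-log4j.properties"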