+1 (non-binding)
Checked streams. Verified that the stream tests work and that the examples
from confluentinc/examples/kafka-streams work.
Thanks
Eno
> On 10 Feb 2017, at 16:51, Ewen Cheslack-Postava wrote:
>
> Hello Kafka users, developers and client-developers,
>
> This is RC1 for release of Apache Kafka
For the key (K), we use a simple StringSerde; for the value (V), we use a
custom Serde which translates an Avro payload into a generic bean containing
an identifier, a version, and an Avro record.
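Roughly, the shape is the following (a simplified sketch, not our actual
classes; the "Envelope" bean and the length-prefixed layout are made up for
illustration):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

public class AvroEnvelopeSerde implements Serde<AvroEnvelopeSerde.Envelope> {

    // Bean carrying identifier, version, and the raw Avro-encoded record bytes.
    public static class Envelope {
        public final String id;
        public final int version;
        public final byte[] avroBytes; // opaque Avro payload, decoded elsewhere

        public Envelope(String id, int version, byte[] avroBytes) {
            this.id = id;
            this.version = version;
            this.avroBytes = avroBytes;
        }
    }

    @Override public void configure(Map<String, ?> configs, boolean isKey) {}
    @Override public void close() {}

    @Override
    public Serializer<Envelope> serializer() {
        return new Serializer<Envelope>() {
            @Override public void configure(Map<String, ?> configs, boolean isKey) {}
            @Override public void close() {}

            @Override
            public byte[] serialize(String topic, Envelope env) {
                if (env == null) return null;
                byte[] idBytes = env.id.getBytes(StandardCharsets.UTF_8);
                // Layout: [idLen][idBytes][version][avroBytes]
                ByteBuffer buf = ByteBuffer.allocate(4 + idBytes.length + 4 + env.avroBytes.length);
                buf.putInt(idBytes.length).put(idBytes).putInt(env.version).put(env.avroBytes);
                return buf.array();
            }
        };
    }

    @Override
    public Deserializer<Envelope> deserializer() {
        return new Deserializer<Envelope>() {
            @Override public void configure(Map<String, ?> configs, boolean isKey) {}
            @Override public void close() {}

            @Override
            public Envelope deserialize(String topic, byte[] data) {
                if (data == null) return null;
                ByteBuffer buf = ByteBuffer.wrap(data);
                byte[] idBytes = new byte[buf.getInt()];
                buf.get(idBytes);
                int version = buf.getInt();
                byte[] avroBytes = new byte[buf.remaining()];
                buf.get(avroBytes);
                return new Envelope(new String(idBytes, StandardCharsets.UTF_8), version, avroBytes);
            }
        };
    }
}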
On Sun, Feb 12, 2017 at 10:39 PM, Guozhang Wang wrote:
> Pierre,
>
> Could you let me know which serdes you use
If I'm doing a KStream.leftJoin(KTable), how would I set this configuration
for just the KTable portion?
I.e., I have
KStream = KStreamBuilder.stream()
KTable = KStreamBuilder.table()
...
(join occurs.. data flows.. ppl are brought closer together.. there is
peace in the valley.. for me... )
...
Kaf
Christopher,
SSL client authentication is currently disabled when SASL_SSL is used, so
it is not possible to use client certificate credentials with SASL_SSL. Are
you expecting to authenticate clients using certificates as well as using
SASL? Or do you just need some mechanism to get hold of the client credentials?
Hi,
I am a colleague of Ian's. We use the following processing pipeline in the
stream app he mentions:
https://github.com/zalando-incubator/pipeline-backbone
The streams are built using:
object Run extends App {
// ...
private val latch = new CountDownLatch(1)
private val builder = {
val
Hi Jon,
If I understand your question correctly:
- any new KTables created by the DSL will automatically get the right policy.
You don't need to do anything special.
- otherwise you'll have to set the policy on the Kafka topic.
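For example (a sketch against the 0.10.1 DSL; topic and store names are
placeholders, default serdes assumed):

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

// The table is materialized from the existing topic "my-table-topic";
// per the second point above, that topic's cleanup policy must be set by you.
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> stream = builder.stream("my-stream-topic");
KTable<String, String> table = builder.table("my-table-topic", "my-table-store");
// Left join: every stream record is emitted; tableValue is null when the
// table has no entry for the key.
stream.leftJoin(table, (streamValue, tableValue) -> streamValue + "/" + tableValue)
      .to("output-topic");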
Eno
> On 13 Feb 2017, at 11:16, Jon Yeargers wrote:
>
> If I'm do
Thanks for the response Rajini.
It might be nice to support both, but really I just need a mechanism to get
hold of the client credentials when using SSL and then to do some extra
custom authentication processing with the credentials. I was thinking
that to do this it would make sense to optional
Christopher,
It is definitely worth writing this up and starting a discussion on the dev
list. A KIP is required if there are changes to public interfaces or
configuration. I imagine this will require some config changes and hence if
you can write up a small KIP, that will be useful for discussion
Rajini,
Thanks for the guidance. I agree that this will probably require some small
config changes, so I will start a KIP in the wiki in the next day or two
and post it on the dev list to get a discussion started.
Chris
On Mon, Feb 13, 2017 at 8:28 AM, Rajini Sivaram
wrote:
> Christopher,
>
>
Hi,
my Kafka client is still on the 0.8.2.1 jars,
and the server has been updated to 0.10.0.1.
I configured the server with:
inter.broker.protocol.version=0.8.2.1
log.message.format.version=0.8.2.1
I'm getting the error below on commit offset.
Auto commit is disabled here.
Are there any other configurations
Also, it will not occur on every commit offset call.
On Mon, Feb 13, 2017 at 8:08 PM, Upendra Yadav
wrote:
> Hi,
>
> my kafka client is still with kafka jars - 0.8.2.1
> and server updated with - 0.10.0.1
>
> I configured server with :
> inter.broker.protocol.version=0.8.2.1
> log.message.format.ve
Hi
Sorry if this has been asked before; I have not been able to find it.
It seems to me that a Kafka serializer has to provide a byte array for
Kafka. So if you want to send something to Kafka, you have to convert it
into byte-array form. That may not be very efficient in case the data
you want to
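For context, the producer-side contract is just this interface (this is the
actual org.apache.kafka.common.serialization.Serializer from 0.10.x), so
whatever you hand to Kafka must end up as a byte array:

public interface Serializer<T> extends Closeable {
    void configure(Map<String, ?> configs, boolean isKey);
    byte[] serialize(String topic, T data); // the only path to the wire
    void close();
}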
Hi,
I have a KTable and I want to keep entries in it only for the past 24 hours.
How can I do that? I understand RocksDB has support for TTL. Should I set that?
How? Should I use the Kafka Streams windowing functionality? Would it remove
data from old windows?
I want to do this because I’m seeing a
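One possible direction (a sketch only, not a tested answer; topic and store
names are placeholders): model the 24-hour retention as a windowed
aggregation, since window stores drop data once the window retention passes:

import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

// Count per key over 24h windows; until() bounds how long old window
// data is retained in the local store and changelog.
KStreamBuilder builder = new KStreamBuilder();
KTable<Windowed<String>, Long> counts =
    builder.<String, String>stream("input-topic")
           .groupByKey()
           .count(TimeWindows.of(TimeUnit.HOURS.toMillis(24))
                             .until(TimeUnit.HOURS.toMillis(24)),
                  "daily-counts");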
I see. The serdes should be fine then.
Could you also check the .sst files on disk and see if their count keeps
increasing? If .sst files are not cleaned up in time and disk usage keeps
increasing, it could mean that some iterators are still not closed and hence
pin SST files, preventing them from being deleted
As you read the KTable from a topic via
KStreamBuilder#table("my-table-topic"), you should set the log cleanup
policy to "compact" for "my-table-topic".
-Matthias
On 2/13/17 4:49 AM, Eno Thereska wrote:
> Hi Jon,
>
> If I understand your question correctly:
> - any new KTables created by the DSL
+0 (non-binding)
Thanks for compiling a new release candidate.
I get a NullPointerException when setting batch.size=0 in the producer config.
This worked before with 0.10.1.1.
See https://issues.apache.org/jira/browse/KAFKA-4761
Regards,
Swen
On 2/10/17, 5:51 PM, "Ewen Cheslack-Postava" wrote:
Hello,
I have a simple example (or so it would seem) of a stream processor which uses
a persistent state store. Testing on one local Kafka (0.10.1.1) node, this
starts up without problems for a topic with 1 partition. However, if I create a
topic with 3 partitions I’m getting the following exce
Hi Adam,
If you increase the number of partitions in the topic "topic1" after the
state store is created, you'd need to manually increase the number of
partitions in the "app1-store1-changelog" topic as well. Or remove the
topic and let Kafka Streams recreate it on the next run. But either way, hopefully you
don
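For reference, the changelog topic's partition count can be raised with the
stock tooling (names as in this example):

bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic app1-store1-changelog --partitions 3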
> If you increase the number of partitions in the topic "topic1" after the
> state store is created, you'd need to manually increase the number of
> partitions in the "app1-store1-changelog" topic as well. Or remove the
> topic and let KS recreate it next run. But, either way, hopefully you
> do
Following this answer, I checked that the auto-created "app1-store1-changelog"
topic had 1 partition, which caused the problem.
Creating this topic upfront with 3 partitions (matching the stream source
partition count) fixes the problem.
However, I think this should be handled somehow diff
Hi Team,
I just started using Kafka. I have a use case to send an XML file or Document
object via a Kafka topic using Java. Can you enlighten me with the guidance
steps to achieve it?
Please accept my apologies and ignore this if I am posting to an inappropriate
mail address.
Thanks
Prashanth
+91-9677103475
India
Can you try this out with the 0.10.2 branch or current trunk?
We already put in some fixes like you suggested. It would be nice to get
feedback on whether those fixes resolve the issue for you.
Some more comments inline.
-Matthias
On 2/13/17 12:27 PM, Adam Warski wrote:
> Following this answer, I checked that th
This sample program may help you:
http://vvratha.blogspot.com.au/2016/07/sample-kafka-producer-and-consumer.html
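If it helps, the simplest route is to ship the XML as a String using the
stock StringSerializer (a minimal sketch; broker address, topic, and the XML
itself are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    // Serialize your Document object to a String first (e.g. via a Transformer).
    String xml = "<order id=\"42\"><item>book</item></order>";
    producer.send(new ProducerRecord<>("xml-topic", "order-42", xml));
}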
On 14 February 2017 at 06:36, Prashanth Venkatesan <
prashanth.181...@gmail.com> wrote:
> Hi Team,
>
> I just started using Kafka. I have a usecase to send XML file or Document
> objec
Adam,
also an FYI: The upcoming 0.10.2 version of the Streams API will be
backwards compatible with 0.10.1 clusters, so you can keep your brokers on
0.10.1.1 and still use the latest Streams API version (including the one
from trunk, as Matthias mentioned).
-Michael
On Mon, Feb 13, 2017 at 1:04
Hi,
We have a Kafka cluster in dev, and ideally I’d like the following ports to
be opened:
9092 -> PLAINTEXT
9093 -> SSL
9094 -> SASL_PLAINTEXT
9095 -> SASL_SSL
The goal is to allow applications to slowly evolve toward 9095 and then
migrate to prod, where 9095 is the only open port.
Is it poss
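Concretely, I mean something like this in server.properties:

listeners=PLAINTEXT://:9092,SSL://:9093,SASL_PLAINTEXT://:9094,SASL_SSL://:9095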
Hi,
Is it possible to assign Kerberos users to groups and then set ACL for
these groups ?
The problem is that it’s painful to add every user’s ACL when their
principal is created, so we’re thinking of creating a “public” and a
“confidential” group. Topics would be assigned to either and then if th
Currently, group support is not available. Related JIRA:
https://issues.apache.org/jira/browse/KAFKA-2794
I am trying to complete the PR submitted by Parth.
On Tue, Feb 14, 2017 at 10:09 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:
> Hi,
>
> Is it possible to assign Kerberos users to g
Yes, it is possible. For the PLAINTEXT port, we can add ACLs for the
principal "User:ANONYMOUS".
For the SSL port, we can add ACLs for the SSL username. For the SASL port,
we can add ACLs for the SASL username.
Each of these users can have their own ACL permissions.
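For example (topic and principal are placeholders):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:ANONYMOUS \
  --operation Read --topic test-topic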
On Tue, Feb 14, 2017 at 6:37 AM, Stephane Maarek <
st
(sorry many questions on security)
I have a Kafka cluster with 3 principals:
kafka/kafka-1.hostname@realm.com
kafka/kafka-2.hostname@realm.com
kafka/kafka-3.hostname@realm.com
I'm trying to enable ACLs, and I was reading on the Confluent website that I
should set up my brokers to be supe
By default, the SASL username will be the primary part of the Kerberos
principal.
so the config should be "super.users=User:kafka"
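I.e., in server.properties:

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka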
On Tue, Feb 14, 2017 at 12:06 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:
> (sorry many questions on security)
>
> I have a kafka cluster with 3 prin