Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-06-02 Thread Damian Guy
I agree with what Matthias has said w.r.t. failing fast. There are plenty of times when you don't want to fail fast and must attempt to make progress. The dead-letter queue is exactly for these circumstances. Of course, if every record is failing, then you probably do want to give up. On Fri, 2 Jun
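
For readers skimming the thread, a minimal consumer-side sketch of the dead-letter-queue idea (the topic name and the process() helper are hypothetical; KIP-161 itself proposes handlers inside Streams):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Inside an ordinary poll loop: "records" comes from consumer.poll(...),
    // "producer" is an existing KafkaProducer<byte[], byte[]>, and
    // process() stands in for the application logic.
    for (ConsumerRecord<byte[], byte[]> record : records) {
        try {
            process(record);
        } catch (Exception e) {
            // Park the poison record on a dead-letter topic and keep going,
            // instead of failing fast.
            producer.send(new ProducerRecord<>("events-dead-letters",
                    record.key(), record.value()));
        }
    }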

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-06-02 Thread Jan Filipiak
Hi 1. That greatly complicates monitoring. Fail fast gives you the guarantee that when you monitor only the lag of all your apps, you are completely covered. With that sort of new application, monitoring is very much more complicated, as you now need to monitor the fail % of some special apps as well. In my opini

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-06-02 Thread Damian Guy
Jan, you have the choice to fail fast if you want. This is about giving people options, and there are times when you don't want to fail fast. On Fri, 2 Jun 2017 at 11:00 Jan Filipiak wrote: > Hi > > 1. > That greatly complicates monitoring. Fail Fast gives you that when you > monitor only the lag

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-06-02 Thread Jan Filipiak
IMHO you're doing it wrong then. Plus, building too much support into the Kafka ecosystem is very counterproductive in fostering a happy userbase. On 02.06.2017 13:15, Damian Guy wrote: Jan, you have a choice to Fail fast if you want. This is about giving people options and there are times when you d

Re: Cluster in weird state: no leaders no ISR for all topics, but it works!

2017-06-02 Thread Del Barrio, Alberto
So, I fixed the problem by doing a rolling restart, and after some checks it seems there was no data loss. On 1 June 2017 at 17:57, Del Barrio, Alberto < alberto.delbar...@360dialog.com> wrote: > I might give it a try tomorrow. The reason for having such large init and > sync limit times is because in th

Data in kafka topic in Json format

2017-06-02 Thread Mina Aslani
Hi. Is there any way that I can get the data into a Kafka topic in JSON format? The source that I ingest the data from has the data in JSON format; however, when I look at the data in the Kafka topic, schema and payload fields are added and the data is not in JSON format. I want to avoid implementing a tra

Kafka Over TLS Error - Failed to send SSL Close message - Broken Pipe

2017-06-02 Thread IT Consultant
Hi All, I have been seeing the below error for the past three days. Can you please help me understand more about it? WARN Failed to send SSL Close message (org.apache.kafka.common.network.SslTransportLayer) java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method

Re: Data in kafka topic in Json format

2017-06-02 Thread Mina Aslani
Hi, I would like to add that I use kafka-connect and schema-registry version `3.2.1-6`. Best regards, Mina On Fri, Jun 2, 2017 at 10:59 AM, Mina Aslani wrote: > Hi. > > Is there any way that I get the data into a Kafka topic in Json format? > The source that I ingest the data from have the d

Re: Data in kafka topic in Json format

2017-06-02 Thread Hans Jespersen
Check which serializer you have configured in your producer. You are probably using an Avro serializer, which will add the schema and encode the payload as Avro data. You can use a String serializer or a ByteArray serializer, and the data will either be Base64 encoded or not encoded at all. -han
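
A minimal sketch of what Hans describes, assuming the payload is a plain JSON string (broker address, topic name, and payload are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

    // The JSON string is written as-is; no schema or payload wrapper is added.
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    producer.send(new ProducerRecord<>("my-topic", "{\"id\":1}"));
    producer.close();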

Re: Data in kafka topic in Json format

2017-06-02 Thread Hans Jespersen
My earlier comment still applies, but in Kafka Connect the equivalent of a serializer/deserializer (serdes) is called a “converter”. Check which converter you have configured for your source connector, and whether it is overriding whatever default converter is configured for the Connect worker it
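
For reference, a sketch of the worker-level converter settings in question (standalone worker; a connector-level converter setting would override these):

    # Emit plain JSON without the schema/payload envelope.
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false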

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-06-02 Thread Jay Kreps
Jan, I agree with you philosophically. I think one practical challenge has to do with data formats. Many people use untyped events, so there is simply no guarantee on the form of the input. E.g., many companies use JSON without any kind of schema, so it becomes very hard to assert anything about the

Re: Data in kafka topic in Json format

2017-06-02 Thread Mina Aslani
Hi Hans, Thank you for your quick response, appreciate it. In the *kafka-connect* Docker container, I see the below settings in the *kafka-connect.properties* file in the *kafka-connect* directory: key.converter.schemas.enable=false key.converter.schema.registry.url=http://kafka-schema-registry: value.converter.schema.re

Losing messages in Kafka Streams after upgrading

2017-06-02 Thread Frank Lyaruu
Hi Kafka people, I'm running an application that pushes database changes into a Kafka topic. I'm also running a Kafka Streams application that listens to these topics, groups them using the high-level API, and inserts them into another database. All topics are compacted, with the exception of t

Finding StreamsMetadata with value-dependent partitioning

2017-06-02 Thread Steven Schlansker
I have a KTable and backing store whose partitioning is value dependent. I want certain groups of messages to be ordered, and that grouping is determined by one field (D) of the (possibly large) value. When I look up by only K, obviously you don't know the partition it should be on. So I will build
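
A sketch of the kind of value-based partitioner being described, with a hypothetical value type (the StreamPartitioner signature shown is the 0.10.x/0.11 one; later releases also pass the topic name):

    import org.apache.kafka.streams.processor.StreamPartitioner;

    // Hypothetical value type; "d" stands in for the grouping field D.
    class MyValue {
        String d;
    }

    // Route by D so that records sharing D land on, and stay ordered
    // within, a single partition.
    class ValueFieldPartitioner implements StreamPartitioner<String, MyValue> {
        @Override
        public Integer partition(String key, MyValue value, int numPartitions) {
            return (value.d.hashCode() & 0x7fffffff) % numPartitions;
        }
    }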

Re: LDAP integration with kafka brokers

2017-06-02 Thread Nixon Rodrigues
I don't know whether LDAP can be directly integrated with the Kafka client and server, but Kafka can use Kerberos-type authentication, and Kerberos can integrate with AD for the user store and user authentication. Hope this is helpful. Nixon On Fri, Jun 2, 2017 at 1:58 AM, Arunkumar wrote: > > Hi
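
For reference, a sketch of the client-side settings for Kerberos (SASL/GSSAPI); a JAAS configuration naming the principal and keytab is also required:

    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=GSSAPI
    sasl.kerberos.service.name=kafka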

Zookeeper on same server as Kafka

2017-06-02 Thread Meghana Narasimhan
Hi, What are the pros and cons of setting up ZooKeeper on the same server as the Kafka broker? Earlier, offsets were written to ZooKeeper, which was a major overhead, but with offsets now being written to Kafka, what other requirements should be taken into consideration for setting up Zookeeper

Re: Losing messages in Kafka Streams after upgrading

2017-06-02 Thread Matthias J. Sax
Hi Frank, yes, retention policy is based on the embedded record timestamps and not on system time. Thus, if you send messages with an old timestamp, they can trigger log/segment rolling. >> I see that the repartition topics have timestamp.type = CreateTime, does >> that mean it uses the timestamp
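
One knob relevant to the thread, as a sketch: overriding the timestamp extractor so Streams stamps records with wall-clock time rather than the (possibly very old) embedded timestamps. Application id and broker address are placeholders, and the config key name is version dependent ("timestamp.extractor" in 0.10.x):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // Use wall-clock time so downstream/repartition topics do not carry
    // old source timestamps that retention could immediately delete.
    props.put("timestamp.extractor",
            "org.apache.kafka.streams.processor.WallclockTimestampExtractor");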

Suppressing intermediate topics feeding (Global)KTable

2017-06-02 Thread Steven Schlansker
Hi everyone, another question for the list :) I'm creating a cluster of KTables (and GlobalKTables) based off the same input stream K,V. It has a number of secondary indices (think like an RDBMS): K1 -> K, K2 -> K, etc. These are all based off of trivial mappings from my main stream that also feeds the
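
A sketch of one such index mapping, with hypothetical names (newer StreamsBuilder API shown, 0.10.x used KStreamBuilder; serdes omitted):

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;

    // Hypothetical value type carrying a secondary-key field K1.
    class MyValue {
        String k1;
    }

    // Re-key the main stream by K1, emitting the primary key K as the
    // value, and feed the topic that backs the index table.
    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, MyValue> main = builder.stream("main-topic");
    main.map((k, v) -> KeyValue.pair(v.k1, k))
        .to("k1-index-topic");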

Re: Zookeeper on same server as Kafka

2017-06-02 Thread Mohammed Manna
Usually, the overhead comes when you have Kafka and ZooKeeper doing their housekeeping (i.e. disk I/O) on the same server. ZK even suggests that you should keep its logs on a totally different physical machine for better performance. Furthermore, if a mechanical failure occurs, you might not want both
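
A zoo.cfg sketch of that separation (paths are placeholders): put the ZooKeeper transaction log on its own disk so it does not contend with Kafka's log I/O:

    dataDir=/var/lib/zookeeper/data
    dataLogDir=/disks/zk-txn/zookeeper/log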

Re: Finding StreamsMetadata with value-dependent partitioning

2017-06-02 Thread Matthias J. Sax
I am not sure if I understand the use case correctly. Could you give some more context? > backing store whose partitioning is value dependent I infer that you are using a custom store and not the default RocksDB? If yes, what do you use? What does "value dependent" mean in this context? Right now,

Re: Suppressing intermediate topics feeding (Global)KTable

2017-06-02 Thread Matthias J. Sax
Hi, If you want to populate a GlobalKTable, you can only do this by reading a topic. So the short answer to your headline is: no, you cannot suppress the intermediate topic. However, I am wondering what the purpose of your secondary index is, and why you are using a GlobalKTable for it. Maybe you can
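
A sketch of the constraint: a GlobalKTable is always materialized from a topic, so the index topic cannot be skipped (topic name is a placeholder; the 0.10.x equivalent was KStreamBuilder#globalTable(topic, storeName)):

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.GlobalKTable;

    StreamsBuilder builder = new StreamsBuilder();
    GlobalKTable<String, String> index =
            builder.globalTable("secondary-index-topic");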

Re: Finding StreamsMetadata with value-dependent partitioning

2017-06-02 Thread Steven Schlansker
> On Jun 2, 2017, at 2:11 PM, Matthias J. Sax wrote: > > I am not sure if I understand the use case correctly. Could you give > some more context? Happily, thanks for thinking about this! > >> backing store whose partitioning is value dependent > > I infer that you are using a custom store

Re: Suppressing intermediate topics feeding (Global)KTable

2017-06-02 Thread Steven Schlansker
> On Jun 2, 2017, at 2:21 PM, Matthias J. Sax wrote: > > Hi, > > If you want to populate a GlobalKTable you can only do this by reading a > topic. So the short answer to your headline is: no, you cannot suppress > the intermediate topic. Bummer! Maybe this is an opt-in optimization to consider

Re: Finding StreamsMetadata with value-dependent partitioning

2017-06-02 Thread Matthias J. Sax
Thanks. That helps to understand the use case better. Rephrase to make sure I understood it correctly: 1) you are providing a custom partitioner to Streams that is based on one field in your value (that's fine with regard to fault-tolerance :)) 2) you want to use interactive queries to query the s
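
For context, interactive queries locate a key's host via metadataForKey, which can take the custom partitioner; the crux of the thread is that at query time only K is available, so the partitioner must be computable from the key alone. A sketch, where "streams", "store-name", and partitionFromKey() are placeholders:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.state.StreamsMetadata;

    // "streams" is an existing KafkaStreams instance; the lambda must
    // reproduce the value-based routing from the key alone (the hard part).
    StreamsMetadata host = streams.metadataForKey(
            "store-name",
            "some-key",
            (key, value, numPartitions) -> partitionFromKey(key, numPartitions));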

Re: Suppressing intermediate topics feeding (Global)KTable

2017-06-02 Thread Matthias J. Sax
Makes sense now (considering the explanation from the other thread). With regard to an "opt-in" optimization: we could simplify the API and hide some details, but it wouldn't buy you anything from an execution point of view. As you need all data on each instance, you need to somehow "broadcast" th

Re: Suppressing intermediate topics feeding (Global)KTable

2017-06-02 Thread Steven Schlansker
You're entirely right. I'd forgotten that each instance would only read a subset of the main topic. I should have figured that out myself. Thanks for the sanity check! :) > On Jun 2, 2017, at 3:41 PM, Matthias J. Sax wrote: > > Makes sense now (considering the explanation from the other thread)

Re: Finding StreamsMetadata with value-dependent partitioning

2017-06-02 Thread Steven Schlansker
> > On Jun 2, 2017, at 3:32 PM, Matthias J. Sax wrote: > > Thanks. That helps to understand the use case better. > > Rephrase to make sure I understood it correctly: > > 1) you are providing a custom partitioner to Streams that is based on one > field in your value (that's fine with regard to f

Re: Data in kafka topic in Json format

2017-06-02 Thread Hans Jespersen
You have shared the Kafka Connect worker properties but not the source connector config. Which source connector are you using? Does it override the default settings you provided? Are you running the connector in standalone mode or distributed mode? Also, what are you using to consume the messages and see

Connector configuration is invalid

2017-06-02 Thread mayank rathi
Hello All, I am getting a "Connector configuration is invalid" error with the following configuration. Can someone please help me find out what I am doing wrong here? name=topic_see connector.class=io.confluent.connect.jdbc.JdbcSourceConnector tasks.max=1 connection.url=jdbc:db2://*.*.*.com:6/
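
For comparison, a hypothetical minimal JDBC source config; one frequent cause of "Connector configuration is invalid" is a missing required setting such as mode or topic.prefix (the URL and values below are placeholders, not the poster's):

    name=topic_see
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    tasks.max=1
    connection.url=jdbc:db2://<host>:<port>/<database>
    mode=bulk
    topic.prefix=db2-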