Question on Metadata

2017-03-14 Thread Syed Mudassir Ahmed
Hi guys, When we consume a JMS message, we get a Message object that has methods to fetch implicit metadata provided by the JMS server: http://docs.oracle.com/javaee/6/api/javax/jms/Message.html. There are methods to fetch that implicit metadata, such as Expiration, Correlation ID, etc. Is there a

Re: Kafka Streams question

2017-03-14 Thread BYEONG-GI KIM
Dear Michael Noll, I have a question; is it possible to convert JSON format to YAML format using Kafka Streams? Best Regards KIM 2017-03-10 11:36 GMT+09:00 BYEONG-GI KIM : > Thank you very much for the information! > > > 2017-03-09 19:40 GMT+09:00 Michael Noll : > >> There's actually a dem

Re: Kafka Streams question

2017-03-14 Thread Michael Noll
Yes, of course. You can also re-use any existing JSON and/or YAML library for helping you with that. Also, in general, an application that uses the Kafka Streams API/library is a normal, standard Java application -- you can of course also use any other Java/Scala/... library for the application's
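A sketch of Michael's suggestion: the conversion itself is plain library code wrapped in a `mapValues` call. Jackson's `jackson-dataformat-yaml` module and the topic names here are assumptions for illustration, not something the thread prescribes:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class JsonToYaml {
    private static final ObjectMapper JSON = new ObjectMapper();
    private static final YAMLMapper YAML = new YAMLMapper();

    // Convert one JSON document to its YAML representation.
    static String jsonToYaml(String json) {
        try {
            JsonNode tree = JSON.readTree(json);
            return YAML.writeValueAsString(tree);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> json = builder.stream("json-input");  // hypothetical topic
        json.mapValues(JsonToYaml::jsonToYaml).to("yaml-output");     // hypothetical topic
        // new KafkaStreams(builder.build(), props).start();  // props/serdes omitted
    }
}
```

The record-by-record transformation stays a pure function, so it can be unit-tested without any Kafka infrastructure.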

Re: Kafka Streams question

2017-03-14 Thread BYEONG-GI KIM
Thank you very much for the reply. I'll try to implement it. Best regards KIM 2017-03-14 17:07 GMT+09:00 Michael Noll : > Yes, of course. You can also re-use any existing JSON and/or YAML library > for helping you with that. > > Also, in general, an application that uses the Kafka Streams API

Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Ismael Juma
Hi Thomas, Did you follow the instructions: https://kafka.apache.org/documentation/#upgrade Ismael On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER < thomas.kief...@olamobile.com.invalid> wrote: > I'm trying to perform an upgrade of 2 kafka cluster of 5 instances, When > I'm doing the switch be

Re: [DISCUSS] KIP-120: Cleanup Kafka Streams builder API

2017-03-14 Thread Michael Noll
I see Jay's point, and I agree with much of it -- notably about being careful which concepts we do and do not expose, depending on which user group / user type is affected. That said, I'm not sure yet whether or not we should get rid of "Topology" (or a similar term) in the DSL. For what it's wor

Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Thomas KIEFFER
Hello Ismael, Thank you for your feedback. Yes, I've made these changes on a previous upgrade and set them according to the new version when trying to do the upgrade. inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0 or 0.10.1). log.message.format.version=CURR

Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Ismael Juma
So, to double-check, you set inter.broker.protocol.version=0.10.0 before bouncing each broker? On Tue, Mar 14, 2017 at 11:22 AM, Thomas KIEFFER < thomas.kief...@olamobile.com.invalid> wrote: > Hello Ismael, > > Thank you for your feedback. > > Yes I've done this changes on a previous upgrade and

Re: Simple KafkaProducer to handle multiple requests or not

2017-03-14 Thread Amit K
Thanks Robert Quinlivan, got this one clear. I have been going through additional docs; they seem to point to this, though a bit subtly. Thanks again for your help! On Mon, Mar 13, 2017 at 10:20 PM, Robert Quinlivan wrote: > There is no need to create a new producer instance for each write request
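Robert's point, sketched: `KafkaProducer` is thread-safe, so a single long-lived instance can serve all request-handling threads. The broker address and serializer choices below are assumptions for a local setup:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducer {
    // One producer for the whole application; KafkaProducer is thread-safe.
    private static final KafkaProducer<String, String> PRODUCER = createProducer();

    private static KafkaProducer<String, String> createProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    // Called from any request thread; no new producer per request.
    public static void send(String topic, String key, String value) {
        PRODUCER.send(new ProducerRecord<>(topic, key, value));
    }
}
```

Creating a producer per request wastes connections and loses batching; the single shared instance is the pattern the thread recommends.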

Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Thomas KIEFFER
Yes, I've set the inter.broker.protocol.version=0.10.0 before restarting each broker on a previous update. Clusters currently run with this config. On 03/14/2017 12:34 PM, Ismael Juma wrote: So, to double-check, you set inter.broker.protocol.version=0.10.0 before bouncing each broker? On Tue,
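For reference, the rolling-upgrade sequence from the upgrade docs, sketched as broker-config fragments (the versions shown match this thread's 0.10.0 → 0.10.2 case):

```properties
# Phase 1: before the rolling bounce onto new binaries,
# pin both settings to the old version on every broker.
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0

# Phase 2: once every broker runs the new binaries, raise the
# protocol version and do a second rolling bounce; bump the
# message format version last.
# inter.broker.protocol.version=0.10.2
# log.message.format.version=0.10.2
```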

Re: Pattern to create Task with dependencies (DI)

2017-03-14 Thread Mathieu Fenniak
Hey Petr, I have the same issue, but I just cope with it; I wire up default dependencies directly in the connector and task constructors, expose them through properties, and modify them to refer to mocks in my unit tests. It's not a great approach, but it is simple. Why KConnect does take contr
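A minimal sketch of the pattern Mathieu describes, using a hypothetical dependency (`HttpClient` here is an illustration, not part of the Connect API): the constructor wires the default, and a package-private setter lets unit tests swap in a mock.

```java
public class MySourceTask {
    // Hypothetical collaborator; in a real connector this might be an HTTP or DB client.
    interface HttpClient { String fetch(String url); }

    private HttpClient client;

    public MySourceTask() {
        // Default (production) dependency wired directly in the constructor.
        this.client = url -> "real response from " + url;
    }

    // Visible for testing: unit tests inject a mock instead of the default.
    void setHttpClient(HttpClient client) { this.client = client; }

    public String poll(String url) { return client.fetch(url); }
}
```

Since Connect instantiates tasks reflectively via a no-arg constructor, constructor injection from a DI container isn't available; the setter is the pragmatic escape hatch.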

Using Kafka connect BYTES_SCHEMA

2017-03-14 Thread Marc Magnin
Hi, When sending data using SourceRecord, I want to send a byte array to Kafka, but I receive a base64-encoded array of my object: @Override public SourceRecord[] getRecords(String kafkaTopic) { return new SourceRecord[]{new SourceRecord(null, null, topic.toString(), null,
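One hedged reading of the symptom: when the worker's value converter is the default JsonConverter, byte-array values are base64-encoded into the JSON envelope by design. Declaring the value as `Schema.BYTES_SCHEMA` and using a byte-array converter keeps the payload raw. The partition/offset maps below are placeholders:

```java
import java.util.Collections;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;

public class BytesRecord {
    // Build a SourceRecord whose value is declared as raw bytes,
    // not a structured object.
    static SourceRecord record(String topic, byte[] payload) {
        return new SourceRecord(
                Collections.singletonMap("source", "demo"),  // source partition (placeholder)
                Collections.singletonMap("position", 0L),    // source offset (placeholder)
                topic,
                Schema.BYTES_SCHEMA,
                payload);
    }
}
```

Pairing this with `value.converter=org.apache.kafka.connect.converters.ByteArrayConverter` in the worker config (shipped in later Kafka releases) avoids the base64 wrapping entirely; with JsonConverter, base64 is the expected representation of bytes.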

WordCount example does not output to OUTPUT topic

2017-03-14 Thread Mina Aslani
Hi, I am using the below code to read from a topic, count words, and write to another topic. The example is the one on GitHub. My kafka container is in the VM. I do not get any error, but I do not see any result/output in my wordCount-output topic either. The program does not stop either!

Fwd: Question on Metadata

2017-03-14 Thread Syed Mudassir Ahmed
Can anyone help? -- Forwarded message -- From: "Syed Mudassir Ahmed" Date: Mar 14, 2017 12:28 PM Subject: Question on Metadata To: Cc: Hi guys, When we consume a JMS message, we get a Message object that has methods to fetch implicit metadata provided by JMS server. http://doc

Re: Question on Metadata

2017-03-14 Thread Robert Quinlivan
Did you look at the ConsumerRecord class? On Tue, Mar 14, 2017 at 11:09 AM, Syed Mudassir Ahmed < smudas...@snaplogic.com> wrote: > Can anyone help? > > -- Forwarded message -- > From: "S
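To Robert's pointer: the Kafka consumer's analogue of JMS message metadata lives on `ConsumerRecord`. A small sketch of the accessors (the formatting here is illustrative):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class RecordMetadataDemo {
    // Per-message metadata Kafka exposes alongside the key and value.
    static String describe(ConsumerRecord<String, String> rec) {
        return String.format("topic=%s partition=%d offset=%d timestamp=%d",
                rec.topic(), rec.partition(), rec.offset(), rec.timestamp());
    }
}
```

Unlike JMS, Kafka has no built-in Expiration or Correlation ID fields; topic/partition/offset/timestamp are the implicit metadata, and anything else must travel in the message payload (or, on later broker versions, in record headers).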

Re: WordCount example does not output to OUTPUT topic

2017-03-14 Thread Matthias J. Sax
This seems to be the same question as "Trying to use Kafka Stream" ? On 3/14/17 9:05 AM, Mina Aslani wrote: > Hi, > I am using below code to read from a topic and count words and write to > another topic. The example is the one in github. > My kafka container is in the VM. I do not get any error

Re: Trying to use Kafka Stream

2017-03-14 Thread Matthias J. Sax
>> So, when I check the number of messages in wordCount-input I see the same >> messages. However, when I run below code I do not see any message/data in >> wordCount-output. Did you reset your application? Each time you run your app and restart it, it will resume processing where it left off. Thu
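The reset Matthias refers to is typically done with the application reset tool; the application id, topic name, and addresses below are assumptions matching this thread's setup:

```shell
# Stop all instances of the Streams app first, then:
bin/kafka-streams-application-reset.sh \
    --application-id my-wordcount-app \
    --input-topics wordCount-input \
    --bootstrap-servers localhost:9092 \
    --zookeeper localhost:2181

# Also wipe local state on each instance, or call
# KafkaStreams#cleanUp() in the application before start().
```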

Re: Kafka Streams: ReadOnlyKeyValueStore range behavior

2017-03-14 Thread Matthias J. Sax
> However, >> for keys that have been tombstoned, it does return null for me. Sounds like a bug. Can you reliably reproduce this? Would you mind opening a JIRA? Can you check if this happens for both cases: caching enabled and disabled? Or only for one case? > "No ordering guarantees are provid

Kafka Retention Policy to Indefinite

2017-03-14 Thread Joe San
Dear Kafka Users, What are the arguments against setting the retention policy on a Kafka topic to infinite? I was in an interesting discussion with one of my colleagues where he suggested setting the retention policy for a topic to be indefinite. So how does this play out when adding new broke

Re: Question on Metadata

2017-03-14 Thread Hans Jespersen
You may also be interested in trying out the new Confluent JMS client for Kafka. It implements the JMS 1.1 API along with all the JMS metadata fields and access methods. It does this by putting/getting the JMS metadata into the body of an underlying Kafka message, which is defined with a special JM

Re: Kafka Retention Policy to Indefinite

2017-03-14 Thread Hans Jespersen
You might want to use the new replication quotas mechanism (i.e. network throttling) to make sure that replication traffic doesn't negatively impact your production traffic. See for details: https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas This feature was added in 0.10
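A sketch of KIP-73 in practice with the stock tooling; the broker id, rate, and ZooKeeper address are assumptions:

```shell
# Throttle replication traffic to ~10 MB/s on broker 0
# (rates are in bytes/sec; repeat per broker).
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type brokers --entity-name 0 \
    --add-config 'leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760'

# Remove the throttle once the partitions are back in sync,
# or followers may never catch up.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type brokers --entity-name 0 \
    --delete-config 'leader.replication.throttled.rate,follower.replication.throttled.rate'
```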

Re: Kafka Retention Policy to Indefinite

2017-03-14 Thread Joe San
So that means with replication quotas, I can set the retention policy to be infinite? On Tue, Mar 14, 2017 at 6:25 PM, Hans Jespersen wrote: > You might want to use the new replication quotas mechanism (i.e. network > throttling) to make sure that replication traffic doesn't negatively impact >

Re: [DISCUSS] KIP-120: Cleanup Kafka Streams builder API

2017-03-14 Thread Matthias J. Sax
Thanks for your input Michael. >> - KafkaStreams as the new name for the builder that creates the logical >> plan, with e.g. `KafkaStreams.stream("input-topic")` and >> `KafkaStreams.table("input-topic")`. I don't think this is a good idea, for multiple reasons: (1) We would reuse a name for a

Re: Kafka Retention Policy to Indefinite

2017-03-14 Thread Hans Jespersen
I am saying that replication quotas will mitigate one of the potential downsides of setting an infinite retention policy. There is no clear yes/no best-practice rule for setting an extremely large retention policy. It is clearly a valid configuration and there are people who run this way. The
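For completeness, the knob itself: per-topic retention can be made unlimited with `retention.ms=-1`. The topic name and ZooKeeper address below are assumptions:

```shell
# Disable time-based deletion for one topic.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name events \
    --add-config retention.ms=-1

# Size-based deletion is off by default (retention.bytes=-1),
# so with both at -1 the log only grows until disk runs out.
```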

Re: Common Identity between brokers

2017-03-14 Thread Sumit Maheshwari
Can anyone answer the above query? On Mon, Mar 13, 2017 at 3:41 PM, Sumit Maheshwari wrote: > Hi, > > How can we identify if a set of brokers (nodes) belong to same cluster? > I understand we can use the zookeeper where all the brokers pointing to > same zookeeper URL's belong to same cluster. >

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
I reset, and it is still not working! My env is set up using http://docs.confluent.io/3.2.0/cp-docker-images/docs/quickstart.html I just tried using https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/WordCountLambdaExample.java#L178-L181 with all

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
Is there any book or document that provides information on how to use Kafka Streams? On Tue, Mar 14, 2017 at 2:42 PM, Mina Aslani wrote: > I reset and still not working! > > My env is setup using http://docs.confluent.io/3.2.0/cp-docker-images/ > docs/quickstart.html > > I just tried using https://github.com

Re: Trying to use Kafka Stream

2017-03-14 Thread Eno Thereska
Hi there, I noticed in your example that you are using localhost:9092 to produce but localhost:29092 to consume? Or is that a typo? Are zookeeper, kafka, and the Kafka Streams app all running within one docker container, or in different containers? I just tested the WordCountLambdaExample and i

Re: [DISCUSS] KIP-120: Cleanup Kafka Streams builder API

2017-03-14 Thread Guozhang Wang
I'd like to keep the term "Topology" inside the builder class since, as Matthias mentioned, this builder#build() function returns a "Topology" object, whose type is a public class anyways. Although you can argue to let users always call "new KafkaStreams(builder.build())" I think it is still more

Kafka connection to start from latest offset

2017-03-14 Thread Aaron Niskode-Dossett
Is it possible to start a kafka connect instance that reads from the *latest* offset as opposed to the earliest? I suppose this would be the equivalent of passing auto.offset.reset=latest to a kafka consumer. More generally, is this something that specific implementations of the kafka connect A

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
Hi Eno, Sorry! That is a typo! I have a docker-machine with different containers (setup as directed @ http://docs.confluent.io/3.2.0/cp-docker-images/docs/quickstart.html) docker ps --format "{{.Image}}: {{.Names}}" confluentinc/cp-kafka-connect:3.2.0: kafka-connect confluentinc/cp-enterprise-

Re: Kafka connection to start from latest offset

2017-03-14 Thread Stephen Durfey
Producer and consumer configs used by the connect worker can be overridden by prefixing the specific kafka config with either 'producer.' or 'consumer.'. So, you should be able to set 'consumer.auto.offset.reset=latest' in your worker config to do that. http://docs.confluent.io/3.0.0/connect/use
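Stephen's prefix rule as a worker-config fragment (the broker address is illustrative):

```properties
bootstrap.servers=localhost:9092

# Forwarded to the consumer the worker creates for sink connectors:
consumer.auto.offset.reset=latest

# The 'producer.' prefix works the same way for source connectors:
producer.compression.type=snappy
```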

Re: Common Identity between brokers

2017-03-14 Thread Hans Jespersen
This might be useful reading, as it outlines why Cluster ID was added and lists a few ways that clusters could be identified prior to that feature enhancement. https://cwiki.apache.org/confluence/display/KAFKA/KIP-78%3A+Cluster+Id

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
And the port for kafka is 29092 and for zookeeper 32181. On Tue, Mar 14, 2017 at 9:06 PM, Mina Aslani wrote: > Hi, > > I forgot to add in my previous email 2 questions. > > To setup my env, shall I use https://raw.githubusercontent.com/ > confluentinc/cp-docker-images/master/examples/kafka-singl

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
Hi, I forgot to add 2 questions in my previous email. To set up my env, shall I use https://raw.githubusercontent.com/confluentinc/cp-docker-images/master/examples/kafka-single-node/docker-compose.yml instead, or is there any other docker-compose.yml (version 2 or 3) that is suggested for setting up the env

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
I even tried http://docs.confluent.io/3.2.0/streams/quickstart.html#goal-of-this-quickstart and in docker-machine ran /usr/bin/kafka-run-class org.apache.kafka.streams.examples.wordcount.WordCountDemo Running docker run --net=host --rm confluentinc/cp-kafka:3.2.0 kafka-console-consumer --bootst

Kafka Topics best practice for logging data pipeline use case

2017-03-14 Thread Ram Vittal
We are using the latest Kafka and Logstash versions for ingesting several business apps' logs (now a few, but eventually 100+) into ELK. We have a standardized logging structure for business apps to log data into Kafka topics, and we are able to ingest it into ELK via the Kafka topics input plugin. Currently, we are usin

Re: Trying to use Kafka Stream

2017-03-14 Thread Mina Aslani
Hi, I just checked the streams-wordcount-output topic using the below command: docker run --net=host --rm confluentinc/cp-kafka:3.2.0 kafka-console-consumer --bootstrap-server localhost:9092 --topic streams-wordcount-output --from-beginning --forma