Re: Exposing Kafka Topic to External Parties

2017-06-27 Thread Joe San
rust a certain certificate authority (CA), > you should be able to just sign a new certificate with that CA (without > having to explicitly share said cert with all parties). > > - Samuel > > On Fri, Jun 23, 2017 at 3:10 AM, Joe San wrote: > > > Dear Kafka Users, >

Exposing Kafka Topic to External Parties

2017-06-23 Thread Joe San
Dear Kafka Users, Would you consider it a good practice to expose the Kafka topic directly to a 3rd-party application? While doing this, I need to satisfy the following: 1. I will have, say, 10 topics and I would need to make sure that only authorized parties are able to write into the Topic 2. If
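One common way to satisfy the authorization requirement (not confirmed as the approach taken in this thread) is TLS client authentication plus broker-side ACLs: each third party gets a client certificate signed by your CA, and the broker's ACLs restrict which principals may write to which topic. A minimal client-side sketch, assuming Kafka 0.9+ security support; the hostname, file paths, and passwords below are placeholders, not real values:

```java
import java.util.Properties;

public class SecureProducerProps {
    // Illustrative TLS client-auth settings for a producer (Kafka 0.9+).
    // All locations and passwords here are hypothetical placeholders.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder host
        props.put("security.protocol", "SSL");
        // Truststore holding your CA certificate, so the client trusts the broker.
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Keystore holding the third party's certificate, signed by your CA;
        // the certificate's principal is what broker-side ACLs match against.
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("security.protocol"));
    }
}
```

On the broker side a matching write grant per party would still be needed (for example via `kafka-acls.sh --add --allow-principal ... --operation Write --topic ...`), so signing a new certificate with the shared CA is only half of the story.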

Re: Kafka Retention Policy to Indefinite

2017-03-15 Thread Joe San
you expect to store in your largest topic over the life of > the cluster? > > -hans > > > > > > /** > * Hans Jespersen, Principal Systems Engineer, Confluent Inc. > * h...@confluent.io (650)924-2670 > */ > > On Tue, Mar 14, 2017 at 10:36 AM, Joe San wrote

Re: Kafka Retention Policy to Indefinite

2017-03-14 Thread Joe San
* h...@confluent.io (650)924-2670 > */ > > On Tue, Mar 14, 2017 at 10:09 AM, Joe San wrote: > > > Dear Kafka Users, > > > > What are the arguments against setting the retention policy on a Kafka > > topic to infinite? I was in an interesting discussion with one

Kafka Retention Policy to Indefinite

2017-03-14 Thread Joe San
Dear Kafka Users, What are the arguments against setting the retention policy on a Kafka topic to infinite? I was in an interesting discussion with one of my colleagues where he was suggesting to set the retention policy for a topic to be indefinite. So how does this play out when adding new broke

Kafka Consumer as a ReactiveStream

2016-07-27 Thread Joe San
Hi Kafka Users, I have the following logic in my mind and would like to know if this is a good approach to designing a scalable consumer. I have a topic that has say 10 partitions and I have a con

Ping kafka broker for healthcheck

2016-05-29 Thread Joe San
Is there any API on the consumer or the producer that I can use to check the underlying connection to the Kafka brokers from my producer? I need to ping the Kafka broker every minute and check whether a connection is available. Any suggestions?
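The 0.8.2 client has no dedicated health-check API, so one lightweight way to approximate such a ping is a plain TCP connect to the broker port. This only verifies that the port is reachable, not that the broker is actually serving requests; the host and port in `main` are placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerPing {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // Note: this is reachability only, not a full Kafka protocol health check.
    public static boolean pingBroker(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder broker address; substitute your own.
        System.out.println(pingBroker("localhost", 9092, 1000));
    }
}
```

Scheduling this once a minute from a `ScheduledExecutorService` would give the periodic check the question asks about.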

Re: Relaying UDP packets into Kafka

2016-05-25 Thread Joe San
What about this one: https://github.com/agaoglu/udp-kafka-bridge On Wed, May 25, 2016 at 6:48 PM, Sunil Saggar wrote: > Hi All, > > I am looking for a kafka producer to receive UDP packets and send that > information to specified topic. Is there a out of box producer which does > this ? > > Ther

Re: Using Multiple Kafka Producers for a single Kafka Topic

2016-05-25 Thread Joe San
with a single Producer instance per JVM? On Wed, May 25, 2016 at 8:41 AM, Ewen Cheslack-Postava wrote: > On Mon, Apr 25, 2016 at 6:34 AM, Joe San wrote: > > > I have an application that is currently running and is using Rx Streams > to > > move data. Now in this applicatio

Re: Which should scale: Producer or Topic

2016-05-24 Thread Joe San
Interesting discussion! What do you mean here by a process? Is that a thread or the JVM process? On Tue, May 24, 2016 at 5:49 PM, Tom Crayford wrote: > Aha, yep that helped a lot. > > One producer per process. There's not really a per producer topic limit. > There's buffering and batching space

Re: Kafka Producer and Buffer Issues

2016-05-23 Thread Joe San
> > Thanks > > Tom Crayford > Heroku Kafka > > On Mon, May 23, 2016 at 10:41 AM, Joe San wrote: > > > In one of our applications, we have the following settings: > > > > # kafka configuration > > # ~ > > kafka { > > # comma separated

Kafka Producer and Buffer Issues

2016-05-23 Thread Joe San
In one of our applications, we have the following settings: # kafka configuration # ~ kafka { # comma separated list of brokers # e.g., "localhost:9092,localhost:9032" brokers = "localhost:9092,localhost:9032" topic = "asset-computed-telemetry" isEnabled = true # for a detailed l

ConsumerGroupCommand in Kafka 0.8.2

2016-05-11 Thread Joe San
In version 0.9.0.0, we have this beautiful command that would show the offset and the lag in a println as: println("%s, %s, %s, %s, %s, %s, %s" .format("GROUP", "TOPIC", "PARTITION", "CURRENT OFFSET", "LOG END OFFSET", "LAG", "OWNER")) is there an equivalent command that I could use for the 0.

Using Multiple Kafka Producers for a single Kafka Topic

2016-04-25 Thread Joe San
I have an application that is currently running and is using Rx Streams to move data. Now in this application, I have a couple of streams whose messages I would like to write to a single Kafka topic. Given this, I have say Streams 1 to 5 as below: Stream1 - Takes in DataType A Stream2 - Takes in D

Re: Enable Kafka Consumer 0.8.2.1 Reconnect to Zookeeper

2016-02-17 Thread Joe San
ere to see how it does that and when: > > > https://github.com/apache/zookeeper/blob/trunk/src/java/main/org/apache/zookeeper/ClientCnxn.java#L1153 > > On Wed, Feb 17, 2016 at 12:14 PM, Joe San wrote: > > > It is all pretty strange. Here is what I see in my logs as soon a

Re: Enable Kafka Consumer 0.8.2.1 Reconnect to Zookeeper

2016-02-17 Thread Joe San
runk/zookeeperProgrammers.html#ch_zkSessions > > > > In that case, the high level consumer is basically dead, and the > > application should create a new instance of it. > > > > > > On Mon, Feb 15, 2016 at 12:22 PM Joe San > wrote: > > > > > Any i

Enable Kafka Consumer 0.8.2.1 Reconnect to Zookeeper

2016-02-15 Thread Joe San
Any ideas as to which property I should set to enable Zookeeper re-connection? I have the following properties defined for my consumer (High Level Consumer API). Is this enough for an automatic Zookeeper re-connect? val props = new Properties() props.put("zookeeper.connect", zookeeper) props.put("g
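For reference, the ZooKeeper client reconnects automatically as long as its session has not expired, so the session timeout is the knob that matters most; there is no separate "reconnect" switch. A sketch of the relevant high-level consumer settings (the timeout values are illustrative, not recommendations):

```java
import java.util.Properties;

public class ConsumerProps {
    // Illustrative ZooKeeper-related settings for the 0.8.x high-level consumer.
    public static Properties build(String zookeeper, String groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", zookeeper);
        props.put("group.id", groupId);
        // How long ZooKeeper waits before expiring the session after losing
        // contact; the client keeps retrying the connection within this window.
        props.put("zookeeper.session.timeout.ms", "6000");
        // How long the client waits when first establishing a connection.
        props.put("zookeeper.connection.timeout.ms", "6000");
        // How far a follower can lag behind the ZooKeeper leader.
        props.put("zookeeper.sync.time.ms", "2000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost:2181", "my-group")); // placeholder values
    }
}
```

If the session does expire, the high-level consumer is effectively dead and the application has to create a new consumer instance, as noted in the replies above.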

Re: Kafka health check

2016-02-15 Thread Joe San
I would also like to know what options I have to ping Kafka using the 0.8.2.1 client. Any suggestions, please? On Mon, Feb 15, 2016 at 6:28 PM, Franco Giacosa wrote: > Hi, > > To ping kafka for a health check, what are my options if I am using the > java client 0.9.0? > > I know that the con

Kafka Zookeeper Connection Error Listener

2016-02-15 Thread Joe San
I have a Kafka broker and Zookeeper running locally. I use the high level consumer API to read messages from a topic. Now I manually disconnect / shutdown the Zookeeper instance running on my local machine. I can see in my consumer logs the following: 20160215-16:03:43.110+0100 [kafka-consumer-ak

Apache Kafka 0.8.2 Zookeeper Reconnect

2016-02-14 Thread Joe San
Which is the setting that enables me to reconnect to a non existing broker? I have a Kafka broker and Zookeeper running on my localhost. I have a consumer that consumes messages from a topic. I then shutdown the broker and Zookeeper and I can see in my application that it attempts to re-connect: 2

Kafka 0.8.2.1 Topic List

2016-02-14 Thread Joe San
I'm using the following command to start the broker and create the topic: bin/kafka-server-start.sh config/server.properties & bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic read-singnals Prior to doing this, I start ZooKeeper: bin/zookeepe

Re: Kafka 0.8.2.0 Log4j

2016-02-12 Thread Joe San
On 12 Feb 2016, at 07:33, Joe San wrote: > > > > How could I get rid of this warning? > > > > log4j:WARN No appenders could be found for logger > > (kafka.utils.VerifiableProperties). > > log4j:WARN Please initialize the log4j system properly. > > > > Any ideas how to get rid of this warning? > >

Re: Kafka 0.8.2.0 Log4j

2016-02-12 Thread Joe San
But I still could not get rid of this annoying warning! On Fri, Feb 12, 2016 at 12:58 PM, Joe San wrote: > I have a logback.xml that I use when I do sbt run: > > sbt -Dlogback.configurationFile=app-logger.xml run > > On Fri, Feb 12, 2016 at 12:19 PM, Ben Stopford wrote: > >&

Kafka 0.8.2.0 Log4j

2016-02-11 Thread Joe San
How could I get rid of this warning? log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties). log4j:WARN Please initialize the log4j system properly. Any ideas how to get rid of this warning?

Consumer backwards compatibility

2016-02-11 Thread Joe San
I have a 0.9.0 version of the Kafka consumer. Would that work against the 0.8.2 broker?

Re: Kafka 0.8.2 Consumer Poll Mechanism

2016-02-10 Thread Joe San
Any takers for my question? I will try it out today anyway! On Wed, Feb 10, 2016 at 11:32 PM, Joe San wrote: > So basically what I'm doing is the following: > > 1. I'm checking if the time to read the stream has lapsed. If yes, then I > come out of the recursion. > 2.

Re: Kafka 0.8.2 Consumer Poll Mechanism

2016-02-10 Thread Joe San
when I do consumer.commitOffsets()? On Wed, Feb 10, 2016 at 11:30 PM, Joe San wrote: > I tried to mimic the poll method that we have with the new consumer API in > the 0.9.0.0 version. Here is what I have: > > def readFromKafka() = { > val streams = consumerStreamsMap.get(consumerConfig.topic) &g

Kafka 0.8.2 Consumer Poll Mechanism

2016-02-10 Thread Joe San
I tried to mimic the poll method that we have with the new consumer API in the 0.9.0.0 version. Here is what I have: def readFromKafka() = { val streams = consumerStreamsMap.get(consumerConfig.topic) @tailrec def poll(pollConfig: PollConfig, messages: Seq[String]): Seq[String] = { val i

Kafka 0.8.2 ConsumerGroup Example

2016-02-10 Thread Joe San
I'm following the ConsumerGroup example, https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example How can I specify the batch size of the messages that I want to consume? I see that if I use the SimpleConsumer, I can specify a size that I want to read. How can I do it here with th

Kafka 0.8.2 Consumer

2016-02-10 Thread Joe San
In the following bit of code that I got from https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example /** * gives us the leader partition metadata for the given * host and port! there can be only one broker that is * the leader for the given partitionId */ val partitionM

Re: Kafka Consumer for 0.8.x.x

2016-02-09 Thread Joe San
How could I use Kafka for offset management with the 0.8.2 version of Kafka? On Tue, Feb 9, 2016 at 10:54 PM, Yifan Ying wrote: > Please check out consumer configs. > http://kafka.apache.org/082/documentation.html#consumerconfigs > > On Tue, Feb 9, 2016 at 1:16 PM, Joe San wrote:

Re: Kafka Consumer for 0.8.x.x

2016-02-09 Thread Joe San
ere are two parts to it, but you > probably want the high level consumer, documented here: > http://kafka.apache.org/documentation.html#highlevelconsumerapi > > -Ewen > > On Tue, Feb 9, 2016 at 12:36 PM, Joe San wrote: > > > Could anyone point me to an older version of th

Re: Kafka Consumer for 0.8.x.x

2016-02-09 Thread Joe San
ntation had been included yet. > > -Ewen > > On Tue, Feb 9, 2016 at 8:00 AM, Joe San wrote: > > > Is this intentioal in the Kafka 0.8.2.0 version, > > org.apache.kafka.clients.consumer.KafkaConsumer, the method: > > > > public Map> poll(long timeout) { > &

Kafka Consumer for 0.8.x.x

2016-02-09 Thread Joe San
Is this intentional in the Kafka 0.8.2.0 version, org.apache.kafka.clients.consumer.KafkaConsumer, the method: public Map> poll(long timeout) { return null; } Which version of 0.8.x.x should I use so that I could do a consumer.poll(2000)?

Kafka 0.8.2.0 Consumer example

2016-02-09 Thread Joe San
Could anyone point me to some sample code where I could get some insights on how to write a consumer to do the following: 1. poll at regular intervals 2. get a set of messages during each poll operation 3. process the messages, commit the offset Thanks, Joe
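The three steps above can be sketched in outline with a stdlib stand-in: here a `BlockingQueue` plays the role of the consumer stream, and the commit step is only marked by a comment. This is the shape of the loop, not actual Kafka API code; in the real thing the batch would come from the consumer and the commit would be a call such as `commitOffsets()`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollLoopSketch {
    // Drain up to maxBatch messages, waiting at most timeoutMs for the first one.
    public static List<String> poll(BlockingQueue<String> queue, long timeoutMs, int maxBatch)
            throws InterruptedException {
        List<String> batch = new ArrayList<>();
        // Block briefly for the first message (the "poll at regular intervals" step).
        String first = queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (first != null) {
            batch.add(first);
            // Grab whatever else is immediately available ("a set of messages").
            queue.drainTo(batch, maxBatch - 1);
        }
        return batch;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.add("m1");
        queue.add("m2");
        List<String> batch = poll(queue, 100, 10);
        // process the batch here, then commit the offset of the last message
        System.out.println(batch);
    }
}
```

Repeating this from a timer or loop gives the poll-process-commit cycle the three numbered steps describe.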

Kafka 0.9 Offset Management

2016-02-04 Thread Joe San
Could anyone point me to some code samples or some documentation on where I could find more information about Kafka's Offset management? Currently, I'm using the props.put("enable.auto.commit", "true") props.put("auto.commit.interval.ms", "1000") which I guess commits to the Zookeeper and I u
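For the new (0.9) consumer, those two properties control auto-commit; note that the 0.9 consumer commits offsets to Kafka's internal `__consumer_offsets` topic, not to ZooKeeper. A small sketch of the two modes (server and group names are placeholders):

```java
import java.util.Properties;

public class OffsetProps {
    // Illustrative offset-management settings for the 0.9 consumer.
    public static Properties autoCommit(boolean enabled) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        // true: the consumer commits offsets in the background every interval.
        // false: you commit explicitly (e.g. commitSync()) after processing.
        props.put("enable.auto.commit", String.valueOf(enabled));
        if (enabled) {
            props.put("auto.commit.interval.ms", "1000");
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(autoCommit(false).getProperty("enable.auto.commit"));
    }
}
```

Turning auto-commit off and committing after processing gives at-least-once delivery; with auto-commit on, a crash between the background commit and your processing can lose or re-deliver messages depending on timing.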

Maximum Offset

2016-02-04 Thread Joe San
What is the maximum Offset? I guess it is tied to the data type? Long.MAX_VALUE? What happens after that? Is the commit log reset automatically after it hits the maximum value?
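Offsets are 64-bit signed longs per partition, so the maximum is indeed Long.MAX_VALUE, and Kafka does not wrap or reset the offset; in practice the limit is unreachable. A quick back-of-the-envelope check:

```java
public class OffsetExhaustion {
    public static void main(String[] args) {
        long maxOffset = Long.MAX_VALUE; // offsets are 64-bit signed longs per partition
        long perSecond = 1_000_000L;     // an aggressive 1M messages/sec on one partition
        long perYear = perSecond * 60 * 60 * 24 * 365;
        long years = maxOffset / perYear;
        System.out.println(years); // prints 292471
    }
}
```

Even at a sustained million messages per second on a single partition, exhausting the offset space would take roughly 292,000 years, which is why the question of what happens afterwards never arises in practice.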

Re: Producer code to a partition

2016-02-03 Thread Joe San
pass partition number. > > > > https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html > > Kumar > > On Thu, Feb 4, 2016 at 11:41 AM, Joe San wrote: > > > Kafka users, > > > > The code below is something that I have to wr

Producer code to a partition

2016-02-03 Thread Joe San
Kafka users, The code below is something that I have to write to a Topic! def publishMessage(tsDataPoints: Seq[DataPoint]): Future[Unit] = { Future { logger.info(s"Persisting ${tsDataPoints.length} data-points in Kafka topic ${producerConfig.topic}") val dataPoints = DataPoints("kafkaPr

Re: Apache Kafka Case Studies

2016-02-03 Thread Joe San
M, Jens Rantil wrote: > > > Hi Joe, > > > > This might be interesting: > > https://engineering.linkedin.com/kafka/running-kafka-scale > > > > Cheers, > > Jens > > > > On Wed, Feb 3, 2016 at 4:15 PM, Joe San wrote: > > > > > Dear

Apache Kafka Case Studies

2016-02-03 Thread Joe San
Dear Kafka users, I'm looking for some case studies around using Kafka on big projects. Specifically, I'm looking for some architectural insights into how I could orchestrate my data pipeline using Kafka on an enterprise system. Some pointers on some architectural best practices, slides on how so

Shutting down Producer

2016-01-27 Thread Joe San
Is it mandatory to properly shut down a Kafka producer? I have a single producer instance in my web application. When we deploy / restart this web application, we terminate the JVM process and start the web application all over again afresh. So why should I worry about calling the close method on th

No Kafka Error when no Server

2016-01-26 Thread Joe San
Is this strange or weird? I had no Kafka or Zookeeper running on my local machine and I was expecting an exception, but for some strange reason, I do not see any errors: try { logger.info(s"kafka producer obtained is ${producer}") producer.send( new ProducerRecord[String, String](producerC

Re: Create Kafka Topic Programmatically

2016-01-20 Thread Joe San
e, so the > producer can just start sending data to a topic and the topic will be > created with the default values. > > > > 2016-01-20 13:19 GMT+01:00 Joe San : > > Kafka Users, > > > > How can I create a kafka topic programmatically? > > > > I would li

Create Kafka Topic Programmatically

2016-01-20 Thread Joe San
Kafka Users, How can I create a Kafka topic programmatically? I would like to create the topics when I initialize my application. It should also be in such a way that if the topic already exists, the initialization code should do nothing! Thanks and Regards, Joe

Re: Apache Kafka Topic Design

2016-01-19 Thread Joe San
: > Hi Joe, > > I think you are leaving out all your requirements to be able to get decent > answer from anyone here. > > On Tue, Jan 19, 2016 at 8:47 AM, Joe San wrote: > > > I soon realized that this is a doomed approach. > > > > Could you elaborate? Why would

Apache Kafka Topic Design

2016-01-18 Thread Joe San
Dear Kafka Users, I have been experimenting with Kafka and I'm currently trying to come up with a design for the set of topics that we would need for our use case. We are reading a set of signals from a device and we would be having a set of those devices. Say, we have device1, device2 and so on w

Producer Config Error - Kafka 0.9.0.0

2016-01-18 Thread Joe San
Dear Kafka Users, Is there a bug with the producer config? I have asked a question on Stackoverflow: http://stackoverflow.com/questions/34851412/apache-kafka-producer-config-error As per the documentation, I need to only provide bootstrap.servers but, when I run my producer client, I get a mes

Kafka Consumer and Topic Partition

2016-01-14 Thread Joe San
Kafka Users, I have been trying out a simple consumer example that is supposed to read messages from a specific partition of a topic. I'm not able to get two consumer instances up and running. The second consumer instance is idle! Here is the original post that I created: http://stackoverflow.co

Apache Kafka Usage

2016-01-11 Thread Joe San
Dear Apache Kafka users, I'm currently looking into using Apache Kafka on our infrastructure and I came up with lots of questions that I would like to clarify. I have created a Stackoverflow post here: http://stackoverflow.com/questions/34715556/apache-kafka-for-time-series-data-persistence Coul

New to Apache Kafka

2015-09-15 Thread Joe San
Hi Apache Kafka, I'm evaluating Apache Kafka for one of the projects that I'm into. I have used ActiveMQ in the past which makes using Kafka pretty straightforward. One thing that I do not understand is the need for Zookeeper? I understand what Zookeeper is, but I fail to understand what purpose