Re: How to clear a particular partition?

2017-08-18 Thread Hans Jespersen
Yes, thanks Manikumar! I just tested this and it is indeed all in and working great in 0.11! I thought I would have to wait until 1.0 to be able to use and recommend this in production. I published 100 messages: seq 100 | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --to

Re: Making sure all of you know about Kafka Summit

2017-08-18 Thread M. Manna
I guess it's kinda late since I am already in transit for work. Is there any plan to do something in Europe, e.g. London or some other place? On 18 Aug 2017 4:41 pm, "Gwen Shapira" wrote: > Hi, > > I figured everyone in this list kinda cares about Kafka, so just making > sure you all know. > > K

Making sure all of you know about Kafka Summit

2017-08-18 Thread Gwen Shapira
Hi, I figured everyone in this list kinda cares about Kafka, so just making sure you all know. Kafka Summit SF happens in about a week: https://kafka-summit.org/events/kafka-summit-sf/ August 28 in San Francisco. It is not too late to register. The talks are pretty great (and very relevant to e

Kafka Transactions in Connect

2017-08-18 Thread Bryan Baugher
I'm interested in knowing if there's any plan or idea to add transactions to Connect. We make use of the JDBC source connector and its bulk extract mode. It would be great if the connector could create a transaction around the entire extraction in order to ensure the entire table's data made it int

Re: Different Data Types under same topic

2017-08-18 Thread SenthilKumar K
+ dev experts for inputs. --Senthil On Fri, Aug 18, 2017 at 9:15 PM, SenthilKumar K wrote: > Hi Users, We have planned to use Kafka for one of our use cases: to collect data > from different servers and persist it into a message bus .. > > Flow Would Be : > Source --> Kafka --> Streaming Engine --> Repo

Re: How to clear a particular partition?

2017-08-18 Thread Sean Glover
Alternatively you can set topic overrides for retention.bytes. By turning down file.delete.delay.ms, the change should take effect almost immediately after the next log cleanup cycle. # Apply topic config override $ kafka-configs --alter --entity-type topics --entity-name test --zookeeper localhost:32181 --ad
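A complete version of that override sequence might look like the sketch below. The topic name `test` and the ZooKeeper address are placeholders, and the exact retention values are just illustrative; adjust for your cluster.

```shell
# 1. Temporarily shrink the topic's retention so old segments become eligible
#    for deletion, and lower file.delete.delay.ms so deletion happens quickly:
bin/kafka-configs.sh --alter --entity-type topics --entity-name test \
  --zookeeper localhost:2181 \
  --add-config retention.bytes=1000,file.delete.delay.ms=100

# 2. Wait for the next log cleanup cycle (governed by
#    log.retention.check.interval.ms on the broker), then remove the
#    overrides so the topic falls back to the broker defaults:
bin/kafka-configs.sh --alter --entity-type topics --entity-name test \
  --zookeeper localhost:2181 \
  --delete-config retention.bytes,file.delete.delay.ms
```

Note this truncates the whole topic rather than a single partition, and readers racing with the cleanup may see data disappear mid-scan.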

Re: How to clear a particular partition?

2017-08-18 Thread Manikumar
This feature got released in Kafka 0.11.0.0. You can use kafka-delete-records.sh script to delete data. On Sun, Aug 13, 2017 at 11:27 PM, Hans Jespersen wrote: > This is an area that is being worked on. See KIP-107 for details. > > https://cwiki.apache.org/confluence/display/KAFKA/KIP- > 107%3A+
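For reference, a minimal invocation of that script (Kafka >= 0.11.0.0) looks like the following; the topic name, partition, and offset are placeholders:

```shell
# The offsets file tells the broker to delete all records *before* the given
# offset in each listed partition:
cat > /tmp/delete-records.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "test", "partition": 0, "offset": 42 }
  ]
}
EOF

# Issue the delete against a live broker. An offset of -1 means "truncate up
# to the high watermark", i.e. clear the whole partition.
bin/kafka-delete-records.sh --bootstrap-server localhost:9092 \
  --offset-json-file /tmp/delete-records.json
```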

Different Data Types under same topic

2017-08-18 Thread SenthilKumar K
Hi Users, We have planned to use Kafka for one of our use cases: to collect data from different servers and persist it into a message bus .. Flow Would Be : Source --> Kafka --> Streaming Engine --> Reports We would like to store different types of data in the same topic; at the same time, the data should be accessed easil

Re: Querying consumer groups programmatically (from Golang)

2017-08-18 Thread Dan Markhasin
We are also collecting consumer group metrics from Kafka - we didn't want to add extra unnecessary dependencies (such as Burrow, which is also overkill for what we need), so we just run a script every minute on the brokers that parses the output of kafka-consumer-groups.sh and uploads it to an http
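A rough sketch of the parsing half of such a cron job is below. The column layout of `kafka-consumer-groups.sh --describe` varies between Kafka versions, so this locates the LAG column by its header name instead of hard-coding a position; it also assumes the header is the first line of input (real output may include warning lines you need to filter first).

```shell
# Sum the LAG column from kafka-consumer-groups.sh --describe output on stdin.
sum_lag() {
  awk '
    NR == 1 { for (i = 1; i <= NF; i++) if ($i == "LAG") col = i; next }
    col && $col ~ /^[0-9]+$/ { sum += $col }
    END { print sum + 0 }'
}

# Usage (requires a running broker; group name is hypothetical):
#   bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
#     --describe --group my-group | sum_lag
# The resulting number can then be POSTed to your metrics endpoint with curl.
```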

Re: Avro With Kafka

2017-08-18 Thread Stephen Durfey
Yes, the Confluent SerDes support nested Avro records. Under the covers they use Avro classes (DatumReader and DatumWriter) to carry out those operations. So, as long as you're sending valid Avro data to be produced or consumed, the Confluent SerDes will handle it just fine.
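For a quick smoke test of nested records without writing Java, Confluent's kafka-avro-console-producer (shipped with the Confluent Platform, not plain Apache Kafka) can take a nested schema inline. The topic name, ports, and schema below are illustrative:

```shell
# Produce a record whose value schema contains a nested record (Address):
bin/kafka-avro-console-producer --broker-list localhost:9092 --topic users \
  --property schema.registry.url=http://localhost:8081 \
  --property value.schema='{
    "type": "record", "name": "User", "fields": [
      {"name": "id", "type": "string"},
      {"name": "address", "type": {"type": "record", "name": "Address",
        "fields": [{"name": "city", "type": "string"}]}}
    ]
  }'
# Then type a matching JSON datum on stdin, e.g.:
# {"id": "u1", "address": {"city": "Oslo"}}
```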

Re: Different Schemas on same Kafka Topic

2017-08-18 Thread Stephen Durfey
You're welcome. I'm glad it was helpful. I think it is a good idea to maintain a schema that can be evolved per topic and to configure the schema registry with the Avro evolution rules that fit your use case. While it is possible to have many different non-compatible schemas per topic, it's

Re: Global KTable value is null in Kafka Stream left join

2017-08-18 Thread Damian Guy
Hi, If the userData value is null then that would usually mean that there wasn't a record with the provided key in the global table. So you should probably check if you have the expected data in the global table and also check that your KeyMapper is returning the correct key. Thanks, Damian On

Global KTable value is null in Kafka Stream left join

2017-08-18 Thread Duy Truong
Hi everyone, When using left join, I can't get the value of Global KTable record in ValueJoiner parameter (3rd parameter). Here is my code: val userTable: GlobalKTable[String, UserData] = builder.globalTable(Serdes.String(), userDataSede, userTopic, userDataStore) val jvnStream: KStream[String,

Question on Kafka Producer Transaction Id

2017-08-18 Thread Sameer Kumar
Hi, I have a question on Kafka's transactional.id config, related to the atomic writes feature of Kafka 0.11. If I have multiple producers across different JVMs, do I need to set transactional.id differently for each JVM? Does transactional.id control the beginning and ending of transactions? If it's not set uniqu

Re: Continue to consume messages when exception occurs in Kafka Stream

2017-08-18 Thread Duy Truong
OK, I got it, thank you Damian, Eno. On Fri, Aug 18, 2017 at 4:30 PM, Damian Guy wrote: > Duy, if it is in your logic then you need to handle the exception yourself. > If you don't then it will bubble out and kill the thread. > > On Fri, 18 Aug 2017 at 10:27 Duy Truong > wrote: > > > Hi Eno, > >

Re: Continue to consume messages when exception occurs in Kafka Stream

2017-08-18 Thread Damian Guy
Duy, if it is in your logic then you need to handle the exception yourself. If you don't then it will bubble out and kill the thread. On Fri, 18 Aug 2017 at 10:27 Duy Truong wrote: > Hi Eno, > > Sorry for the late reply, it's not a deserialization exception, it's a pattern > matching exception in my

Re: Continue to consume messages when exception occurs in Kafka Stream

2017-08-18 Thread Duy Truong
Hi Eno, Sorry for the late reply; it's not a deserialization exception, it's a pattern matching exception in my logic. val jvnStream: KStream[String, JVNModel] = sourceStream.leftJoin(userTable, (eventId: String, datatup: (DataLog, Option[CrawlData])) => { datatup._1.rawData.userId

Re: Topic Creation fails - Need help

2017-08-18 Thread Raghav
The broker is 100% running. The ZK path shows /brokers/ids/1. On Fri, Aug 18, 2017 at 1:02 AM, Yang Cui wrote: > please use a ZK client to check the path /brokers/ids in ZK > > Sent from my iPhone > > > On 18 Aug 2017, at 3:14 PM, Raghav wrote: > > > > Hi > > > > I have 1 broker and 1 ZooKeeper on the same VM. I am using K

Re: Topic Creation fails - Need help

2017-08-18 Thread Yang Cui
Please use a ZK client to check the path /brokers/ids in ZK. Sent from my iPhone > On 18 Aug 2017, at 3:14 PM, Raghav wrote: > > Hi > > I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1. > I am trying to create a topic using the below command: > > "bin/kafka-topics.sh --create --zookeeper local
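With stock Kafka, that check can be done with the bundled zookeeper-shell tool; the ZooKeeper address is assumed to be localhost:2181 here:

```shell
# List the broker ids currently registered in ZooKeeper:
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
# An empty list [] means no broker is registered; [1] means broker id 1 is up.
```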

Re: Topic Creation fails - Need help

2017-08-18 Thread Yang Cui
Your broker is not running. Sent from my iPhone > On 18 Aug 2017, at 3:14 PM, Raghav wrote: > > Hi > > I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1. > I am trying to create a topic using the below command: > > "bin/kafka-topics.sh --create --zookeeper localhost:2181 > --replication-facto

Topic Creation fails - Need help

2017-08-18 Thread Raghav
Hi I have 1 broker and 1 ZooKeeper on the same VM. I am using Kafka 0.10.2.1. I am trying to create a topic using the below command: "bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 16 --topic topicTest04" It fails with the below error. Just wondering why

Re: Querying consumer groups programmatically (from Golang)

2017-08-18 Thread Gabriel Machado
Hello, Could you tell me if Burrow or Remora is compatible with SSL Kafka clusters? Gabriel. 2017-08-16 15:39 GMT+02:00 Gabriel Machado : > Hi Jens and Ian, > > Very useful projects :). > What's the difference between the two tools? > Do they support Kafka SSL clusters? > > Thanks, > Gab