Hi Avinash,
This is working as expected. Since topic deletion is not enabled, the topics get
marked for deletion but never actually get deleted. Once you enable topic
deletion (delete.topic.enable=true),
the topics that are marked get deleted.
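For illustration, a minimal sketch of the steps involved, assuming a broker
config at config/server.properties, ZooKeeper on localhost:2181, and a
hypothetical topic named my-topic:

# in config/server.properties (needs a broker restart to take effect)
delete.topic.enable=true

# mark the topic for deletion, then confirm it is gone
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic
bin/kafka-topics.sh --zookeeper localhost:2181 --list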
Once the topic is marked for deletion, messages will still be available. It
is w
Hi Mohit,
http://search-hadoop.com/ is great for searching the mailing lists for
information like this.
A quick search for "0.9.0 release" (link:
http://search-hadoop.com/?q=0.9.0+release&startDate=144408960&endDate=144668160&fc_project=Kafka)
shows some great threads on the current statu
+ Kafka Dev team, to see if the Kafka Dev team knows of or can recommend any
auth engine for producers/consumers.
Thanks,
Bhavesh
Please pardon me; I accidentally sent the previous blank email.
On Tue, Nov 3, 2015 at 9:52 PM, Bhavesh Mistry
wrote:
> On Sun, Nov 1, 2015 at 11:15 PM, Bhavesh Mistry
> wrote:
>>
On Sun, Nov 1, 2015 at 11:15 PM, Bhavesh Mistry
wrote:
> Hi All,
>
> Has anyone used Apache Ranger as an authorization engine for Kafka topic
> creation, consumption (read), and write operations on a topic? I am looking
> at having an audit log and regulating consumption/writes to a particular topic
> (
Hi,
I am trying to delete a Kafka topic; the version I am using is 2.9.2-0.8.2. I am
using dockerized Kafka to delete a topic that was already created, and I didn't
set "delete.topic.enable=true". When I list the topics, the topic shows as
"marked for deletion".
Is this a bug? However when I am s
Thanks for the detailed answer.
Regards,
LCassa
On Tue, Nov 3, 2015 at 10:54 AM, Todd Palino wrote:
> We use loadbalancers for our producer configurations, but what you need to
> keep in mind is that that connection is only used for metadata requests.
> The producer queries the loadbalancer IP for me
Is there a tentative release date for Kafka 0.9.0?
Hi Fajar,
Please see my response to a similar email here:
http://search-hadoop.com/m/uyzND1ifZt65CCBS
If you still have questions, please do not hesitate to ask.
Thank you,
Grant
On Tue, Nov 3, 2015 at 5:07 PM, Fajar Maulana Firdaus
wrote:
> I see, thank you for your explanation, will the cli
I see, thank you for your explanation. Will the 0.9.0.0 client be
backward compatible with 0.8.2.2 Kafka?
On Wed, Nov 4, 2015 at 2:52 AM, Ewen Cheslack-Postava wrote:
> 0.9.0.0 is not released yet, but the last blockers are being addressed and
> release candidates should follow soon. The docs
Is there a place where we can find all previously streamed/recorded meetups?
Thank you,
Grant
On Tue, Nov 3, 2015 at 2:07 PM, Ed Yakabosky
wrote:
> I'm sorry to hear that, Lukas. I have heard that people are starting to do
> carpools via rydeful.com for some of these meetups.
>
> Additionally,
I'm sorry to hear that, Lukas. I have heard that people are starting to do
carpools via rydeful.com for some of these meetups.
Additionally, we will live stream and record the presentations, so you can
participate remotely.
Ed
On Tue, Nov 3, 2015 at 10:43 AM, Lukas Steiblys
wrote:
> This is sa
0.9.0.0 is not released yet, but the last blockers are being addressed and
release candidates should follow soon. The docs there are just staged as we
prepare for the release (note, e.g., that the latest release on the
downloads page http://kafka.apache.org/downloads.html is still 0.8.2.2).
-Ewen
Hannu,
Could you paste the related server-side request logs from before this exception
was thrown, if you have any? In particular, we are interested in the LeaderAndISR
request reception traces.
And to clarify, when you say you "upgrade the system tests with the newest
version", do you mean all the brokers are using thi
Hi,
I saw that there is a new Kafka client, 0.9.0, here:
http://kafka.apache.org/090/javadoc/index.html
So what are the Maven coordinates for this version? I am asking because it
has the KafkaConsumer API, which doesn't exist in 0.8.2.
Thank you
We use loadbalancers for our producer configurations, but what you need to
keep in mind is that that connection is only used for metadata requests.
The producer queries the loadbalancer IP for metadata for the topic, then
disconnects and reconnects directly to the Kafka brokers for producing
messag
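For what it's worth, a minimal sketch of that setup with the Java producer;
kafka-vip.example.com:9092 is a hypothetical load-balancer VIP, not anything
from Todd's configuration:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LbProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The VIP is only contacted for the initial metadata fetch; it does not
        // carry the produce traffic itself.
        props.put("bootstrap.servers", "kafka-vip.example.com:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Sends connect directly to the partition leaders returned in the metadata.
        producer.send(new ProducerRecord<>("test", "key", "value"));
        producer.close();
    }
}

Note this assumes the producer can also reach the individual broker addresses
returned in the metadata, not just the VIP.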
This is sad news. I was looking forward to finally going to a Kafka or Samza
meetup. Going to Mountain View for a meetup is just unrealistic with 2h
travel time each way.
Lukas
-----Original Message-----
From: Ed Yakabosky
Sent: Tuesday, November 3, 2015 10:36 AM
To: users@kafka.apache.org
Hi all,
Two corrections to the invite:
1. The invitation is for November 18, 2015. *NOT 2016.* I was a little
hasty...
2. LinkedIn has finished remodeling our broadcast room, so we are going
to host the meetup in Mountain View, not San Francisco.
We've arranged for speakers from H
Hi,
Has anyone used load balancers between publishers and Kafka brokers? I
want to do an active-passive setup of Kafka across two datacenters. My question
is: can I add a GSLB layer between these two Kafka clusters to configure
automatic failover while publishing data?
Thanks,
LCassa
I'm copying the Kafka user list here:
> We have a 3-node ZooKeeper cluster and a Kafka cluster (3 nodes) using this
> ZooKeeper cluster. We want to migrate the ZooKeeper nodes to better boxes
> (hardware improvements). We have already set up 3 new nodes.
>
> Can someone tell me what is the safe way t
Hannu,
Thanks for reporting this. Filed
https://issues.apache.org/jira/browse/KAFKA-2730 for further investigation.
If you have more input, please add it to the jira.
Jun
On Tue, Nov 3, 2015 at 6:55 AM, Hannu Valtonen
wrote:
> Hi,
>
> I updated our test system to use Kafka from latest revision
Hi,
I updated our test system to use Kafka from the latest revision
7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
[2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request
Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Le
Change the order of your commands:
*bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test*
On Tue, Nov 3, 2015 at 7:23 AM, Kishore N C wrote:
> Hi all,
>
> I have a 3-node Kafka cluster. I'm running into the following error when I
> try to use the console producer to write to
Thanks for the reply.
Maybe this will be useful: I've noticed that after I tried to use the 0.9 client
with a 0.8.2 server, my data got corrupted. I wasn't able to read data from
existing topics after I switched back to the 0.8.2 client.
Cleaning Kafka's and ZooKeeper's data folders and creating the topic from scratch
solved
Hi all,
I have a 3-node Kafka cluster. I'm running into the following error when I
try to use the console producer to write to a topic that does *not* yet
exist. I have ensured that "auto.create.topics.enable=true" is set in
server.properties.
The error:
ubuntu@ip-XX-X-XXX-XX:/usr/local/kafka$ bin/kafk
Hi, Mayuresh. No, this log is from before restart 61.
But I found some interesting log lines about ZK on the problem broker:
root@kafka3d:~# zgrep 'zookeeper state changed (Expired)' /var/log/kafka/*/*
/var/log/kafka/2015-10-30/kafka-2015-10-30.log.gz:[2015-10-30 23:02:31,001]
284371992 [main-EventThread] INFO or
Hello,
If I understood topic deletion correctly, the controller waits for all brokers
to ack the partition deletion. So if a partition is assigned to a dead broker,
deletion will never happen.
It seems that rebalancing pretty much works the same way, so we can't rebalance
partitions (replaci
Hi guys,
To get straight to the point, we have a local "cache" (a cache in the Guava
sense: https://github.com/google/guava/wiki/CachesExplained) in our server
application that stores info about the current leader (used to construct
SimpleConsumer).
We want to have that information updated as soon as
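For context, a minimal sketch of that kind of leader cache with Guava; the
key format, lookupLeader, and the 30-second refresh interval are assumptions
for illustration, not the actual application code:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

final class LeaderCache {
    // Hypothetical lookup: resolve the current leader (host:port) for a
    // "topic-partition" key from broker metadata; replace with a real
    // metadata-request-based lookup.
    static String lookupLeader(String topicPartition) {
        return "broker1.example.com:9092"; // placeholder
    }

    // Leader per topic-partition, refreshed so a leader change is picked up
    // within roughly the refresh interval (assumption: 30s staleness is OK).
    static final LoadingCache<String, String> leaders = CacheBuilder.newBuilder()
            .refreshAfterWrite(30, TimeUnit.SECONDS)
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String topicPartition) {
                    return lookupLeader(topicPartition);
                }
            });
}

An alternative (or complement) to the timed refresh would be to invalidate the
entry whenever a fetch fails with a not-leader error, so the reload happens
immediately rather than after the refresh interval.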