In general, only writers, not readers, should trigger auto topic creation. So
a topic can be auto-created by the producer, but not by the consumer.
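For example (broker property and console producer per the standard 0.8
tooling; topic name and port are illustrative), the first produce request to
a missing topic is what creates it:
  # server.properties
  auto.create.topics.enable=true

  # producing to a non-existent topic triggers auto creation
  echo "hello" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-new-topic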
Thanks,
Jun
On Thu, Oct 2, 2014 at 2:44 PM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> auto.create.topics.enable configuration o
Thank you all, I am able to run gradle now. Here is my mistake: I installed
gradle both via apt-get and from the gradle website, but the system
automatically picked the apt-get gradle to run, and that version is quite
outdated. What I did was apt-get remove gradle and add the higher-version
gradle to /etc/environment, and now it works.
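For anyone hitting the same thing, the fix was roughly this (paths and the
Gradle version are illustrative):
  sudo apt-get remove gradle
  # unpack the Gradle distribution downloaded from gradle.org, e.g. under /opt/gradle-2.1,
  # then add its bin directory to the PATH line in /etc/environment and re-login, e.g.:
  # PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle-2.1/bin"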
Hmm, not sure what the issue is. You can also just copy the following files
from the 0.8.1 branch.
gradle/wrapper/gradle-wrapper.jar
gradle/wrapper/gradle-wrapper.properties
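Something along these lines should do it from a trunk checkout (assuming the
remote is named origin and the 0.8.1 branch has been fetched):
  git checkout origin/0.8.1 -- gradle/wrapper/gradle-wrapper.jar gradle/wrapper/gradle-wrapper.properties
  ./gradlew jar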
Thanks,
Jun
On Thu, Oct 2, 2014 at 2:05 PM, Sa Li wrote:
> I git clone the latest kafka package, why can't I build the packa
Can you follow the example in quickstart (
http://kafka.apache.org/documentation.html#quickstart)?
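For reference, the quickstart boils down to roughly these commands, each run
in its own terminal from the Kafka directory (topic name is illustrative):
  bin/zookeeper-server-start.sh config/zookeeper.properties
  bin/kafka-server-start.sh config/server.properties
  bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
  bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
  bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning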
Thanks,
Jun
On Thu, Oct 2, 2014 at 12:01 PM, Sa Li wrote:
> Hi, all
>
> Here I want to run the example code that comes with the kafka package. I
> ran it as the readme says:
>
> To run the demo using scripts:
>
We already cut an 0.8.2 release branch. The plan is to have the remaining
blockers resolved before releasing it. Hopefully this will just take a
couple of weeks.
https://issues.apache.org/jira/browse/KAFKA-1663?filter=-4&jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%2
Hello Apache Kafka community,
auto.create.topics.enable configuration option docs state:
"Enable auto creation of topic on the server. If this is set to true then
attempts to produce, consume, or fetch metadata for a non-existent topic
will automatically create it with the default replication fact
Yes, I did it both ways: 1. followed the instructions at
http://www.gradle.org/installation, 2. apt-get install gradle.
Thanks
On Thu, Oct 2, 2014 at 2:21 PM, Guozhang Wang wrote:
> Did you install gradle as the README states?
>
> "You need to have [gradle](http://www.gradle.org/installation) ins
Just to clarify, I am using a 3-node zkServer ensemble, myid: 1, 2, 3. But in
the server.properties of each broker, I set zk.connect to localhost, which
means the broker info is stored in the local zkServer. I know it is a bit
weird, rather than having the broker info assigned automatically by the
zkServer leader.
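Concretely, each broker's server.properties has something like the first line
below, instead of listing the whole ensemble (hostnames are illustrative; the
0.8 property name is zookeeper.connect):
  zookeeper.connect=localhost:2181
  # vs. the more usual form listing every ensemble member:
  # zookeeper.connect=zk1:2181,zk2:2181,zk3:2181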
Daniel, thanks for the reply.
It is still a learning curve for me to set up the cluster; ultimately we want
to connect the Kafka cluster to a Storm cluster. As you mentioned, it seems a
single broker per node is more efficient, but is it good at handling multiple
topics? For my case, say I can build the 3-
Did you install gradle as the README states?
"You need to have [gradle](http://www.gradle.org/installation) installed."
Guozhang
On Thu, Oct 2, 2014 at 1:55 PM, Sa Li wrote:
> Thanks Guozhang
>
> I tried this as in KAFKA-1490:
>
> git clone https://git-wip-us.apache.org/repos/asf/kafka.git
>
I can't really get gradle to run through, even after cloning the latest trunk.
Is anyone having the same issue?
On Thu, Oct 2, 2014 at 1:55 PM, Sa Li wrote:
> Thanks Guozhang
>
> I tried this as in KAFKA-1490:
>
> git clone https://git-wip-us.apache.org/repos/asf/kafka.git
>
> cd kafka
>
> gradle
>
>
> but fails to bui
I git cloned the latest kafka package; why can't I build the package?
gradle
FAILURE: Build failed with an exception.
* Where:
Script '/home/ubuntu/kafka/gradle/license.gradle' line: 2
* What went wrong:
A problem occurred evaluating script.
> Could not find method create() for arguments [downloa
Thanks Guozhang
I tried this as in KAFKA-1490:
git clone https://git-wip-us.apache.org/repos/asf/kafka.git
cd kafka
gradle
but it fails to build:
FAILURE: Build failed with an exception.
* Where:
Script '/home/stuser/trunk/gradle/license.gradle' line: 2
* What went wrong:
A problem occurr
Yes, here is a Vagrant VirtualBox setup:
https://github.com/stealthly/scala-kafka
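A rough sketch of the flow with that repo (assuming Vagrant and VirtualBox are
already installed; the repo's README has the exact steps):
  git clone https://github.com/stealthly/scala-kafka
  cd scala-kafka
  vagrant up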
On Thu, Oct 2, 2014 at 3:51 PM, Mingtao Zhang
wrote:
> Thanks for the response!
>
> Does anyone have it working on VirtualBox, which is the case for Windows/Mac?
>
> How do we configure the network adapter?
>
> Best Reg
Hello Dayo,
This is a known issue. Today, Kafka's log rolling / cleaning policy depends on
the creation timestamp of the segment files, which can be modified upon
partition migration / broker restart; this can cause the server to not honor
the specified log cleaning config. Some more details
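One way to see the effect is to compare the segment file modification times
before and after a restart / migration (log directory and topic name are
illustrative):
  ls -l /tmp/kafka-logs/my-topic-0/*.log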
Hello Sa,
KAFKA-1490 introduces a new step of downloading the wrapper; details are
included in the latest README file.
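Concretely, the flow from the updated README is roughly this (it needs a
standalone Gradle install to bootstrap the wrapper):
  gradle          # run once in the checkout to download gradle/wrapper/gradle-wrapper.jar
  ./gradlew jar   # then build with the wrapper as before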
Guozhang
On Thu, Oct 2, 2014 at 11:00 AM, Sa Li wrote:
> Thanks, Jay,
>
> Here is what I did this morning, I git clone the latest version of kafka
> from git, (I am currently
Thanks for the response!
Does anyone have it working on VirtualBox, which is the case for Windows/Mac?
How do we configure the network adapter?
Best Regards,
Mingtao
On Tue, Sep 30, 2014 at 3:31 PM, Joe Stein wrote:
> << Is there a 'Kafka->HDFS with Camus' docker as well on
Hi, all
Here I want to run the example code that comes with the kafka package. I ran
it as the readme says:
To run the demo using scripts:
1. Start Zookeeper and the Kafka server
2. For simple consumer demo, run bin/java-simple-consumer-demo.sh
3. For unlimited producer-consumer run, run bin/java-
Having the same question: what happened to the 0.8.2 release, and when is it
supposed to happen?
Thanks.
On Tue, Sep 30, 2014 at 12:49 PM, Jonathan Weeks
wrote:
> I was one asking for 0.8.1.2 a few weeks back, when 0.8.2 was at least 6-8
> weeks out.
>
> If we truly believe that 0.8.2 will go “golden” a
Thanks, Jay,
Here is what I did this morning: I git cloned the latest version of kafka
from git (I am currently using kafka 0.8.0; now it is 0.8.1.1), and it uses
gradle to build the project. I am having trouble building it. I installed
gradle and ran ./gradlew jar in the kafka root directory, and it comes out
with:
Er
Hi,
I've noticed an interesting behaviour which I hope someone can fully
explain.
I have a 3-node Kafka cluster with a setting of log.retention.hours=168 (7
days) and log.segment.bytes=536870912.
I recently restarted one of the nodes and its uptime is now 3 days behind the
other 2.
After abo
Thank you, Neha. I appreciate your help.
--
*Have a nice day.*
Regards,
Aniket Kulkarni.
The partition reassignment process only completes after the new replicas are
fully caught up and the old replicas are deleted. So, if the old replica is
down, the process can never complete, which is what you observed. In your
case, if you just want to replace a broker host with a new one, instead of
u
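For reference, the reassignment process above is the one driven by the
kafka-reassign-partitions.sh tool, e.g. (zookeeper address, topic, and broker
ids are illustrative):
  # reassign.json:
  # {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[2,3]}]}
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file reassign.json --execute
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file reassign.json --verify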
What version of zookeeper are you running?
First check to see if there is a znode for the "/admin/reassign_partitions" in
zookeeper.
If so, you could try a graceful shutdown of the controller broker.
Once the new controller is elected on another broker, look in zk at the
znode "/admin/reassign_pa