I added an issue to Jira regarding this, with a patch against trunk included:
https://issues.apache.org/jira/browse/KAFKA-675
This saves us the hassle of firewalling Kafka in our environments, which is
a definite win. We have a patched 0.7.2 version that works fine for us.
On Fri, Dec 14, 2012 at
I see.
Thanks for your response, Tom.
Jason
On Mon, Dec 17, 2012 at 3:41 PM, Tom Brown wrote:
> Each message does not have a time stamp. Groups of messages (I think the
> default is around 500mb) are stored in individual files, and the time stamp
> parameter will find the offset at the beginning of the file that has that
> time stamp -- not really helpful for your use case.
Each message does not have a time stamp. Groups of messages (I think the
default is around 500mb) are stored in individual files, and the time stamp
parameter will find the offset at the beginning of the file that has that
time stamp -- not really helpful for your use case.
The accepted solution is
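For reference, here is a rough sketch of the kind of lookup-then-fetch Tom is
describing, using the Kafka 0.7 javaapi SimpleConsumer. The class names and
signatures are recalled from the 0.7 API, and the broker host, port and topic
are placeholders, so treat this as illustrative rather than definitive.

```java
import java.nio.ByteBuffer;

import kafka.api.FetchRequest;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.MessageAndOffset;

public class OffsetsBeforeExample {
    public static void main(String[] args) throws Exception {
        // Placeholder broker/topic values -- adjust for your environment.
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 30 * 1000, 64 * 1024);

        String topic = "my-topic";
        int partition = 0;

        // Ask for at most one offset whose segment file starts at or before
        // this wall-clock time. The returned offset is a segment boundary,
        // so messages fetched from it may pre-date the requested time.
        long targetTime = System.currentTimeMillis() - 60 * 60 * 1000L; // ~1 hour ago
        long[] offsets = consumer.getOffsetsBefore(topic, partition, targetTime, 1);

        if (offsets.length > 0) {
            // Fetch a batch of messages starting from the segment-boundary offset.
            FetchRequest request =
                new FetchRequest(topic, partition, offsets[0], 1024 * 1024);
            ByteBufferMessageSet messages = consumer.fetch(request);
            for (MessageAndOffset mo : messages) {
                ByteBuffer payload = mo.message().payload();
                byte[] bytes = new byte[payload.remaining()];
                payload.get(bytes);
                System.out.println(new String(bytes, "UTF-8"));
            }
        }
        consumer.close();
    }
}
```

Because individual messages carry no timestamp, any filtering finer than the
segment boundary has to happen in the application, for example by embedding a
timestamp in the payload as Jason does further down.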
+1 to what Russel said: we could keep the repo but leave it essentially empty,
with a README pointing anyone who finds it to
https://github.com/apache/kafka
On Mon, Dec 17, 2012 at 2:44 PM, Joel Koshy wrote:
> +1 on deleting it. Before doing that, do you think it makes sense to create
> a stand-alone historical branch (in apache-git) off the old github master
> to preserve the history?
+1 on deleting it. Before doing that, do you think it makes sense to create
a stand-alone historical branch (in apache-git) off the old github master
to preserve the history? i.e., we probably will never need to look back
that far, but if we *ever* want to look back that far we can.
Joel
On Mon
For SEO purposes, I suggest we empty it and leave a link to the new repo.
On Dec 17, 2012 11:05 AM, "Jay Kreps" wrote:
> Any objections to my deleting the old github kafka repository. We kept it
> around for posterity and to preserve the version control history. But I
> found that it confuses people as to where we are hosted.
Hm, alright. I haven't really used the method for anything besides getting
the first and last offset (using -1 and -2 as timestamps, IIRC) of a
topic+partition combination.
Maybe someone else can shed some light on this?
Cheers,
Mathias
On 17 December 2012 19:51, Jason Huang wrote:
> Mathias,
>
> Thanks for the response.
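For completeness, a minimal sketch of the first/last-offset lookup Mathias
mentions, again assuming the 0.7 javaapi SimpleConsumer. As far as I recall,
the special values are -2 for the earliest offset and -1 for the latest (also
exposed as kafka.api.OffsetRequest.EarliestTime/LatestTime), so double-check
against your version.

```java
import kafka.javaapi.consumer.SimpleConsumer;

public class EarliestLatestOffsetExample {
    // Special "time" values understood by getOffsetsBefore
    // (as far as I recall: -2 = earliest offset, -1 = latest offset).
    private static final long EARLIEST = -2L;
    private static final long LATEST = -1L;

    public static void main(String[] args) throws Exception {
        // Placeholder broker/topic values.
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 30 * 1000, 64 * 1024);

        String topic = "my-topic";
        int partition = 0;

        long[] first = consumer.getOffsetsBefore(topic, partition, EARLIEST, 1);
        long[] last = consumer.getOffsetsBefore(topic, partition, LATEST, 1);

        System.out.println("first offset: " + (first.length > 0 ? first[0] : -1));
        System.out.println("last offset:  " + (last.length > 0 ? last[0] : -1));

        consumer.close();
    }
}
```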
The zookeeper connections are persistent, so it depends on the number of
clients more than the data flow rate on the producer side. If you are
using a VIP-based producer, then there is no connection from the
producer process to zookeeper at all. If you are using a zookeeper-based
producer, then yo
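To make the two producer styles concrete, here is a rough Kafka 0.7
configuration sketch. The property names are as I remember them from 0.7, and
the hosts, ports and broker id are placeholders, so treat it as a sketch
rather than a definitive setup.

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class ProducerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        // Option 1: ZooKeeper-based producer -- each producer process keeps a
        // persistent ZooKeeper connection for broker/partition discovery.
        props.put("zk.connect", "zk-host:2181");

        // Option 2: static/VIP-based producer -- no ZooKeeper connection at all.
        // Comment out zk.connect above and point broker.list at the VIP instead.
        // The format is brokerId:host:port.
        // props.put("broker.list", "0:kafka-vip:9092");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new ProducerData<String, String>("my-topic", "hello"));
        producer.close();
    }
}
```

With broker.list set, the producer only opens connections to the listed
brokers (or the VIP), so there is no producer-to-ZooKeeper connection to
account for.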
Any objections to my deleting the old github kafka repository. We kept it
around for posterity and to preserve the version control history. But I
found that it confuses people as to where we are hosted:
https://github.com/kafka-dev/kafka
We had an active replica of our apache git/svn replica, an
Mathias,
Thanks for the response. I am not sure if this timestamp is the Unix time
or not. I've tried the following:
Create 3 messages on the same topic and partition, like this:
1355769714152: Jason has a new message 1
1355769964900: Jason has a new message 2
1355769980296: Jason has a new message 3
The SimpleConsumer API [1] has a method called getOffsetsBefore which takes
a topic, partition, timestamp (UNIX time, I assume, since it's a long) and an
integer limit on how many offsets to get.
Might not solve your problem *exactly*, but could be useful, unless you're
using the ConsumerConnector?
[1]: ht
Hello,
Is it possible to fetch messages from the Kafka message queue since a
specific time? For example, a user may subscribe to a topic and the
producer will continuously publish messages related to this topic. The
first time this user logs in, we will fetch all the messages from the
beginning. H
Currently, a partition is the smallest unit at which we distribute data among
consumers (in the same consumer group). So, if the number of consumers is larger
than the total number of partitions in a Kafka cluster (across all
brokers), some consumers will never get any data.
Thanks,
Jun
On Mon, Dec 17, 201
You will need to ask the Sensi guys. Likely, 0.7.6 corresponds to some 0.7
revision. However, if you use the released 0.7.2 jar, it is likely to work.
Thanks,
Jun
On Mon, Dec 17, 2012 at 12:53 AM, 永辉 赵 wrote:
>
> Hi all,
>
> I see that SenseDB uses Kafka 0.7.6, but I didn't find this branch or tag
> in GitHub.
0.7.2 is the latest release; there has been no other release on the 0.7.x code
path as of yet.
On Dec 17, 2012, at 3:53 AM, 永辉 赵 wrote:
>
> Hi all,
>
> I see that SenseDB uses Kafka 0.7.6, but I didn't find this branch or tag in
> GitHub.
> There are only 0.8, 0.7.2, 0.7.1 and 0.7 branches.
>
>