Thanks, Jun.
On Feb 21, 2013, at 11:26 PM, Jun Rao wrote:
> The remaining 0.8 blockers are tracked in
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+development
>
> Thanks,
>
> Jun
>
> On Thu, Feb 21, 2013 at 9:14 PM, graham sanderson wrote:
>
>> Thanks, so if I underst
Sounds good. Thanks for the input, kind sir!
Jay Kreps wrote:
You can do this and it should work fine. You would have to keep adding
machines to get disk capacity, of course, since your data set would
only grow.
We will keep an open file descriptor per file, but I think that is
okay. Just set t
The remaining 0.8 blockers are tracked in
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+replication+development
Thanks,
Jun
On Thu, Feb 21, 2013 at 9:14 PM, graham sanderson wrote:
> Thanks, so if I understand correctly, what you are saying is that 0.8 is alpha-ish;
> i.e. everything pretty muc
Encoder/Decoder expect non-null return values, so you should throw
exceptions instead.
Thanks,
Jun
On Thu, Feb 21, 2013 at 5:04 PM, Adam Talaat wrote:
> Are there any caveats to returning null from an Encoder or Decoder? I'm
> planning on doing this if there is a problem de/serializing a mes
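As a rough illustration of Jun's advice above, here is a minimal sketch of a decoder
that throws instead of returning null. It assumes the 0.8-era kafka.serializer.Decoder
trait with a single fromBytes method (the real interface may also expect a
VerifiableProperties constructor), so treat it as a sketch rather than a drop-in class:

import kafka.serializer.Decoder

// Hypothetical strict decoder: fail loudly on bad input instead of
// handing a null message value to downstream consumer code.
class StrictStringDecoder extends Decoder[String] {
  override def fromBytes(bytes: Array[Byte]): String = {
    if (bytes == null || bytes.isEmpty)
      throw new IllegalArgumentException("refusing to decode an empty message")
    new String(bytes, "UTF-8")
  }
}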
Thanks, so if I understand correctly, what you are saying is that 0.8 is alpha-ish;
i.e. everything pretty much implemented (is there anything major missing? It's hard
to tell looking at JIRA without much context), we use it at our own risk (which
we're fine with), but given that we really want the 0.8 features we c
0.8 is a significant rewrite of the 0.7 codebase. Hence, there is no
easy way to "turn off" replication and expect 0.7.2-like functionality.
Having said that, it is somewhat stable, so you can use it for prototyping.
Thanks,
Neha
On Thu, Feb 21, 2013 at 1:42 PM, graham sanderson wrote:
> So
Apologies for asking another question as a newbie without having really tried
stuff out, but actually one of our main reasons for wanting to use Kafka (not
the LinkedIn use case) is exactly the fact that the "buffer" is not just for
buffering. We want to keep data for days to weeks, and be able
You can do this and it should work fine. You would have to keep adding
machines to get disk capacity, of course, since your data set would
only grow.
We will keep an open file descriptor per file, but I think that is
okay. Just set the segment size to 1GB, then with 10TB of storage that
is only 10
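For what it's worth, a quick back-of-the-envelope check of that estimate (the 10 TB
and 1 GB figures are just the numbers quoted above):

// Rough arithmetic only: 10 TB of retained data in 1 GB segments.
object SegmentCount extends App {
  val totalBytes   = 10L * 1024 * 1024 * 1024 * 1024 // 10 TB retained
  val segmentBytes = 1L * 1024 * 1024 * 1024          // 1 GB per segment
  println(totalBytes / segmentBytes)                  // prints 10240, i.e. roughly 10k segment files
}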
Are there any caveats to returning null from an Encoder or Decoder? I'm
planning on doing this if there is a problem de/serializing a message,
rather than throwing an exception.
Forever is a long time. The definition of replay, and navigating through
different versions of Kafka, would be key.
Example:
If you are storing market data in Kafka and have a CEP engine running on
top, and would like replayed "transactions" to be fed back to ensure
replayability, then you would prob
Anthony,
Is there a reason you wouldn't want to just push the data into something
built for cheap, long-term storage (like Glacier, S3, or HDFS) and perhaps
"replay" from that instead of from the Kafka brokers? I can't speak for
Jay, Jun or Neha, but I believe the expected usage of Kafka is essen
The two simplest approaches (short of parsing SBT output for classpaths)
would be to either use https://github.com/n8han/conscript or
https://github.com/sbt/sbt-assembly. Assembly would give you a nice,
self-contained JAR with all deps. Conscript essentially uses SBT to fetch
deps and run. I'm more
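In case it helps, a minimal sbt-assembly setup from that era might look roughly like
the following; the plugin version and the output JAR name are assumptions, so
double-check them against the sbt-assembly README before relying on them:

// project/plugins.sbt (plugin version is a guess for the sbt 0.12.x era)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.8")

// build.sbt: pull in the plugin's settings so `sbt assembly` produces a fat JAR
import sbtassembly.Plugin._
import AssemblyKeys._

assemblySettings

jarName in assembly := "my-kafka-app.jar" // hypothetical output name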
Our use case is that we'd like to log away data we don't need and
potentially replay it at some point. We don't want to delete old logs. I
googled around a bit and only discovered this particular post:
http://mail-archives.apache.org/mod_mbox/incubator-kafka-users/201210.mbox/%3CCAFbh0Q2=eJcDT
Hi Derek,
We probably have a patch for adding assembly at
https://issues.apache.org/jira/browse/KAFKA-733. Can you review it?
Thanks,
Swapnil
On 2/21/13 2:46 PM, "Derek Chen-Becker" wrote:
>The two simplest approaches (short of parsing SBT output for classpaths)
>would be to either use https:/
I'm having trouble building the project with sbt; specifically, I am
unable to run package and have the kafka-server-start.sh script work:
git clone git://github.com/apache/kafka.git
./sbt update
./sbt "++2.8.0 package"
./bin/kafka-server-start.sh config/server.properties
Exception in thread "mai
So I am excited that 0.8 is getting close.
I understand that 0.8 is pre-beta, but to what extent would you say the bugs
are mostly in the new redundancy code vs. regressions in existing
functionality; i.e. if we want to prototype and deploy something over the next
month or so with the view to
I've been having trouble building trunk since the changes to sbt in 0.8.
Is there some documentation on building and running trunk?
On 2/21/13 3:53 PM, Neha Narkhede wrote:
HEAD is good as of today and has been stable for the past few days. There are
some bugs we are working on, but you can certain
Kafka does not have a simple ping command that clients can use. I can see that
such a command would be useful, comparable to the "ruok" command that ZooKeeper
provides. It would also be useful for scripting up a simple Kafka healthcheck.
Currently, the only option is to use one of the non-produce/consume
commands
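Until something like that exists, one crude workaround for a scripted healthcheck is a
plain TCP connect against the broker port. A minimal sketch is below (host and port are
placeholders); note it only proves the socket accepts connections, not that the broker
can actually serve requests:

import java.net.{InetSocketAddress, Socket}

// Crude liveness probe: exit 0 if the broker port accepts a TCP connection,
// exit 1 otherwise. It says nothing about whether the broker is healthy.
object BrokerPing extends App {
  val host = "localhost" // placeholder broker host
  val port = 9092        // placeholder broker port
  val socket = new Socket()
  val reachable =
    try { socket.connect(new InetSocketAddress(host, port), 2000); true }
    catch { case _: Exception => false }
    finally socket.close()
  println(if (reachable) "broker port is reachable" else "broker port is unreachable")
  if (!reachable) sys.exit(1)
}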
HEAD is good as of today and has been stable for the past few days. There are
some bugs we are working on, but you can certainly run the cluster and do
some basic send/receive operations. Also, let us know if you have feedback
on the APIs, protocols, tools, etc., since that takes some time to refactor
and
Thanks Neha,
That's helpful info. Is there a reasonable checkpoint rev to check out now
and experiment with, or is HEAD as good as anything else?
Jason
On Thu, Feb 21, 2013 at 3:31 PM, Neha Narkhede wrote:
> Hi Jason,
>
> We are closely monitoring the health of one of our production clusters t
Hi Jason,
We are closely monitoring the health of one of our production clusters that
has the 0.8 code deployed. This cluster is feeding off of LinkedIn's
production traffic. Once this cluster is fairly stable, we'd like to run
all of our tools and ensure those are working. Another thing we are tr
Just wanted to inquire as to the status of 0.8 being released to beta.
I have several use cases now that would like to take advantage of the new
features in 0.8, and I'm not sure if it makes sense to keep waiting for an
actual release before attempting to use the latest HEAD version in
staging/produ
OK, first, I apologize for not having the server.properties attached last
night. You will find it attached to this email.
Second, I was trying with PuTTY before to check the connection. It was
working with raw data, but you're right, it's not working with telnet.
I have already opened Kafka and ZooKee