Hi folks;
Does anyone know about Kafka's ability to work over satellite links? We have an IoT
telemetry application that uses satellite communication to send data from
remote sites to a central hub.
Any help/input/links/gotchas would be much appreciated.
Regards, Jan
ou.
Christian
On Wed, Mar 2, 2016 at 9:52 PM, Jan wrote:
> Hi folks;
> Does anyone know about Kafka's ability to work over satellite links? We have
> an IoT telemetry application that uses satellite communication to send data
> from remote sites to a central hub.
> Any help/ i
u mention,
sorry, but maybe it's worth a look through that page.
I have to admit I'd never heard of ECCN classifications and am
surprised it even exists.
cheers
jan
On 27/06/2017, Axelle Margot wrote:
> Hello,
>
> You were contacted as part of a new project in France.
>
>
dling/embedding of ASF products, encryption
reporting, and shipping documentation."
I agree with you, it seems bizarre, and wrong.
jan
On 28/06/2017, Martin Gainty wrote:
>
> MG>am requesting clarification below
>
> From: Axelle Margot
> Se
Hi,
Is this the right place to ask about CockroachDB?
(well he started it, officer...)
jan
On 07/07/2017, David Garcia wrote:
> “…events so timely that the bearing upon of which is not immediately
> apparent and are hidden from cognitive regard; the same so tardy, they
>
You could checkout <https://svn.apache.org/repos/asf/kafka/> and see
if it has any statements on logo use.
Also top 3 hits of <https://www.google.co.uk/search?q=use+logo+apache>
sound promising but I've not looked at them.
Best I can suggest ATM
jan
On 01/08/2017, Sunil, Rinu wrote:
>
I can't help you here, but maybe I can focus the question: why would you want to?
jan
On 10/08/2017, Sven Ludwig wrote:
> Hello,
>
> assume that all producers and consumers regarding a topic-partition have
> been shutdown.
>
> Is it possible in this situation to empty that
I'm not sure I can answer your question, but may I pose another in
return: why do you feel having a memory mapped log file would be a
good thing?
On 09/02/2018, YuFeng Shen wrote:
> Hi Experts,
>
> We know that Kafka uses memory-mapped files for its index files; however,
> its log files don't us
opying
is always horrendously expensive (it isn't), that memory mapping is
always cheap (it isn't cheap),"
A bit vague on my part, but HTH anyway
jan
On 12/02/2018, YuFeng Shen wrote:
> Hi jan ,
>
> I think the reason is the same as why index file using memory mapped fil
eally
answer your question, just suggest a bit of reading and some
guesswork.
cheers
jan
On 13/02/2018, YuFeng Shen wrote:
> If that is like what you said, why does the index file use the memory-mapped file?
>
>
> From: jan
> Sent: Monday, February 12,
I may be missing a trick.
7. I wouldn't argue but I'd warn that some abstractions can be
expensive and I suspect shapeless may be one. Also, for parsers may I
suggest looking at ANTLR?
Idiomatic Scala code can be expensive *as currently implemented*. Just
understand that cost by profiling,
put a clear caveat in the documentation, please, right at the top?
jan
On 07/08/2018, M. Manna wrote:
> The answer is - absolutely not. If you don't have a Linux rack or a Kubernetes
> deployment, it will not work on Windows as guaranteed.
>
> I know this because I have tried to m
This is an excellent suggestion and I intend to do so henceforth
(thanks!), but it would be an adjunct to my request rather than the
answer; it still needs to be made clear in the docs/FAQ that you
*can't* use Windows directly.
jan
On 07/08/2018, Rahul Singh wrote:
> I would recomme
rience) but was informed I now had two (n00bness, and
an unsupported platform).
tl;dr if it doesn't work on X, we need to say so clearly. It's just...
good manners, surely?
cheers
jan
On 07/08/2018, M. Manna wrote:
> By fully broken, I mean not designed and tested to work on Windows
on't do
it" in the FAQ (which I did read), and also have other people need to
ask here whether Windows is supported?
Why?
This is just nuts.
cheers
jan
On 05/09/2018, Liam Clarke wrote:
> Hi Jan,
>
> I'd presume that downloading an archive and seeing a bunch of .sh fi
than just take. I had
been working on ANTLR stuff (which I'll return to when bits of me stop
hurting) and currently am trying to suss if permutation encoding can
be done from L1 cache for large permutations in less than a single
DRAM access time. Look up cuckoo filters to see why.
jan
On 05/09/2
I may have missed this (I'm missing the first few messages), so sorry
in advance if I have, but what OS are you using?
Kafka does not work well on Windows; I had problems using it that
sounded a little like this (just a little, though) when on Windows.
jan
On 30/11/2018, Satendra Pratap Singh
orking gods.
These are points to mull over. Doubt I can suggest anything further. Good luck.
jan
On 02/09/2020, cedric sende lubuele wrote:
> Let me introduce myself, my name is Cedric and I am a network engineer
> passionate about new technologies and as part of my new activity, I am
> in
ing (I'm not affiliated in any way).
The first question I'd ask myself is, would a burn-to-dvd solution
work? Failing that, basic stuff like email?
In any case, what if the data's corrupted, how can the servers detect
and re-request? What are you protecting against exactly? Stuf
It might be best to do a web search for companies that know this stuff
and speak to them.
Re: Kafka over UDP, I dunno, but perhaps instead run normal Kafka talking
to a proxy machine via TCP and have that proxy forward traffic via
UDP.
If that works, it would simplify the problem, I guess.
cheers
jan
at
branch I think). It's cost me some days.
So, am I making a mistake, if so what?
thanks
jan
n
replicate the issue but I would like to know whether it *should* work
on Windows.
cheers
jan
On 18/04/2017, Serega Sheypak wrote:
> Hi,
>
> [2017-04-17 18:14:05,868] ERROR Error when sending message to topic
> big_ptns1_repl1_nozip with key: null, va
ot java's).
Maybe it is GC holding things up but I dunno, GC even for a second or
two should not cause a socket failure, just delay the read, though I'm
not an expert on this *at all*.
I'll go over the answers tomorrow more carefully but thanks anyway!
cheers
jan
On 18/04/2017, Serega
directory grows.
It's possible kafkacat or other producers would do a better job than
the console producer, but I'll try that on Linux, as getting them
working on Windows... meh.
thanks all
jan
On 18/04/2017, David Garcia wrote:
> The “NewShinyProducer” is also deprecated.
>
>
ilure.
"
<https://zookeeper.apache.org/doc/r3.4.10/zookeeperStarted.html>
cheers
jan
On 30/04/2017, Michal Borowiecki wrote:
> Svante, I don't share your opinion.
> Having an even number of zookeepers is not a problem in itself, it
> simply means you don't get any be
Also, please in future give a fuller picture of your setup, e.g. OS, OS
version, memory, number of CPUs, what actual hardware (PCs are very
different from servers), etc.
cheers
jan
On 17/05/2017, 陈 建平Chen Jianping wrote:
> Hi Group,
>
> Recently I am trying to tune Kafka write performance to
-1 for putting the serializers back in.
Looking forward to replies that can show me the benefit of serializers
and especially how the
Type => topic relationship can be handled nicely.
Best
Jan
On 25.11.2014 02:58, Jun Rao wrote:
Hi, Everyone,
I'd like to start a discussion on wheth
before acknowledging. If so, these would be
some additional milliseconds to respond faster if we could spare
de/recompression.
Those are my thoughts about server side de/recompression. It would be
great if I could get some responses and thoughts back.
Jan
On 07.11.2014 00:23, Jay Kreps w
Hey,
try not to have newlines (\n) in your JSON file. I think the parser dies on
those and then claims the file is empty.
Best
Jan
On 13.04.2015 12:06, Ashutosh Kumar wrote:
Probably you should first try to generate the proposed plan using the
--generate option and then edit it if needed.
thanks
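To make the no-newlines point concrete, here's a minimal sketch (topic name, broker IDs, and file name are all hypothetical) of a reassignment plan kept on a single line:

```shell
# Hypothetical reassignment plan for topic "mytopic"; the JSON is kept on a
# single line so the parser doesn't choke on embedded newlines.
cat > reassign.json <<'EOF'
{"version":1,"partitions":[{"topic":"mytopic","partition":0,"replicas":[1,2]}]}
EOF

# Against a running cluster you would then apply it, e.g.:
#   bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#     --reassignment-json-file reassign.json --execute
cat reassign.json
```

The --generate option emits JSON in this same shape, so editing its output and feeding it back with --execute is the usual workflow.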
performance
persistence). Maybe we could cooperate and share some code.
More details about MapDB:
https://github.com/jankotek/mapdb
Regards,
Jan Kotek
No, there are no benchmarks yet.
Also MapDB does not have TCP support.
Jan
On Friday 11 January 2013 13:28:40 B.e. wrote:
> Have you benchmarked against java-chronicle?
>
> Thx
>
> Sent from my iPad
>
> On Jan 10, 2013, at 9:11 PM, Jan Kotek wrote:
> > Hi,
&g
Hi,
I am starting with Kafka. We use version 0.7.2 currently. Does anyone know
whether automatic producer load balancing based on ZooKeeper is supported by
the C++ client?
Thank you!
-- Jan
barrier.await() they have to wait until A receives a message. This can
possibly block all consuming.
Is there a best practice on committing properly in a multithreaded consumer?
Thank you!
Jan
Thank you!
So I guess you suggest a really, really small timeout so that the other
consuming threads don't get regularly blocked for the timeout period? My
consumer use case does not allow "longer" breaks because there are
some high-traffic topics.
Thanks
Jan
2013/8/10
ry mapped files.
Not sure how it applies to this case.
Regards,
Jan Kotek
On Friday 02 August 2013 22:19:34 Jay Kreps wrote:
> Chris commented in another thread about the poor compression performance in
> 0.8, even with snappy.
>
> Indeed if I run the linear log write throughput te
compacted so far and the application would just
pull up to this point?
Looking forward to some recommendations and comments.
Best
Jan
ion attempt to avoid the
calling thread being blocked forever. Is this possible with the current
version of the client? (Snapshot as of 16/6/15). If not, is that something
that's planned for the future?
Jan
Hi,
you might want to have a look here:
http://kafka.apache.org/documentation.html#topic-config
_segment.ms_ and _segment.bytes_ should allow you to control the
time/size when segments are rolled.
Best
Jan
On 16.06.2015 14:05, Shayne S wrote:
Some further information, and is this a bug
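As a sketch (topic name, ZooKeeper address, and the values are hypothetical, and the command needs a running cluster, so it is only echoed here), these can be set as per-topic overrides with the era's kafka-configs.sh tool:

```shell
# Roll a new segment at most every hour or at 512 MiB, whichever comes first.
SEGMENT_CONFIG='segment.ms=3600000,segment.bytes=536870912'

# Shown, not executed: applying the override to a hypothetical topic "capture".
echo "bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name capture \
  --add-config ${SEGMENT_CONFIG}"
```

Smaller segments let retention and compaction act on recent data sooner, at the cost of more open files and index objects.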
ds on KafkaConsumer), so the client holds a
lock while sitting in this wait. This means that if another thread tries
to call close(), which is all synchronized, this thread will also be
blocked.
Holding locks while performing network I/O seems like a bad idea - is this
something that's planned to be f
d thus the
last segment will never be compacted.
Thanks!
Shayne
On Wed, Jun 17, 2015 at 5:58 AM, Jan Filipiak
wrote:
Hi,
you might want to have a look here:
http://kafka.apache.org/documentation.html#topic-config
_segment.ms_ and _segment.bytes_ should allow you to control the
time/size when segm
Sounds good, thanks for the clarification.
Jan
On 17 June 2015 at 22:09, Jason Gustafson wrote:
> We have a couple open tickets to address these issues (see KAFKA-1894 and
> KAFKA-2168). It's definitely something we want to fix.
>
> On Wed, Jun 17, 2015 at 4:21 AM,
Hi,
just out of curiosity and because of Eugene's email, I browsed
KAFKA-1477, and it talks about SSL a lot. So I thought I might throw in
this RFC: http://tools.ietf.org/html/rfc7568. It basically says move away
from SSL now and only do TLS. The title of the ticket still mentions TLS
but afterw
copied
Or one would check the file size before.
Please let me know if you would consider this useful and worth a
feature ticket in JIRA.
Thank you
Jan
Sorry, wrong mailing list.
On 24.07.2015 16:44, Jan Filipiak wrote:
Hello hadoop users,
I have an idea for a small feature for the getmerge tool. I recently
needed to use the newline option -nl because the files I
needed to merge simply didn't have one.
I was merging all the
Hi,
just want to pick this up again. You can always use more partitions to
reduce the number of keys handled by a single broker and parallelize the
compaction. So with a sufficient number of machines and the ability to
partition, I don't see you running into problems.
Jan
On 07.10.2015 05:34
tricks and find out what the lag
is caused by and then fix whatever causes the lag. It's 1 am in Germany;
there might be off-by-one errors in the algorithm above.
Best
Jan
On 04.11.2015 18:13, Otis Gospodnetić wrote:
This is an ancient thread, but I thought I'd point to
)
}
logger.info("The offset of the record we just sent is: " +
recordMetadata.offset())
()
}
})
I am using the metrics() member to periodically look at
"buffer-available-bytes" and I see it is constantly decreasing over time as
messages are being sent.
Jan
sent to the
partitions with '-1' and the producer buffer becomes exhausted after a while
(maybe that is related?)
Jan
Topic:capture PartitionCount:16 ReplicationFactor:1 Configs:
Topic: capture Partition: 0 Leader: 1 Replicas: 1
perspective I am unable to detect that
messages are not being sent out. Is this normal behavior and am I simply
doing something wrong, or could it be a producer bug?
Jan
Config and code again:
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> brokers,
ProducerConfig.RETRIES_CONFIG -&
how to detect a broker being down)
Jan
> On 22 Nov 2015, at 21:42, Todd Palino wrote:
>
> Hopefully one of the developers can jump in here. I believe there is a
> future you can use to get the errors back from the producer. In addition,
> you should check the following configs
Hey guys,
Is anyone using the Kafka REST proxy from Confluent?
We have an issue where all messages for a certain topic end up in the same
partition. Has anyone faced this issue before? We're not using a custom
partitioner class, so it's using the default partitioner. We're sending
ONFIG, "true");
consumerProperties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
Regards
Jan
> On 2 Mar 2016, at 15:14, Péricé Robin wrote:
>
> Hello everybody,
>
> I'm testing the new 0.9.0.1 API and I try to make a basic example worki
Hi,
I am very excited about all of this in general. Sadly I haven't had the
time to really take a deep look. One thing that is/was always a
difficult topic, resolving many-to-many relationships in table x table x
table joins, is the repartitioning that has to happen at some point.
From the doc
Hi Eno,
On 07.06.2017 22:49, Eno Thereska wrote:
Comments inline:
On 5 Jun 2017, at 18:19, Jan Filipiak wrote:
Hi
just my few thoughts
On 05.06.2017 11:44, Eno Thereska wrote:
Hi there,
Sorry for the late reply, I was out this past week. Looks like good progress
was made with the
Hi,
have you thought about using Connect to put data into a store that is
more reasonable for your kind of query requirements?
Best Jan
On 07.06.2017 00:29, Steven Schlansker wrote:
On Jun 6, 2017, at 2:52 PM, Damian Guy wrote:
Steven,
In practice, data shouldn't be migrating that
Depends; embedded Postgres puts you into the same spot.
But if you use your state store changelog to materialize into a
Postgres, that might work out decently.
The current JDBC sink doesn't support deletes, which is an issue, but writing
a custom sink is not too hard.
Best Jan
On 07.06.2017
Hi Eno,
I am less interested in the user-facing interface but more in the actual
implementation. Any hints where I can follow the discussion on this? As
I still want to discuss upstreaming of KAFKA-3705 with someone
Best Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too
pping the usage of ChangeSerde and have "properly"
repartitioned topic. That is just sane, IMO.
Best Jan
On 22.06.2017 11:54, Eno Thereska wrote:
Note that while I agree with the initial proposal (withKeySerdes, withJoinType,
etc), I don't agree with things like .materiali
files really work well for them.
Best Jan
On 30.06.2017 09:31, Damian Guy wrote:
Thanks Matthias
On Fri, 30 Jun 2017 at 08:05 Matthias J. Sax wrote:
I am just catching up on this thread, so sorry for the long email in
advance... Also, it's to some extent a dump of thoughts and no
ilder to
reduce the overloaded functions as well. WDYT?
Guozhang
On Tue, Jul 4, 2017 at 1:40 AM, Damian Guy wrote:
Hi Jan,
Thanks very much for the input.
On Tue, 4 Jul 2017 at 08:54 Jan Filipiak
wrote:
Hi Damian,
I do see your point of something needs to change. But I fully agree with
e in conf
why not override the specific ones?)
Does this make sense to people? What pieces should I outline with code?
(Time is currently sparse :( but I can pull off some smaller examples, I
guess.)
Best Jan
On 08.07.2017 01:23, Matthias J. Sax wrote:
It's two issues we want to tackle
does? Sorry, can't wrap my head
around that just now,
heading towards 3 am.
The example I provided was
streams.$applicationid.stores.$storename.inmemory = false
streams.$applicationid.stores.$storename.cachesize = 40k
for the configs. The Query Handle thing makes sense, hopefully.
Best J
If you can
specify your pain point more precisely, maybe we can work around it.
Best Jan
On 10.07.2017 10:31, Dmitriy Vsekhvalnov wrote:
Guys, let me up this one again. Still looking for comments about
kafka-consumer-groups.sh
tool.
Thank you.
On Fri, Jul 7, 2017 at 3:14 PM, Dmitriy Vsekhvalnov
wro
n tackle the 3 above
problems and especially why I don't think we can win tackling only
point 1 in the long run.
If anything would need an implementation draft please feel free to ask
me to provide one. Initially the proposal hopefully would get the job
done of just removing clutter.
Looking
;t have control over.
The whole logic about partitioners and what else does not change.
Hope this makes my points more clear.
Best Jan
On 19.07.2017 12:03, Damian Guy wrote:
Hi Jan,
Thanks for your input. Comments inline
On Tue, 18 Jul 2017 at 15:21 Jan Filipiak wrote:
Hi,
1. To many
Guozhang's 'Buffered' idea
seems ideal here.
Please have a look. Looking forward to your opinions.
Best Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too)
Given that we already have StateStoreSuppliers that are configurable using the
fluent-like API, probably i
tors
maintain a Store and provide a ValueGetterSupplier.
Does this make sense to you?
Best Jan
On 02.08.2017 18:09, Bill Bejeck wrote:
Hi Jan,
Thanks for the effort in putting your thoughts down on paper.
Comparing what I see from your proposal and what is presented in
KIP-182, one of the
nesting and can
split everything into logical chunks of SQL. KTable variables are the CTEs of
Kafka Streams.
One can probably sell this to people :)
Best Jan
Enjoyed your feedback! Hope mine makes sense.
On 03.08.2017 00:10, Guozhang Wang wrote:
Hello Jan,
Thanks for your proposal. As
s specific questions, can always approach me.
Otherwise I am just going to drink the kool-aid now. :(
Best Jan
On 08.08.2017 20:37, Guozhang Wang wrote:
Hello Jan,
Thanks for your feedback. Trying to explain them a bit more here since I
think there are still a bit mis-communication here:
Here are a fe
different connect string. That should do what you
want instantly
Best Jan
On 16.09.2017 22:51, M. Manna wrote:
Yes I have, I do need to build and run Schema Registry as a pre-requisite
isn't that correct? because the QuickStart seems to start AVRO - without
AVRO you need your own implementati
.
Given the log sizes you're dealing with, I am very confident that this is
your issue.
Best Jan
On 25.10.2017 12:21, Elmar Weber wrote:
Hi,
On 10/25/2017 12:15 PM, Xin Li wrote:
> I think that is a bug, and should be fixed in this task
https://issues.apache.org/jira/browse/KAFKA-6030.
&
discussion and vote on a solution is exactly what is
needed to bring this feature into Kafka Streams. I am looking forward
to everyone's opinion!
Please keep the discussion on the mailing list rather than commenting on
the wiki (wiki discussions get unwieldy fast).
Best
Jan
Thanks for the remarks; hope I didn't miss any.
Not even sure if it makes sense to introduce A and B or just stick with
"this KTable", "other KTable".
Thank you
Jan
On 27.10.2017 06:58, Ted Yu wrote:
Do you mind addressing my previous comments ?
http://
Hi,
I probably would recommend you to go for 1 instance. You can bump a few
thread configs to match your hardware better.
Best Jan
On 06.11.2017 12:23, chidigam . wrote:
Hi All,
Let's say I have a big machine with 120 GB RAM, a lot of cores,
and very high disk capacity.
How many
of GB per day for us.
Hope this helps.
Best Jan
On 29.11.2017 15:10, Adrienne Kole wrote:
Hi,
The purpose of this email is to get overall intuition for the future plans
of streams library.
The main question is that, will it be a single threaded application in the
long run and serve
ot;
https://engineering.linkedin.com/blog/2017/08/open-sourcing-kafka-cruise-control
could also handle node failures. But usually this is not necessary. The
hop across the broker is usually just too efficient
to have this kind of fuzz going on.
Hope this can convince you to try it out.
Bes
Hi,
Haven't checked your code. But from what you describe you should be fine.
Upgrading the version might help here and there but should still work
with 0.10
I guess.
Best Jan
On 30.11.2017 19:16, Artur Mrozowski wrote:
Thank you Damian, it was very helpful.
I have implemented my sol
optimisation, but my opinions on it are not too high.
Hope that helps, just keep the questions coming, also check if you might
want to join confluentcommunity on slack.
Could never imagine that something like insurance can really be
modelled as 4 streams ;)
Best Jan
On 30.11.2017 21:07
Hope this helps
Best Jan
On 03.12.2017 20:27, Dmitry Minkovsky wrote:
This is a pretty stupid question. Most likely I should verify these by
observation, but really I want to verify that my understanding of the
documentation is correct:
Suppose I have topic configurations like:
retention.ms
Hi,
two questions. Is your MirrorMaker collocated with the source or the target?
what are the send and receive buffer sizes on the connections that do span
across WAN?
Hope we can get you some help.
Best jan
On 06.12.2017 14:36, Xu, Zhaohui wrote:
Any update on this issue?
We also run
this Store with, say, a REST or any other RPC interface, to let
applications from outside your JVM query it.
So I would say the blog post still applies quite well.
Hope this helps
Best Jan
On 07.12.2017 04:59, Peter Figliozzi wrote:
I've written a Streams application which creates a K
Hi Peter,
glad it helped,
these are the preferred ways indeed.
On 07.12.2017 15:58, Peter Figliozzi wrote:
Thanks Jan, super helpful! To summarize (I hope I've got it right), there
are only two ways for external applications to access data derived from a
KTable:
1. Inside the st
Hi,
brokers still try to do a graceful shutdown, I suppose?
It would only shut down if it is not the leader of any partition anymore.
Can you verify that there are other brokers alive that took over leadership,
and that the broker in question stepped down as leader for all partitions?
Best Jan
On
I would encourage you to do so.
I also think it's not reasonable behavior.
On 13.02.2018 11:28, Wouter Bancken wrote:
We have upgraded our Kafka version as an attempt to solve this issue.
However, the issue is still present in Kafka 1.0.0.
Can I log a bug for this in JIRA?
Wouter
On 5 February 2
among other Kafka related
tools.
Regards
Jan
Sent from my iPhone
> On 19 Apr 2016, at 08:02, Ratha v wrote:
>
> Hi all;
>
> I try to publish/consume my java objects to kafka. I use Avro schema.
>
> My basic program works fine. In my program i use my schema in the pro
You have to allow topic deletion in server.properties first.
delete.topic.enable = true
Regards
Jan
> On 11 May 2016, at 09:48, Snehalata Nagaje
> wrote:
>
>
>
> Hi ,
>
> Can we delete certain topic in kafka?
>
> I have deleted using command
>
>
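A minimal sketch of the full sequence (topic name and ZooKeeper address are hypothetical; the live commands need a running cluster, so they appear as comments):

```shell
# The one broker setting needed -- config/server.properties in a real install,
# a scratch file here. Without it, deletion is only *marked*, never performed.
printf 'delete.topic.enable=true\n' >> server.properties.example

# Then, against a running cluster (shown, not executed):
#   bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic mytopic
# and verify the topic is gone:
#   bin/kafka-topics.sh --zookeeper localhost:2181 --list
```

Note the broker must be (re)started with this setting before the delete command has any effect.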
Hi,
I have a producer question: Is the producer (specifically the normal Java
producer) using the file system in any way?
If it does so, will a producer work after losing this file system or its
content (for example in a containerization scenario)?
Jan
starving. To solve this issue you need to
increase your partition count.
Regards
Jan
> On 14 Jun 2016, at 13:07, Joris Peeters wrote:
>
> I suppose the consumers would also need to all belong to the same consumer
> group for your expectation to hold. If the three consume
topic as an input again after a restart, or how does it load the
whole table again? Can someone explain the rules to persist or restore a
KTable to or from a changelog?
Best regards
Jan
Hi Ismael,
Unfortunately Java 8 doesn't play nice with FreeBSD. We have seen a lot of JVM
crashes running our 0.9 brokers on Java 8... Java 7 on the other hand is
totally stable.
Until these issues have been addressed, this would cause some serious issues
for us.
Regards
Jan
We're using a proprietary message format; that's why we don't
have any plans (or capacity) to open-source it at the moment.
However, building that tool was straightforward; it shouldn't take you more
than a day or two to build something similar. Ping me if you need some help.
Regard
overlooked?
Does anybody have a good use case where the time-based index comes in
handy? I made a custom console consumer for myself
that can bisect a log based on time. It's just a quick probabilistic shot
into the log but is sometimes quite useful for some debugging.
Best Jan
ently.
I hope I can make a point and not waste your time.
Best Jan,
hopefully everything makes sense
----
Jan,
Currently, there is no switch to disable the time based index.
There are quite a few use cases of time based index.
1. From KIP-33's wiki, it allows us to do time-based
here be any blog post about their use case? Or can you share it?
Best Jan
On 24.08.2016 16:47, Jun Rao wrote:
Jan,
Thanks for the reply. I actually wasn't sure what your main concern on
time-based rolling is. Just a couple of clarifications. (1) Time-based
rolling doesn't control how l
le "kafka-way". Can you
advise on the proposed refactoring? What are the chances to get it
upstream if I could pull it off? (unlikely)
Thanks for all the effort you put into listening to my concerns.
Highly appreciated!
Best Jan
On 25.08.2016 23:36, Jun Rao wrote:
Jan,
Thanks
rolling logs as the broker thinks it's millis.
So that would probably have caused us at least one outage if a big
producer had upgraded and done this, IMO a likely mistake.
I'd just hoped for a more obvious kill switch, so I didn't need to bother
that much.
Best Jan
On 29.08.2016 19:36, Jun Ra
Hi Jun,
Thanks a lot for the hint, I'll check it out when I get a free minute!
Best Jan
On 07.09.2016 00:35, Jun Rao wrote:
Jan,
For the time rolling issue, Jiangjie has committed a fix (
https://issues.apache.org/jira/browse/KAFKA-4099) to trunk. Perhaps you can
help test out trunk and see
Hi Gourab,
Check this out:
https://github.com/linkedin/Burrow <https://github.com/linkedin/Burrow>
Regards
Jan
> On 29 Sep 2016, at 15:47, Gourab Chowdhury wrote:
>
> I can get the *Lag* of offsets with the following command:-
>
> bin/kafka-run-class.sh kafka.admin