Hi Steven,
Speaking only for myself, I agree with you. I think these settings/tweaks
are the easiest short-term way to get proper non-blocking behavior. Long
term, it seems like the client could hold raw messages in a secondary queue
until metadata is available, and then start blocking or dropping; e.g.,
something like the sketch below.
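For what it's worth, a minimal sketch of that secondary-queue idea (a hypothetical wrapper, not anything in the 0.8.2 client; the drop/block policy and the metadata check are placeholders):

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Hypothetical wrapper: hold raw records while metadata is unknown,
    // flush them once it arrives, and drop when the buffer is full.
    public class BufferingProducer<K, V> {
        private final KafkaProducer<K, V> producer;
        private final Queue<ProducerRecord<K, V>> pending = new ConcurrentLinkedQueue<>();
        private final int maxPending;

        public BufferingProducer(KafkaProducer<K, V> producer, int maxPending) {
            this.producer = producer;
            this.maxPending = maxPending;
        }

        public void send(ProducerRecord<K, V> record) {
            if (haveMetadata(record.topic())) {
                ProducerRecord<K, V> queued;
                while ((queued = pending.poll()) != null)
                    producer.send(queued);   // flush anything buffered earlier
                producer.send(record);
            } else if (pending.size() < maxPending) {
                pending.add(record);         // buffer until metadata shows up
            }                                // else: drop (or block, by policy)
        }

        private boolean haveMetadata(String topic) {
            // Placeholder check; partitionsFor() itself can block on a fetch,
            // so a real version would need a truly non-blocking lookup.
            try {
                return !producer.partitionsFor(topic).isEmpty();
            } catch (Exception e) {
                return false;
            }
        }
    }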
" preinitialize.metadata=true/false" can help to certain extent. if the
kafka cluster is down, then metadata won't be available for a long time
(not just the first msg). so to be safe, we have to set "
metadata.fetch.timeout.ms=1" to fail fast as Paul mentioned. I can also
echo Jay's comment that o
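Concretely, the fail-fast settings would look roughly like this (metadata.fetch.timeout.ms and block.on.buffer.full are real 0.8.2 producer configs; preinitialize.metadata is only a proposal):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;

    public class FailFastProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Fail fast on a missing-metadata send instead of blocking for the default:
            props.put("metadata.fetch.timeout.ms", "1");
            // Throw instead of blocking when the client-side buffer fills up:
            props.put("block.on.buffer.full", "false");

            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
            producer.close();
        }
    }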
Hi Paul,
I have faced a similar issue. Our use case was a bit different: we needed
to aggregate events and publish them to the same partition of the same
topic. Occasionally, I have run into blocked application threads (not
because of metadata, but because of the sync block for each batch). When you…
FYI, here is the ticket I opened for this improvement:
https://issues.apache.org/jira/browse/KAFKA-1835
Feel free to add feedback on whether it meets your use case and, if not,
how things could be improved.
This should make the blocking behavior explicit, as long as you know all
your topics up front. Ideally, a separate…
I don't think a separate queue will be a very simple solution to implement.
Could you describe your use case a little more? It does seem to me that as
long as the metadata fetch happens only once, and the blocking has a tight
time bound, this should be okay in any use case I can imagine. And, of…
+1. It should be truly async in all cases.
I understand some of the challenges that Jay listed in the other thread,
but we need a solution nonetheless. E.g., can we maintain a separate
list/queue/buffer for messages pending metadata?
On Tue, Dec 23, 2014 at 12:57 PM, John Boardman
wrote:
> I wa…
I double posted by accident, sorry. Have another thread discussing this.
Thanks!
On Dec 22, 2014 11:21 AM, "Jun Rao" wrote:
> Yes, that's a potential issue. Perhaps we just need to have a lower default
> value for metadata.fetch.timeout.ms ?
>
> Thanks,
>
> Jun
>
> On Wed, Dec 17, 2014 at 11:10 PM, Paul Pearcy wrote: …
Yes, that's a potential issue. Perhaps we just need to have a lower default
value for metadata.fetch.timeout.ms?
Thanks,
Jun
On Wed, Dec 17, 2014 at 11:10 PM, Paul Pearcy
wrote:
> Heya,
> Playing around with the 0.8.2-beta producer client. One of my test cases
> is to ensure producers can d…
FYI, I bumped the server to 0.8.2-beta and I don't hit the basic failure I
mentioned above, which is great.
I haven't been able to find confirmation in the docs, but from a past
conversation (
http://mail-archives.apache.org/mod_mbox/kafka-users/201408.mbox/%3c20140829174552.ga30...@jkoshy-ld.linkedin.bi
Sounds good.
Yes, I'd want a guarantee that every future I get will always eventually
return the RecordMetadata or an exception.
Running into a similar issue with futures never returning, with a pretty
straightforward case (sketched in code below):
- Healthy producer/server setup
- Stop the server
- Send a message
- Call get() on the returned future
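In code, roughly (0.8.2 java client; stop the broker between the constructor and the send; the bounded get() is just how I surface the hang):

    import java.util.Properties;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class HangingFutureRepro {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.ByteArraySerializer");
            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);

            // ... stop the broker here ...

            Future<RecordMetadata> future =
                producer.send(new ProducerRecord<byte[], byte[]>("test", "msg".getBytes()));
            // An unbounded future.get() can hang forever; bound it to see the failure:
            RecordMetadata meta = future.get(10, TimeUnit.SECONDS);
            System.out.println("offset: " + meta.offset());
        }
    }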
Yeah, if you want to file a JIRA and post a patch for a new option, it's
possible others would want it. Maybe something like:
pre.initialize.topics=x,y,z
pre.initialize.timeout=x
The metadata fetch timeout is a bug... that behavior is inherited from
Object.wait, which defines zero to mean infinite.
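(For reference, the inherited semantics: java.lang.Object#wait treats a zero timeout as "wait forever", so passing a 0 config value straight through blocks indefinitely. A minimal demo, which hangs by design:)

    public class WaitZeroDemo {
        public static void main(String[] args) throws InterruptedException {
            final Object lock = new Object();
            synchronized (lock) {
                // wait(0) means "wait with no timeout", i.e. block until
                // notify()/notifyAll()/interrupt; it never times out on its own.
                lock.wait(0);
            }
        }
    }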
Hi Jay,
I have implemented a wrapper around the producer to behave the way I want
it to. Where it diverges from the current 0.8.2 producer is that it accepts
three new inputs (see the sketch below):
- A list of expected topics
- A timeout value to initialize metadata for those topics during producer creation
- An option to blow up if…
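The pre-initialization piece looks roughly like this (a sketch; partitionsFor() is the real client call, everything else is my wrapper's own naming):

    import java.util.List;

    import org.apache.kafka.clients.producer.KafkaProducer;

    public class PreInitializingWrapper {
        // Sketch: force a metadata fetch for each expected topic at construction
        // time, optionally blowing up if any topic can't be initialized in time.
        static void preInitialize(KafkaProducer<?, ?> producer,
                                  List<String> expectedTopics,
                                  long timeoutMs,
                                  boolean failOnMissing) {
            long deadline = System.currentTimeMillis() + timeoutMs;
            for (String topic : expectedTopics) {
                try {
                    // Blocks until metadata is fetched, bounded by
                    // metadata.fetch.timeout.ms on the underlying producer.
                    producer.partitionsFor(topic);
                } catch (Exception e) {
                    if (failOnMissing)
                        throw new IllegalStateException("No metadata for " + topic, e);
                }
                if (failOnMissing && System.currentTimeMillis() > deadline)
                    throw new IllegalStateException("Metadata init timed out");
            }
        }
    }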
Hey Paul,
I agree we should document this better.
We allow and encourage using partitions to semantically distribute data. So
unfortunately we can't just arbitrarily assign a partition (say, 0), as
that would give incorrect answers to any consumer that made use of the
partitioning. It is…
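(Concretely: keyed records are routed by a hash of the key, so a consumer may assume everything for one key sits in one partition. A simplified stand-in for the default routing, just to illustrate; the real partitioner uses murmur2:)

    import java.util.Arrays;

    public class KeyedRouting {
        // Simplified keyed partitioning: all records with the same key land
        // in the same partition, which consumers are allowed to depend on.
        static int partitionFor(byte[] keyBytes, int numPartitions) {
            return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
        }

        public static void main(String[] args) {
            byte[] key = "user-42".getBytes();
            // Rerouting "user-42" to an arbitrary partition (say 0) would
            // silently break any consumer relying on this mapping.
            System.out.println(partitionFor(key, 8));
        }
    }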
Hi Jay,
Many thanks for the info. All that makes sense, but from an API standpoint,
when something is labelled async and returns a Future, this will be
misconstrued, and developers will place async sends in critical
client-facing request/response pathways of code that should never block. If
the app…
Hey Paul,
Here are the constraints:
1. We wanted the storage of messages to be in their compact binary form so
we could bound memory usage. This implies partitioning prior to enqueue
(sketched below), and, as you note, partitioning requires having metadata
(even stale metadata) about the topics.
2. We wanted to avoid pr…
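(A hypothetical sketch of the ordering constraint 1 describes; the names here are mine, not the real internals: serialization and partitioning must happen before the bytes go into the bounded buffer, and the partition count only comes from metadata.)

    public class EnqueuePathSketch {
        interface MetadataCache { int partitionCount(String topic); } // may be stale
        interface BoundedBuffer {
            void append(String topic, int partition, byte[] key, byte[] value);
        }

        // Hypothetical enqueue path: the record is serialized to compact bytes
        // up front (so memory is bounded), which forces partitioning, and
        // therefore a metadata lookup, before the record is ever queued.
        static void enqueue(String topic, byte[] keyBytes, byte[] valueBytes,
                            MetadataCache metadata, BoundedBuffer buffer) {
            int numPartitions = metadata.partitionCount(topic);  // needs metadata
            int partition =
                (java.util.Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
            buffer.append(topic, partition, keyBytes, valueBytes); // compact form
        }
    }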