... and then no progress is made.
Messages are constantly being published to Kafka, and there is a lot left
to process, so why would the iterator block indefinitely? (I have to
restart the Java app to consume more messages)
Thanks for any help with this!
Josh
Hi,
Is there a way to monitor the rate of writes to a particular topic? I wish
to monitor the frequency of incoming tuples in order to consume from the
topic in particular ways depending on the incoming write throughput.
Thanks,
Josh
Interested in the total number of tuples written per millisecond per topic.
On Tue, Oct 7, 2014 at 3:56 PM, Josh J wrote:
> Hi,
>
> Is there a way to monitor the rate of writes to a particular topic? I wish
> to monitor the frequency of incoming tuples in order to consume from th
On Tue, Oct 7, 2014 at 4:35 PM, Neil Harkins wrote:
> > On Tue, Oct 7, 2014 at 3:56 PM, Josh J wrote:
> >> Is there a way to monitor the rate of writes to a particular topic? I
> wish
> >> to monitor the frequency of incoming tuples in order to consume from the
> >> topic
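Broker-side, the per-topic write rate asked about above is already exposed over JMX as the metric `kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=<topic>`. If you'd rather measure it client-side, a trailing-window counter is enough. A minimal sketch (not a Kafka API; timestamps are injected so the logic can be checked without a clock):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Client-side sketch of a per-topic write-rate monitor: record the
// timestamp of each observed message and report messages/second over a
// trailing window. Broker-side, the JMX metric
// kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=<t>
// reports the same thing without client instrumentation.
public class TopicRateMonitor {
    private final Deque<Long> timestampsMs = new ArrayDeque<>();
    private final long windowMs;

    public TopicRateMonitor(long windowMs) { this.windowMs = windowMs; }

    // Call once per message observed.
    public void record(long nowMs) {
        timestampsMs.addLast(nowMs);
        evict(nowMs);
    }

    // Messages per second over the trailing window.
    public double ratePerSecond(long nowMs) {
        evict(nowMs);
        return timestampsMs.size() * 1000.0 / windowMs;
    }

    private void evict(long nowMs) {
        while (!timestampsMs.isEmpty() && timestampsMs.peekFirst() < nowMs - windowMs) {
            timestampsMs.removeFirst();
        }
    }
}
```

A consumer could sample `ratePerSecond` periodically and switch consumption strategies when the rate crosses a threshold.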
hi,
How do I read N items from a topic? I also would like to do this for a
consumer group, so that each consumer can specify an N number of tuples to
read, and each consumer reads distinct tuples.
Thanks,
Josh
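In a consumer group, distinctness across consumers comes for free from partition assignment: each consumer owns disjoint partitions, so each one running a "take N" loop over its own poll() results reads disjoint tuples. The bounded read itself is just a counting loop; here is a sketch over a plain iterator standing in for the record stream (names are illustrative, not a Kafka API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: take at most N records from a record stream. With the real
// consumer, `records` would be the flattened results of poll() calls,
// and max.poll.records can cap each batch; offsets would be committed
// after the loop so the next takeN() resumes where this one stopped.
public class BoundedReader {
    public static <T> List<T> takeN(Iterator<T> records, int n) {
        List<T> out = new ArrayList<>();
        while (out.size() < n && records.hasNext()) {
            out.add(records.next());
        }
        return out;
    }
}
```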
hi
Is it possible to run the code needed for Kafka producers on iOS and
Android? I want to have mobile clients connect to a Kafka broker.
Thanks,
Josh
, Harsha wrote:
>
> Hi Josh,
> Why not have a REST API service running where you post messages
> from your mobile clients? The REST API can run Kafka producers
> accepting these messages and pushing them into Kafka brokers. Here
> is an example where we did s
t; Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ****/
>
> On Mon, Oct 20, 2014 at 1:09 PM, Josh J wrote:
>
> > Thanks for the tip. I would like
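The REST-front-end approach suggested above can be sketched with nothing but the JDK: mobile clients never speak the Kafka protocol, they POST to a small HTTP endpoint, and the server hands each body to a producer. The producer is stubbed as an in-memory list so the sketch is self-contained; the `/produce` path and the JDK HttpServer choice are assumptions, and a real service would call `producer.send(new ProducerRecord<>(topic, body))` instead:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal HTTP ingest front-end sketch for mobile clients.
public class RestIngest {
    final List<String> produced = new CopyOnWriteArrayList<>(); // stand-in for Kafka

    // What the handler does with one request body.
    void handle(byte[] body) {
        produced.add(new String(body)); // real code: kafkaProducer.send(...)
    }

    private HttpServer server;

    public int start() throws IOException {
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/produce", exchange -> {
            handle(exchange.getRequestBody().readAllBytes());
            exchange.sendResponseHeaders(204, -1); // 204: accepted, no body
            exchange.close();
        });
        server.start();
        return server.getAddress().getPort();
    }

    public void stop() { if (server != null) server.stop(0); }
}
```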
Hi,
Does anyone have a serializable kafka producer pool that uses the
KafkaProducer.createProducer() method? I'm trying to use the Spark borrow
feature to cache the kafka producers.
Thanks,
Josh
h I need to be able to precisely control the
amount of load. For example, 1000 records per second.
Thanks,
Josh
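A fixed target like 1000 records per second is easiest to hit by computing, for each record index, the timestamp at which it may be sent, and sleeping until then. The pacing math below is separated from the send loop so it can be checked without actually sleeping; the loop in the comment is illustrative, `producer` and `record(i)` are assumed names:

```java
// Sketch: pace a producer to a fixed records-per-second target.
public class RatePacer {
    private final long startNanos;
    private final double recordsPerSecond;

    public RatePacer(long startNanos, double recordsPerSecond) {
        this.startNanos = startNanos;
        this.recordsPerSecond = recordsPerSecond;
    }

    // Earliest time (nanos) record number i (0-based) may be sent.
    public long dueNanos(long i) {
        return startNanos + (long) (i * 1_000_000_000L / recordsPerSecond);
    }

    // Typical send loop (illustrative, not runnable without a producer):
    //   for (long i = 0; i < total; i++) {
    //       long wait = pacer.dueNanos(i) - System.nanoTime();
    //       if (wait > 0) java.util.concurrent.locks.LockSupport.parkNanos(wait);
    //       producer.send(record(i));  // hypothetical
    //   }
}
```

Scheduling against the absolute start time (rather than sleeping a fixed gap between sends) keeps the long-run rate exact even when individual sends are slow.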
Bump... Looking for a Kafka Producer Object pool to use in Spark Streaming
inside foreachPartition
On Wed, Jan 14, 2015 at 8:40 PM, Josh J wrote:
> Hi,
>
> Does anyone have a serializable kafka producer pool that uses the
> KafkaProducer.createProducer() method? I'm trying
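KafkaProducer is not serializable, so the usual Spark Streaming answer is not to serialize a pool at all: ship only the (serializable) producer config, and lazily create one producer per executor JVM inside foreachPartition. A generic holder sketch of that idea (the class and method names are mine, not a Spark or Kafka API):

```java
import java.util.function.Supplier;

// One lazily-created instance per JVM (i.e. per Spark executor).
// Double-checked locking ensures the factory runs at most once.
public class LazyPerJvmHolder {
    private static volatile Object instance;

    @SuppressWarnings("unchecked")
    public static <T> T getOrCreate(Supplier<T> factory) {
        if (instance == null) {
            synchronized (LazyPerJvmHolder.class) {
                if (instance == null) instance = factory.get();
            }
        }
        return (T) instance;
    }
}
```

Inside `foreachPartition`, something like `Producer<K, V> p = LazyPerJvmHolder.getOrCreate(() -> new KafkaProducer<>(props))` then reuses one producer for every partition that executor processes, which is what the "pool" is usually after.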
archives and couldn’t find this response.
Thanks,
Josh
't think of. We will need to think this
through; as you say, we should avoid resending other messages in a batch if one
fails. I wonder if we might also manage this on the consumer side
with idempotency. Thanks for raising this!
Josh
On Tue, Mar 3, 2015 at 6:08 PM, Xiao wrote:
> Hey Jo
Thanks Guozhang. Are there JIRAs created for tracking idempotent producer
or transactional messaging features? Maybe we can pick up some of the
tasks to expedite the release?
On Fri, Mar 6, 2015 at 1:53 AM, Guozhang Wang wrote:
> Josh,
>
> Dedupping on the consumer side may be tri
bigger than your traditional EDA messages.
4. It seems like we
can do a lot of this using SOA (we already have an ESB that can do
transformations to address consumers expecting an older version of the data).
Any insight is appreciated.
Thanks,
Josh
ages that it will promptly drop but I don't want to create a new
"response" or "recommendation" topic because then I feel like I am tightly
coupling the message to the functionality and in the future different systems
may want to consume those messages as well.
Does that
opic partitions but that seems more like a way to
distribute the workload in terms of storing the messages and not for the
message selection scenario I am describing if I understood correctly.
From: Timothy Chen
To: users@kafka.apache.org; Josh Foure
Se
o do new things with all this data. The main
downside seems to be that most of the consumers are processing millions of
messages that they have no interest in. Do you think that the benefits
outweigh the cons? Is there a better way to achieve similar results?
Thanks
Josh
hould
increase your number of partitions so there are as many as there are consumers;
in this case each will get 1/5 of the messages (assuming the messages are
evenly distributed across the partitions).
Is that what you were looking for? Can someone confirm that what I stated is
ac
at is proposed. Also please let me know if my
question/problem is not clear.
Thanks,
Josh
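The "as many partitions as consumers" advice above follows from how range-style assignment divides P partitions among C consumers: each gets floor(P/C) or ceil(P/C) consecutive partitions, so with P == C every consumer owns exactly one. A simplified model of that division, for intuition only (the real assignor also handles per-topic ordering and rebalances):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified range assignment: consumer c gets a contiguous run of
// partitions, with the first (partitions % consumers) consumers
// receiving one extra.
public class RangeAssign {
    public static List<List<Integer>> assign(int partitions, int consumers) {
        List<List<Integer>> out = new ArrayList<>();
        int base = partitions / consumers, extra = partitions % consumers, next = 0;
        for (int c = 0; c < consumers; c++) {
            int count = base + (c < extra ? 1 : 0);
            List<Integer> mine = new ArrayList<>();
            for (int i = 0; i < count; i++) mine.add(next++);
            out.add(mine);
        }
        return out;
    }
}
```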
to decrypt if we don't
re-encrypt them also.
Josh
From: Jens Rantil
Sent: Tuesday, January 19, 2016 11:48 PM
To: users@kafka.apache.org
Cc: users@kafka.apache.org
Subject: Re: security: encryption at rest and key rotation idea
Hi Josh,
Kafka wil
dec and JMX for cleaner thread invocation and things will be
taken care of transparently?
Any interest in this proposal from other users?
Thanks,
Josh
From: Jim Hoagland
Sent: Wednesday, January 20, 2016 11:00 AM
To: users@kafka.apache.org; Josh
message to the same
partition id (hopefully that is always true; I haven't verified).
BTW, any concerns with the codec approach, apart from customization/making the
codec pluggable?
Thanks,
Josh
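On the "same key, same partition" point: it does hold with the default partitioner, which hashes the key bytes (Kafka uses murmur2) and takes the result modulo the partition count, so placement is deterministic as long as the partition count doesn't change. A simplified stand-in using the key's own hash illustrates the invariant; it is NOT the hash Kafka uses, so it will not match broker-side placement:

```java
// Simplified model of deterministic key-based partitioning. Kafka's
// default partitioner uses murmur2 over the serialized key bytes; the
// invariant shown here (same key -> same partition, for a fixed
// partition count) is the same.
public class KeyToPartition {
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions; // mask keeps it non-negative
    }
}
```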
From: Jim Hoagland
Sent: Thursday, January 21, 2016 1:02 PM
To:
Is there a listener-based API for consumption instead of a blocking poll?
Kind regards
Josh
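The consumer API itself is poll-based, but a listener style is straightforward to layer on top: a dedicated thread loops on poll() and hands each record to a callback. In this sketch poll is abstracted as a Supplier so the loop can be exercised without a broker; in real code it would be `consumer.poll(timeout)` flattened to records:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Callback wrapper over a poll loop. run() would normally be launched
// on its own thread; stop() ends the loop after the current batch.
public class ListenerLoop<T> {
    private final AtomicBoolean running = new AtomicBoolean(true);

    public void run(Supplier<List<T>> poll, Consumer<T> listener) {
        while (running.get()) {
            for (T record : poll.get()) listener.accept(record);
        }
    }

    public void stop() { running.set(false); }
}
```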
Thank you, Michal.
That answers all my questions, many thanks.
Josh
On Thu, Oct 5, 2017 at 1:21 PM, Michal Michalski <
michal.michal...@zalando.ie> wrote:
> Hi Josh,
>
> 1. I don't know for sure (haven't seen the code that does it), but it's
> probably the mos
Michal,
You mentioned topics are only dynamically created with producers. Does that
mean if a consumer starts on a non-existent topic, it throws an error?
Kind regards
Meeraj
On Thu, Oct 5, 2017 at 9:20 PM, Josh Maidana wrote:
> Thank you, Michal.
>
> That answers all my questi
Hello
We are integrating Kafka with an Akka system written in Scala. Is there a
Scala API available for Kafka? Is the best option to use Akka Kafka Streams?
--
Kind regards
*Josh Meraj Maidana*
to support data-driven windowing. I've also considered using a
custom Processor to perform the aggregation, but don't see how to take an
output-stream from a Processor and continue to work with it. This area of
the system is undocumented, so I'm not sure how to proceed.
Am I missing something? Do you have any suggestions?
-josh
ods that receive a type-safe
interface for 'context.forward'. (I have this small change drafted up
within the kafka trunk sources, and could submit a PR if the maintainers
are interested?)
Thanks,
-josh
On Wed, Mar 23, 2016 at 11:02 AM Guozhang Wang wrote:
> Hello Josh,
>
> As o
s what I had in mind for a PR we discussed earlier (for modifying
the Transformer API), but the scope expanded beyond what I felt comfortable
submitting without discussion, and I had to prioritize other efforts.
Regardless, I could get a WIP branch pushed to github later today to
illustrate if you'
Hi Guozhang,
I'll reply to your points in-line below:
On Tue, Apr 5, 2016 at 10:23 AM Guozhang Wang wrote:
> Hi Josh,
>
> I think there are a few issues that we want to resolve here, which could be
> orthogonal to each other.
>
> 1) one-to-many mapping in transform() fu
Yes, sounds good, Guozhang, thanks. I'll create a jira today.
-josh
On Thu, Apr 14, 2016, 1:37 PM Guozhang Wang wrote:
> Hi Josh,
>
> As we chatted offline, would you like to summarize your proposed Transform
> APIs in a separate JIRA so we can continue our discussions th
poll(500) calls where the first call pretty much always returns nothing,
but the second will return some records (though not always all available).
Why does this happen and what’s the solution?
Josh
To use the new KafkaConsumer.offsetsForTimes(...) API does the server also
need to be upgraded from 0.10.0.1?
Josh
s.internal' in the source
cluster. So, I was wondering how would I get these new translated offsets
to migrate the consumer group back to the source cluster?
Please let me know if my question was unclear or if you require further
clarification! Appreciate the help.
Thanks,
Josh
data
for the source topic AND the remote topic as a pair as opposed to having to
explicitly replicate the remote topic back to the source cluster just to
have the checkpoints emitted upstream?
Josh
On Wed, Aug 19, 2020 at 6:16 AM Ryanne Dolan wrote:
> Josh, yes it's possible to mig
the
source cluster by adding it to the topic whitelist, would I also need to
update the topic blacklist and remove ".*\.replica" (since the blacklists
take precedence over the whitelists)?
Josh
On Wed, Aug 19, 2020 at 11:46 AM Josh C wrote:
> Thanks for the clarification Ryanne. I
take precedence
over the whitelists), but that doesn't seem to be doing much either? Is
there something else I should be aware of in the mm2.properties file?
Appreciate all your help!
Josh
On Wed, Aug 19, 2020 at 12:55 PM Ryanne Dolan wrote:
> Josh, if you have two clusters with bidire
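For reference, a minimal mm2.properties fragment for the setup discussed above; this is a hedged sketch assuming MirrorMaker 2 of the Kafka 2.4/2.5 era (where the properties were named `topics` and `topics.blacklist`, with the blacklist taking precedence), and the cluster aliases are placeholders:

```properties
clusters = source, target
source->target.enabled = true
# whitelist: replicate everything ...
source->target.topics = .*
# ... except already-replicated topics; the blacklist wins over the whitelist
source->target.topics.blacklist = .*\.replica
```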
341Isr: 12452,5341
Thanks,
Josh
ocal IP. I ended up using the eth0 interface on each
kafka docker container.
Thanks,
Josh
On Thu, Jul 17, 2014 at 7:03 PM, Guozhang Wang wrote:
> Hi Josh,
>
> What is the Kafka version you are using? And can you describe the steps to
> re-produce this issue?
>
> Guozhang
>
Hi,
Suppose I have N partitions. I would like to have X different consumer
threads ( X < N) read from a specified set of partitions. How can I achieve
this?
Thanks,
Josh
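For X consumer threads over N partitions (X < N), each thread needs an explicit, disjoint partition set. With the modern consumer that set is handed to `consumer.assign(...)` in each thread, bypassing group rebalancing; with the 2014-era SimpleConsumer you would fetch from each partition in the set directly. The mapping itself is simple modulo arithmetic:

```java
// Sketch: deterministic partition-to-thread mapping. Every partition
// maps to exactly one thread, and together the threads cover all N
// partitions, so no record is read twice or skipped.
public class PartitionsByThread {
    public static int threadFor(int partition, int numThreads) {
        return partition % numThreads;
    }
}
```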
> You can see an example of using the SimpleConsumer here
<
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
>
Any suggestions on where in the code to modify the high-level consumer to
support reading from specific partitions?
Thanks,
Josh
.
On Thu, Aug
Is it possible to modify and use the high level consumer so that I can
ignore processing certain partitions?
On Mon, Aug 18, 2014 at 5:07 PM, Sharninder wrote:
> On Mon, Aug 18, 2014 at 7:27 PM, Josh J wrote:
>
> > > You can see an example of using the Simp
on of
the number of buckets). I can then do the same function on the consumer
when it reads the key. I'm essentially implementing a consumer-side sliding
window. Any suggestions or tips on where I would implement reading the
message key?
Thanks,
Josh
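The bucketing idea above can be sketched as a pure function: the producer derives a bucket id from the message timestamp (as a function of the bucket span and the number of buckets) and writes it as the message key; the consumer reads the key back to group records into window buckets. Parameter names here are illustrative assumptions:

```java
// Sketch: map a timestamp to one of numBuckets rotating window buckets.
// Producer side: use bucketFor(ts, ...) as the message key.
// Consumer side: parse the key to place the record in its bucket.
public class WindowBucket {
    public static int bucketFor(long timestampMs, long bucketSpanMs, int numBuckets) {
        return (int) ((timestampMs / bucketSpanMs) % numBuckets);
    }
}
```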
On Mon, Aug 18, 2014 at 6:43 PM, Jonathan Weeks
box/blob/92318c6d3f2533bbadb253c59a201e4e70f72ad2/src/main/java/org/n3r/sandbox/kafka/ConsumerGroupExample.java>.
Assuming that the number of threads and partitions is fixed.
Thanks,
Josh
runWith(Sink.ignore)
}
--
Kind regards
*Josh Meraj Maidana*
> Hey Josh,
>
> Consumption from a non-existent topic will end up with
> "LEADER_NOT_AVAILABLE".
>
> However (!) I just tested it locally (Kafka 0.11) and it seems like
> consuming from a topic that doesn't exist with auto.create.topics.enable
> set to true