> > ...beyond that for now.
> >
> > Note that in KIP-4 we are trying to introduce the admin client for such
> > tasks as creating / deleting topics; it has added such requests in the
> > upcoming 0.10.1.0 release, but the full implementation is yet to be
> > completed.
> branch the source stream into multiple ones based on the content, with each
> branched stream written to a different topic.
>
>
> Guozhang
>
>
> On Wed, Oct 5, 2016 at 7:48 AM, Gary Ogden wrote:
>
> > Guozhang. I was just looking at the source for this, and it looks like
...access to the ProcessorContext interface, which doesn't expose the
Supplier.
What if we were to use kafka connect instead of streams? Does it have the
ability to specify partitions, rf, segment size etc?
On 5 October 2016 at 09:42, Gary Ogden wrote:
> Thanks Guozhang.
>
> So there's no way we could also use InternalTopicManager to specify the
> number of partitions and RF?
...you then I think it should be fine.
>
>
> Guozhang
>
>
>
> On Tue, Oct 4, 2016 at 12:51 PM, Gary Ogden wrote:
>
> > Is it possible, in a kafka streaming job, to write to another topic based
> > on the key in the messages?
> >
> > For example, say
Sorry. I responded to the wrong message.
On 5 October 2016 at 09:40, Gary Ogden wrote:
> Thanks Guozhang.
>
> So there's no way we could also use InternalTopicManager to specify the
> number of partitions and RF?
>
> https://github.com/apache/kafka/blob/0.10.1/strea
...API works, please read the corresponding sections on the web
> docs:
>
> http://docs.confluent.io/3.0.1/streams/developer-guide.html#processor-api
>
>
> Guozhang
>
> On Mon, Oct 3, 2016 at 6:51 AM, Gary Ogden wrote:
>
> > I have a use case, and I'm wondering if it's possible to do this with Kafka.
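For readers following along, a bare-bones Processor against the 0.10.x Processor API documented at the link above; a minimal sketch (the class name is made up, and this illustrates the interface rather than any code from the thread):

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Minimal pass-through processor. init() hands us the ProcessorContext,
// which exposes topic(), partition(), forward() and schedule(), but, as
// noted earlier in the thread, not the internal topic Supplier.
public class PassThroughProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        context.forward(key, value); // send the record to downstream nodes
    }

    @Override
    public void punctuate(long timestamp) { } // invoked via context.schedule()

    @Override
    public void close() { }
}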
Is it possible, in a kafka streaming job, to write to another topic based
on the key in the messages?
For example, say the message is:
123456#{"id":56789,"type":1}
where the key is 123456, # is the delimiter, and the {} is the JSON data.
And I want to push the JSON data to another topic that will...
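The branching Guozhang describes earlier in the thread could look roughly like this against the 0.10.x Streams API; a minimal sketch, assuming the application id, the source topic "events", the target topics, and the type-1 predicate (all invented here), and that the target topics already exist:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class BranchByContent {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "branch-example"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> source = builder.stream("events"); // hypothetical topic

        // branch() routes each record to the first predicate it matches
        // and drops records that match none.
        KStream<String, String>[] branches = source.branch(
            (key, value) -> value.contains("\"type\":1"),
            (key, value) -> true); // catch-all

        branches[0].to("type-1-events"); // hypothetical target topics
        branches[1].to("other-events");

        new KafkaStreams(builder, props).start();
    }
}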
What if topics are created or deleted after the application has started?
Will they be added/removed automatically, or do we need to restart the
application to pick up the changes?
On 1 October 2016 at 04:42, Damian Guy wrote:
> That is correct.
>
> On Fri, 30 Sep 2016 at 18:00 Gary Ogden wrote:
I have a use case, and I'm wondering if it's possible to do this with Kafka.
Let's say we will have customers that will be uploading JSON to our system,
but that JSON layout will be different between each customer. They are able
to define the schema of the JSON being uploaded.
They will then be a
...that regex? If so, that could be useful.
Gary
On 30 September 2016 at 13:35, Damian Guy wrote:
> Hi Gary,
>
> In the upcoming 0.10.1 release you can do regex subscription - will that
> help?
>
> Thanks,
> Damian
>
> On Fri, 30 Sep 2016 at 14:57 Gary Ogden wrote:
>
Is it possible to use the topic filter whitelist within a Kafka Streaming
application? Or can it only be done in a consumer job?
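On the regex/whitelist question: in 0.10.1 a Streams application can subscribe by pattern directly, so this doesn't require dropping down to a plain consumer. A minimal sketch, reusing the props/builder setup from the branching sketch above (the pattern is invented):

import java.util.regex.Pattern;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

KStreamBuilder builder = new KStreamBuilder();
// stream(Pattern) consumes every topic matching the regex; topics created
// later that match are picked up on a metadata refresh, without a restart.
KStream<String, String> source = builder.stream(Pattern.compile("customer-.*"));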
metadata.max.age.ms=30
>
>
> On Thu, Feb 26, 2015 at 4:47 AM, Gary Ogden wrote:
>
> > I was actually referring to the metadata fetch. Sorry I should have been
> > more descriptive. I know we can decrease the metadata.fetch.timeout.ms
> > setting to be a lot lower, but it's
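Both settings mentioned above are plain producer properties; a sketch of tightening them on the 0.8.2-era Java producer (the values are illustrative, not recommendations):

Properties props = new Properties();
// How often to force a cluster metadata refresh, in milliseconds.
props.put("metadata.max.age.ms", "30000");
// How long send() may block fetching metadata for an unknown topic
// (an 0.8.2 producer setting; later clients use max.block.ms instead).
props.put("metadata.fetch.timeout.ms", "1000");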
the send() call will throw a BufferExhaustedException
> which, in your case, can be caught and ignored, allowing the message to drop
> on the floor.
>
> Guozhang
>
>
>
> On Wed, Feb 25, 2015 at 5:08 AM, Gary Ogden wrote:
>
> > Say the entire kafka cluster is down and there are no brokers to connect to.
Say the entire kafka cluster is down and there are no brokers to connect to.
Is it possible to use the java producer send method and not block until
there's a timeout? Is it as simple as registering a callback method?
We need the ability for our application to not have any kind of delay when
sending...
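Combining Guozhang's suggestion with the callback Gary asks about, a sketch against the 0.8.2-era Java producer (topic, key, and payload are invented; block.on.buffer.full was later superseded by max.block.ms in newer clients):

import java.util.Properties;
import org.apache.kafka.clients.producer.BufferExhaustedException;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class NonBlockingSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Don't block when the record buffer fills up (e.g. all brokers down);
        // send() throws BufferExhaustedException instead.
        props.put("block.on.buffer.full", "false");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // send() returns immediately; the callback fires asynchronously.
            producer.send(new ProducerRecord<String, String>("events", "123456", "{\"id\":56789,\"type\":1}"),
                new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception exception) {
                        if (exception != null) {
                            // Delivery failed; drop the message on the floor.
                        }
                    }
                });
        } catch (BufferExhaustedException e) {
            // Buffer full with blocking disabled; drop the message.
        } finally {
            producer.close();
        }
    }
}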
...'ve always found
> there's something that we're doing that is affecting the overall throughput
> of the systems, be it needing to play with the number of partitions,
> adjusting batch size, resizing the hadoop cluster to meet increased need
> there.
>
> On Thu, Feb 12,
...; the same information throughout the day, it lets us maintain a system where
> we have near-real-time access to most of the data we're ingesting.
>
> This certainly is something we've had to tweak in terms of the numbers of
> consumers / partitions and batch sizes to get to
...using HBase you can use Pig jobs that would read only
> the records created between specific timestamps.
>
> David
>
> On Thu, Feb 12, 2015 at 7:44 AM, Gary Ogden wrote:
>
> > So it's not possible to have 1 topic with 1 partition and many consumers
> > of that...
...If you have multiple partitions
> (say 3 for example), then you can fire up 3 consumer instances under the
> same consumer group, and each will only consume 1 partition's data. If
> order in each partition matters, then you need to do some work on the
> producer side. Hope this helps. Edwin
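Edwin's point, illustrated with the later (0.9+) Java consumer: run three copies of this with the same group.id against a 3-partition topic and each instance is assigned exactly one partition (topic and group names are invented):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group"); // same group => partitions split across instances
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("events"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d key=%s value=%s%n",
                    record.partition(), record.key(), record.value());
            }
        }
    }
}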
I'm trying to understand how the partition key works and whether I need to
specify a partition key for my topics or not. What happens if I don't
specify a PK and I have more than one consumer that wants all messages in a
topic for a certain period of time? Will those consumers get all the
messages...
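On the partition-key question: with the Java producer's default partitioner, records with the same non-null key are hashed to the same partition, while null-keyed records are spread across partitions; independent consumer groups each receive all messages either way. A small sketch, reusing a KafkaProducer<String, String> named producer as configured in the earlier sketch (topic and payloads invented):

// Same non-null key => same partition under the default hash partitioner.
producer.send(new ProducerRecord<String, String>("events", "123456", "{\"id\":1}"));
producer.send(new ProducerRecord<String, String>("events", "123456", "{\"id\":2}"));

// Null key: the producer spreads records over the available partitions.
producer.send(new ProducerRecord<String, String>("events", null, "{\"id\":3}"));

// Or pin an explicit partition, bypassing the key hash entirely.
producer.send(new ProducerRecord<String, String>("events", 0, "123456", "{\"id\":4}"));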