Confluent already supports a C client (the famous librdkafka). We are
indeed going to support a C# client, based on rdkafka-dotnet - we are
currently busy modifying the API a bit to fit our taste better :)
On Mon, Dec 5, 2016 at 6:34 PM, Tauzell, Dave
wrote:
> I don't know of any API to stream
I want to set quota configuration for my Kafka 0.10.1 cluster, but I have
some questions.
1. How do I set the 'user' group? From the documentation: 'In a cluster that
supports unauthenticated clients, user principal is a grouping of
unauthenticated users chosen by the broker using a configurable PrincipalBu
I don't know of any API to stream a message. I don't suggest putting lots of
large messages onto Kafka.
As far as documentation goes, I hear that Confluent is going to support C and
C# clients, so you could try asking questions on the Confluent mailing list.
Dave
On Dec 5, 2016, at 17:51, Doyle, Ke
We're beginning to make use of Kafka, and it is encouraging. But there are a
couple of questions I've had a hard time finding answers for.
We're using the rdkafka-dotnet client on the consumer side and it's
straightforward as far as it goes. However, documentation seems to be
scant; the Wiki
I should clarify that those requests may work, but they are not used in any
active code. The integration with the rest of the system has yet to happen.
On Mon, Dec 5, 2016 at 1:45 PM, Apurva Mehta wrote:
> It isn't ready yet. It is part of the work related to
> https://cwiki.apache.org/confluence/dis
It isn't ready yet. It is part of the work related to
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
Thanks,
Apurva
On Mon, Dec 5, 2016 at 11:10 AM, Dmitry Lazurkin wrote:
> Hello.
>
> Are requests CreateTopics and DeleteTopics r
Hi folks,
I am trying to start Kafka Connect in distributed mode as follows, but it fails
with the error below. Standalone mode is fine. It happens on the 3.0.1 and 3.1
versions of Confluent Kafka. Does anyone know the cause of this error?
Thanks,
Will
security.protocol = PLAINTEXT
internal.key
For most of our clusters, we just use auto topic creation and it’s handled
that way. Periodically we’ll go through and clean up partition counts
across everything if there’s a new high-volume topic. We also have the
ability for people to pre-create topics using a central management system.
For the
Hi,
Initially, we had only one Kafka cluster shared across all teams. But now
this cluster is very close to running out of resources (disk space, number of
partitions, etc.), so we are considering adding another Kafka cluster. But
what's the best practice for topic discovery, so that applications know
which cl
Hello.
Are the CreateTopics and DeleteTopics requests ready for production usage?
Why doesn't TopicCommand use CreateTopics / DeleteTopics?
Thanks.
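
For reference, the admin client that came out of KIP-4 shipped only in later
releases (0.11 and later), so none of the following exists in 0.10.1; in that
version TopicCommand still talks to ZooKeeper, which is consistent with the
answer above. A minimal sketch of creating a topic with the later Java
AdminClient; the broker address, topic name, partition count, and replication
factor are all placeholders:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // "example-topic" with 3 partitions and replication factor 2 (placeholders)
            NewTopic topic = new NewTopic("example-topic", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}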
Hi Mathieu,
if you are happy to share your code privately, it would help. At the moment
I'm struggling to see how we can get into this situation, so I think your
topology would be useful.
Thanks,
Damian
On Mon, 5 Dec 2016 at 16:34 Mathieu Fenniak
wrote:
> Hi Damian,
>
> Yes... I can see how mo
By default, Kafka Streams uses *event-time* and not *system-time* to
assign records to windows. That's why you observe this.
Please have a look here and follow up if you have further questions:
http://docs.confluent.io/current/streams/concepts.html#time
-Matthias
On 12/5/16 8:42 AM, Jon Yeargers
I'm creating aggregated values as follows:
kStream.groupByKey().aggregate( ... , TimeWindows.of(20 * 60 *
1000L).advanceBy(60 * 1000L), ...);
As I process each aggregate, I'm storing the current system clock time in the
aggregated record.
I'm watching the aggregates come through with a subsequent '.for
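
To make the event-time behaviour described above concrete, here is a minimal
sketch of a windowed aggregation with the 0.10.1 Streams API. The topic name,
store name, and the simple count aggregate are placeholders rather than
anything from the original post; the point is only that records are assigned
to windows by their record (event) timestamps, not by the wall clock at the
moment the aggregator runs.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;
import java.util.Properties;

public class WindowedAggregationSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-agg-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> stream = builder.stream("input-topic"); // placeholder topic

        // 20-minute windows advancing every minute, as in the post above.
        // Each record lands in windows according to its record (event) timestamp,
        // not the system clock at the moment the aggregator is invoked.
        KTable<Windowed<String>, Long> counts = stream
            .groupByKey()
            .aggregate(
                () -> 0L,                                  // initializer
                (key, value, aggregate) -> aggregate + 1,  // aggregator (a simple count)
                TimeWindows.of(20 * 60 * 1000L).advanceBy(60 * 1000L),
                Serdes.Long(),
                "windowed-agg-store");                     // placeholder store name

        // Print window start time and current aggregate as updates arrive.
        counts.toStream().foreach((windowedKey, count) ->
            System.out.println(windowedKey.window().start() + " -> " + count));

        new KafkaStreams(builder, props).start();
    }
}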
Hi Damian,
Yes... I can see how most of the stack trace is rather meaningless.
Unfortunately I don't have a minimal test case, and I don't want to burden
you by dumping the entire application. (I could share it privately, if
you'd like.)
Based upon the stack trace, the relevant pieces involved a
Thomas,
I’m always running ZK separate from Kafka. Mind you, no multi-region, just
multi-AZ.
I have never had issues with default settings. It’s possible that once your
cluster gets bigger, you may have to increase the timeouts. Never had a
problem with cluster size of ~20 brokers.
Happy to hear f
Thanks for the reply, Radek. So you're running with 6s then? I'm
surprised, I thought people were generally increasing this value when
running in EC2. Can I ask if you folks are running ZK on the same
instances as your Kafka brokers? We do, and yes we know it's somewhat
frowned upon.
-Tommy
On Mo
Hi Thomas,
Defaults are good for sure. Never had a problem with default timeouts in
AWS.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 5, 2016 at 4:58:41 PM, Thomas Becker (tobec...@tivo.com) wrote:
I know several folks are running Kafka in AWS, can someone give me an
idea
I know several folks are running Kafka in AWS, can someone give me an
idea of what sort of values you're using for ZK session timeouts?
--
Tommy Becker
Senior Software Engineer
O +1 919.460.4747
tivo.com
Hi Mathieu,
I'm trying to make sense of the rather long stack trace in the gist you
provided. Can you possibly share your streams topology with us?
Thanks,
Damian
On Mon, 5 Dec 2016 at 14:14 Mathieu Fenniak
wrote:
> Hi Eno,
>
> This exception occurred w/ trunk @ e43bbce (current as-of Saturday
FWIW - solved this by calling '.poll()' with 'enable.auto.commit' set to
false.
On Mon, Dec 5, 2016 at 5:53 AM, Mathieu Fenniak <
mathieu.fenn...@replicon.com> wrote:
> Hi Jon,
>
> Here are some lag monitoring options that are external to the consumer
> application itself; I don't know if these
Hi Eno,
This exception occurred w/ trunk @ e43bbce (current as-of Saturday). I was
bit by KAFKA-4311 (I believe) when trying to upgrade to 0.10.1.0, so with
that issue now resolved I thought I'd check trunk out to see if any other
issues remain.
Mathieu
On Sun, Dec 4, 2016 at 12:37 AM, Eno The
Hi,
In my application I have replicated internal changelog topics.
From time to time I get this exception and I am not able to figure out why.
[2016-12-05 11:05:10,635] ERROR Error sending record to topic
test-stream-key-table-changelog
(org.apache.kafka.streams.processor.internals.RecordCollec
Hi Jon,
Here are some lag monitoring options that are external to the consumer
application itself; I don't know if these will be appropriate for you. You
can use a command-line tool like kafka-consumer-groups.sh to monitor
consumer group lag externally (
http://kafka.apache.org/documentation.html
Is there a way to get updated consumer position(s) without subscribing to a
topic? I can achieve this by continually closing / reopening a
KafkaConsumer object but this is problematic as it often times out.
I'm getting consumer lag from a combination of
(start) .seekToEnd() (and then) .position()
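
A minimal sketch of that seekToEnd()/position() pattern follows. It uses
assign() rather than subscribe() to read positions without joining the group,
and sets 'enable.auto.commit' to false as in the follow-up above; the broker
address, topic, partition, and group name are placeholders, and it assumes
group.id is pointed at the group whose committed offsets you want to compare
against the log end. Since it never subscribes or commits, it should not
interfere with the group it is monitoring.

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.util.Collections;
import java.util.Properties;

public class LagCheckSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder brokers
        props.put("group.id", "group-to-monitor");          // group whose lag you want
        props.put("enable.auto.commit", "false");           // don't disturb its committed offsets
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("example-topic", 0); // placeholder partition
            consumer.assign(Collections.singletonList(tp));

            // Log-end offset for the partition: seek to the end, then ask for the position.
            consumer.seekToEnd(Collections.singletonList(tp));
            long endOffset = consumer.position(tp);

            // Offset last committed by the group named in group.id (null if nothing committed).
            OffsetAndMetadata committed = consumer.committed(tp);
            long lag = (committed == null) ? endOffset : endOffset - committed.offset();
            System.out.println("Lag for " + tp + " = " + lag);
        }
    }
}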
Hi,
We are using mirrormaker to mirror topics from one cluster to another, and I
wanted to get some advice from the community on how people are doing mirroring.
In particular, how are people dealing with topic creation?
Do you turn on auto-topic creation in your destination clusters
(auto.crea