Great, thanks.
On Tue, Sep 13, 2016 at 5:35 PM, Shikhar Bhushan wrote:
> Good point about size-based.
>
> I created a JIRA to track this feature:
> https://issues.apache.org/jira/browse/KAFKA-4161
>
> On Tue, Sep 13, 2016 at 4:19 PM Dean Arnold wrote:
>
> > Yes, usi[…]
>
> […] "offset.flush.interval.ms". Both options could
> be configured but whichever happens first would reset the other.
>
> What do you think?
>
> Best,
>
> Shikhar
>
> On Fri, Sep 9, 2016 at 9:55 AM Dean Arnold wrote:
>
> > I have a need for volume based commits in a few sink connectors, and the
> > current interval-only based commit strategy creates some headaches. After
> > skimming the code, it appears that an alternate put() method that
> > returned a Map<TopicPartition, OffsetAndMetadata> might be used to allow
> > a sink connector to keep Kafka up to date wrt committed offsets.
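(For reference: the feature tracked in KAFKA-4161 eventually landed as
KIP-89 in Kafka 0.10.2, which added SinkTask.preCommit() returning exactly
such a map; SinkTaskContext also gained requestCommit() for asking the
framework to commit early. A minimal sketch of a volume-based commit policy
on that API; the threshold constant and the task boilerplate are
illustrative only.)

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class VolumeCommitSinkTask extends SinkTask {
    // Illustrative threshold; in practice this would come from task config.
    private static final long COMMIT_EVERY_N_RECORDS = 10_000;

    private final Map<TopicPartition, OffsetAndMetadata> durable = new HashMap<>();
    private long sinceLastCommit = 0;

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord r : records) {
            // ... write r to the external system here ...
            durable.put(new TopicPartition(r.topic(), r.kafkaPartition()),
                        new OffsetAndMetadata(r.kafkaOffset() + 1));
            sinceLastCommit++;
        }
        // Volume-based trigger: ask the framework for an early commit.
        if (sinceLastCommit >= COMMIT_EVERY_N_RECORDS)
            context.requestCommit();
    }

    @Override
    public Map<TopicPartition, OffsetAndMetadata> preCommit(
            Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        // Report only offsets the sink has actually made durable, so Kafka's
        // committed offsets never run ahead of the external system.
        sinceLastCommit = 0;
        return new HashMap<>(durable);
    }

    @Override public void start(Map<String, String> props) { }
    @Override public void stop() { }
    @Override public String version() { return "0.1"; }
}

Returning only durably-written offsets from preCommit() is what keeps a
restart from skipping records the sink never actually persisted.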
I'm looking for a comprehensive example for deploying a new connector
plugin into an existing Kafka cluster. Is there any standard solution for
distributing a connector jar across nodes and then starting the connector?
Or is it a manual copy process (e.g., via pdcp), followed by calls to the
Connect REST API?
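As far as I know there is no built-in distribution step: the jar has to
land on every worker's CLASSPATH (manual copy, pdcp, or config management)
and the workers restarted, after which the connector is created once
through any worker's REST endpoint and tasks are balanced across the
cluster. A sketch of that last REST call in plain Java; the host, connector
class, and topic are placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StartConnector {
    public static void main(String[] args) throws Exception {
        // Placeholder connector name and config.
        String json = "{\"name\": \"my-sink\", \"config\": {"
                    + "\"connector.class\": \"com.example.MySinkConnector\","
                    + "\"topics\": \"my-topic\","
                    + "\"tasks.max\": \"2\"}}";
        URL url = new URL("http://connect-worker:8083/connectors");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        // 201 Created on success; the workers pick up the new tasks.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}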
BTW, it appears the missing msgs are at the end of the CSV file, so maybe
the producer doesn't properly flush when it gets EOF on stdin?
On Wed, Jun 15, 2016 at 11:21 AM, Dean Arnold wrote:

> I'm seeing similar issues with 0.9.0.1.
>
> I'm feeding CSV records (65536 total, 1 record per msg) to the console
> producer, which are consumed via a sink connector (using connect-standalone
> and a single partition). The sink occasionally reports flushing less than
> 65536 msgs via the sink flush(). Rest[…]
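One way to test the EOF theory is to bypass the console producer with a
small standalone producer that flushes explicitly before exiting; if all
65536 msgs then arrive, the console producer's shutdown path is the likely
culprit. A sketch, with placeholder broker address and topic name:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CsvProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("acks", "all");                          // wait for broker acks
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = in.readLine()) != null)
                producer.send(new ProducerRecord<>("csv-topic", line));
            // Block until every buffered record is sent and acked, so nothing
            // is silently dropped at EOF.
            producer.flush();
        }  // close() flushes too; the explicit call just makes the intent clear
    }
}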
Have you tried either of the SinkTaskContext.offset() methods?
https://kafka.apache.org/0100/javadoc/org/apache/kafka/connect/sink/SinkTaskContext.html
On Tue, May 31, 2016 at 8:43 AM, Jack Lund wrote:
> I'm trying to use the Connector API to write data to a backing store (HDFS
> for now, but […]
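For completeness, a sketch of how those methods are typically used: on
partition assignment, look up the offset recorded in the backing store and
rewind the framework's consumer to it, so the sink (not Kafka) is the
source of truth for progress. lookupLastCommitted() is a hypothetical
helper standing in for an HDFS/DB read; written against the 0.10 API:

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class StoreBackedSinkTask extends SinkTask {

    @Override
    public void open(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            // Resume from the offset the backing store says it has, overriding
            // whatever is recorded in Kafka's consumer offsets.
            context.offset(tp, lookupLastCommitted(tp) + 1);
        }
    }

    // Hypothetical helper: read the last durably-written offset for this
    // partition from the backing store (HDFS marker file, DB row, etc.).
    private long lookupLastCommitted(TopicPartition tp) {
        return -1L;  // -1 + 1 == 0, i.e. start from the beginning
    }

    @Override public void put(Collection<SinkRecord> records) { /* write out */ }
    @Override public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) { }
    @Override public void start(Map<String, String> props) { }
    @Override public void stop() { }
    @Override public String version() { return "0.1"; }
}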
I need to run an external filter program from a SinkTask. Is there anything
that might break if I fork/exec in the start() method and forward the data
through pipes?
TIA,
Dean
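A fork/exec from start() should generally be safe, since each task runs on
its own worker thread; the usual hazards are pipe-buffer deadlock (always
flush, and keep the child's stdout drained) and leaking the child process
across task restarts, so the child should be reaped in stop(). A sketch
using ProcessBuilder, assuming a hypothetical /usr/local/bin/myfilter that
emits exactly one output line per input line:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class FilterPipe {
    private Process proc;
    private BufferedWriter toFilter;
    private BufferedReader fromFilter;

    // Call from SinkTask.start().
    public void start() throws Exception {
        proc = new ProcessBuilder("/usr/local/bin/myfilter")
                .redirectErrorStream(true)   // fold stderr into stdout
                .start();
        toFilter = new BufferedWriter(new OutputStreamWriter(proc.getOutputStream()));
        fromFilter = new BufferedReader(new InputStreamReader(proc.getInputStream()));
    }

    // One record in, one record out; flushing before reading avoids the
    // classic deadlock where both sides sit on full pipe buffers.
    public String filter(String line) throws Exception {
        toFilter.write(line);
        toFilter.newLine();
        toFilter.flush();
        return fromFilter.readLine();
    }

    // Call from SinkTask.stop(): closing stdin gives the child EOF so it
    // can exit, then reap it to avoid zombies.
    public void stop() throws Exception {
        toFilter.close();
        proc.waitFor();
    }
}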
xiao wrote:
> You can use the built-in mirror maker to mirror data from one Kafka to the
> other. http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
>
> On Thu, 5 May 2016 at 10:47 Dean Arnold wrote:
>
> > I'm developing a Streams plugin for Kafka 0.10, to be run in a dev
> > sandbox, but pull data from a production 0.9 Kafka deployment. Is there
> > a source connector that can be used from the 0.10 sandbox to connect to
> > the 0.9 cluster? Given the number of changes/features in 0.10, such a
> > connector would […]
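Short of a dedicated connector, the mirror-maker route above (run with the
0.9 client libraries, since older clients can talk to newer brokers but not
the reverse) or a small hand-rolled bridge does the job. A sketch of such a
bridge built against the 0.9 clients jar; broker addresses, group id, and
topic are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NineToTenBridge {
    public static void main(String[] args) {
        Properties cons = new Properties();
        cons.put("bootstrap.servers", "prod-09-broker:9092");    // 0.9 production
        cons.put("group.id", "sandbox-bridge");
        cons.put("key.deserializer",
                 "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        cons.put("value.deserializer",
                 "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        Properties prod = new Properties();
        prod.put("bootstrap.servers", "sandbox-10-broker:9092"); // 0.10 sandbox
        prod.put("key.serializer",
                 "org.apache.kafka.common.serialization.ByteArraySerializer");
        prod.put("value.serializer",
                 "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(cons);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(prod)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {  // run until killed
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                for (ConsumerRecord<byte[], byte[]> r : records)
                    producer.send(new ProducerRecord<>(r.topic(), r.key(), r.value()));
                producer.flush();
                consumer.commitSync();  // commit source offsets only after the copy
            }
        }
    }
}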