Hi,
I'm trying to use the Confluent JDBC Sink as Sri is doing, but without a schema.
I do not want to write a "schema" + "payload" envelope for each record, as my
records are all for the same table and the schema is not going to change (this
is a very simple project).
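To be concrete, the "schema" + "payload" envelope (what the JsonConverter produces
when schemas are enabled) looks roughly like this for each record; the table and
field names below are only an example:

{
  "schema": {
    "type": "struct",
    "name": "my_table",
    "optional": false,
    "fields": [
      { "type": "int64",  "optional": false, "field": "id" },
      { "type": "string", "optional": true,  "field": "name" }
    ]
  },
  "payload": { "id": 42, "name": "example" }
}

The "payload" part is the only thing that actually changes from record to record in my case.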
Thanks
Enrico
On Mon, 19/09/20
Hi,
I'm running Kafka by launching KafkaServerStartable inside my JVM (I'm using
version 0.10.0.0).
I'm accessing the internal KafkaServer field using reflection
import java.lang.reflect.Field;
import kafka.server.KafkaServer;

Field serverField = kafkaServer.getClass().getDeclaredField("server");
serverField.setAccessible(true);
KafkaServer server = (KafkaServer) serverField.get(kafkaServer); // kafkaServer is the KafkaServerStartable
ges (using log compaction to save space). You'd need to think
about all the edge cases and race conditions.
Dave
On Apr 29, 2016, at 03:32, Enrico Olivelli - Diennea
<enrico.olive...@diennea.com> wrote:
Hi,
I would like to use Kafka as a transaction log to support a replicated state
machine use case, but currently (using 0.9.x) there is a feature I would like
to have.
I'm using Apache BookKeeper, where this feature (fencing) is native, but I have
some customers who already use Kafka.
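For illustration only, here is a rough sketch of the kind of epoch-based fencing
one could build on the consumer side of such a state machine; every class and
variable name below is hypothetical, and as Dave notes above, the edge cases and
race conditions are the hard part:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Hypothetical sketch: writers stamp every record key with their epoch, and the
// replicated state machine drops records from stale epochs, so an old writer is
// effectively "fenced" once a new epoch appears in the log.
public class EpochFencingApplier {
    private long highestEpochSeen = -1L;

    public void apply(ConsumerRecords<String, String> records) {
        for (ConsumerRecord<String, String> record : records) {
            long epoch = Long.parseLong(record.key()); // epoch carried in the key (assumption)
            if (epoch < highestEpochSeen) {
                continue; // record from a fenced (older) writer: ignore it
            }
            highestEpochSeen = epoch;
            // apply record.value() to the local state machine here
        }
    }
}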
On Wed, Dec 23, 2015 at 1:08 AM, Enrico Olivelli - Diennea
<enrico.olive...@diennea.com> wrote:
Hi,
I'm running a brand new Kafka cluster (version 0.9.0.0). During my tests I
noticed this error at Consumer.partitionsFor during a full cluster restart.
My DEV cluster is made of 4 brokers.
Maybe I can work around the error by doing sanity checks on the cluster (using
my platform middleware tools) before
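To make the "sanity check" idea concrete, here is a rough sketch of what I have
in mind, retrying partitionsFor until topic metadata is served again; the class
name, retry count, and sleep interval below are arbitrary:

import java.util.List;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

// Hypothetical sketch: poll partitionsFor until the restarted cluster is able
// to serve topic metadata again, instead of failing on the first attempt.
public class PartitionsForSanityCheck {
    public static List<PartitionInfo> waitForPartitions(KafkaConsumer<?, ?> consumer,
                                                        String topic,
                                                        int maxAttempts) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                List<PartitionInfo> partitions = consumer.partitionsFor(topic);
                if (partitions != null && !partitions.isEmpty()) {
                    return partitions;
                }
            } catch (Exception e) {
                // metadata not available yet, brokers may still be restarting
            }
            Thread.sleep(1000L);
        }
        throw new IllegalStateException("no metadata for topic " + topic
                + " after " + maxAttempts + " attempts");
    }
}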
Hi,
I would like to start using Kafka. Can I start from 0.9, or is it better to
develop on 0.8.2.1 and then migrate to 0.9?
My plan is to be in production by September.
Will a 0.8.2.1 client (producer/consumer) be able to talk to 0.9 brokers?
Is there any public Maven artifact for Kafka 0.9?