.map((key, value) => new KeyValue(getAccessLogKey(value), toBcAccessLog(value)))
> >
> > filteredStream
> >   .through("accesslog-" + testRun,
> >     stringSerializer,
> >     bcAccessLogSerializer,
> >     stringDeserializer,
> >     bcAccessLogDeserializer)
> >   .mapValues(value => {
> >     println("filteredSourceStream value: " + value)
> >     value
> >   })
> >   .process(new CdcProcessorSupplier)
> >
> > val stream: KafkaStreams = new KafkaStreams(builder, streamingConfig)
> > println("\nstart filtering BcAccessLog test run: " + testRun + "\n")
> > stream.start()
> > }
> >
> > Regards,
> > Fred Patton
> >
>
>
>
> --
> -- Guozhang
>
--
Liquan Pei
Software Engineer, Confluent Inc
containers using the docker kill or docker restart command. When the
> >> container is up again a rebalance happens, and sometimes a few tasks no
> >> longer consume messages even though the onPartitionAssigned function
> >> says that they are handling a partition
t use offsets, but there doesn't appear to be a way to
> disable committing offsets.
>
> -Peter
>
--
Liquan Pei
Software Engineer, Confluent Inc
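Assuming Peter is on the plain Java consumer rather than Connect (the thread doesn't confirm which), a minimal sketch of running with commits disabled looks like this; every name below is a placeholder:

    import java.util.Properties
    import org.apache.kafka.clients.consumer.KafkaConsumer

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")  // placeholder
    props.put("group.id", "no-commit-group")          // placeholder
    props.put("enable.auto.commit", "false")          // stop automatic commits
    props.put("key.deserializer",
      "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer",
      "org.apache.kafka.common.serialization.StringDeserializer")
    val consumer = new KafkaConsumer[String, String](props)
    // poll() as usual and simply never call commitSync()/commitAsync();
    // no offsets are then written for this group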
ng. When the failed worker is up again the tasks are
> > distributed correctly among the two workers, but some tasks no longer get
> > new messages. How can I check that all the input topic partitions have
> > actually been reassigned correctly?
> >
> > Matteo
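One way to check the assignment from the outside is the consumer-groups tool that ships with Kafka; a sketch, where the group name is a placeholder (for a sink connector it is typically connect-<connector-name>):

    bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 \
      --describe --group my-group

The describe output lists every partition with its current offset, log-end offset, lag, and owner, so unassigned or stalled partitions stand out.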
tion I read that the connector saves the offsets
> of the tasks in a special topic in Kafka (the one specified via
> offset.storage.topic), but it is empty even though the connector processes
> messages. Is it normal?
>
> Thanks,
> Matteo
>
--
Liquan Pei
Software Engineer, Confluent Inc
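To double-check, one can tail the offsets topic directly; a sketch, where connect-offsets is just an assumed value of offset.storage.topic:

    bin/kafka-console-consumer.sh --zookeeper localhost:2181 \
      --topic connect-offsets --from-beginning

Note that source offsets are only flushed every offset.flush.interval.ms, and sink connectors track progress through Kafka's own consumer offsets instead, either of which can make this topic look empty.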
tml
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> /**
>
> Thanks,
>
> Gwen
>
--
Liquan Pei
Software Engineer, Confluent Inc
Regards,
>
> Balthasar Schopman
> DevOps Software Engineer
> LeaseWeb Technologies B.V.
>
> T: +31 20 316 0232
> M:
> E: b.schop...@tech.leaseweb.com
> W: http://www.leaseweb.com
>
> Luttenbergweg 8, 1101 EC Amsterdam, Netherlands
>
>
>
--
Liquan Pei
Software Engineer, Confluent Inc
per consumer (via consumer configs) and, in addition to
> disabling auto commit, these changes have noticeably improved CPU usage.
>
> Ideally, what would be a better value for the heartbeat interval, one that
> doesn't unnecessarily flood these messages and cause the broker to
> continuously process them?
>
> -Jaikiran
>
--
Liquan Pei
Software Engineer, Confluent Inc
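The client docs suggest keeping heartbeat.interval.ms at no more than one third of session.timeout.ms. A sketch with illustrative values, not a tuned recommendation:

    # illustrative consumer settings, not tuned for any particular workload
    session.timeout.ms=30000
    # keep heartbeats well under a third of session.timeout.ms
    heartbeat.interval.ms=10000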
]}].
>
>
> And when this happens, basically all these partitions with zero latest
> offset fail to get new data. After we restart the controller, everything
> goes back normally.
>
> Have you seen a similar issue before, and any idea about the root cause? What
> other information do you suggest collecting to get to the root cause?
>
> Thanks,
> Qi
>
--
Liquan Pei
Software Engineer, Confluent Inc
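If it reoccurs, it may help to capture the latest offsets as each broker reports them; a sketch with the stock offset tool, where the topic and broker list are placeholders:

    bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
      --broker-list broker1:9092,broker2:9092 --topic my-topic --time -1

The controller.log and state-change.log on the controller broker are also worth collecting, since they usually show whether leadership changes were propagated.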
.
>
>
> I can provide detailed reproduction steps if need be. The key parameters
> are that there must be at least 2 brokers involved, and the max fetch size
> should be reduced, to limit the size of the fetch batches.
>
>
> If anyone can verify what I'm seeing I'll
Hi
There is a command line option, --new-consumer, that controls which consumer to use.
Thanks,
Liquan
On Mon, Apr 25, 2016 at 1:07 PM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com> wrote:
> Does kafka-console-consumer.sh utilize New Consumer API or Old Consumer
> API?
>
>
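Concretely, a sketch with placeholder hosts and topic; the 0.9/0.10 console consumer defaults to the old consumer:

    # old (ZooKeeper-based) consumer -- the default
    bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
    # new consumer
    bin/kafka-console-consumer.sh --new-consumer \
      --bootstrap-server localhost:9092 --topic test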
Thank you
> florin
>
--
Liquan Pei
Software Engineer, Confluent Inc
Kafka: The Definitive Guide (
> http://shop.oreilly.com/product/0636920044123.do)
>
> On Thu, Apr 21, 2016 at 9:38 PM, Mudit Agarwal >
> wrote:
>
> > Hi,
> > Any recommendations for any online guide/link on managing/Administration
> > of kafka cluster.
> > Thanks, Mudit
>
--
Liquan Pei
Software Engineer, Confluent Inc
message I received. I would like to
> get the Avro schema from the received bytes and would like to use that for
> decoding. Is that right? If so, how can I retrieve it from the received
> object?
> Or is there any better approach?
>
> Thanks.
> --
> -Ratha
> http://vvrat
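One common approach, assuming Confluent's Schema Registry is in use (which the original message doesn't confirm), is to let KafkaAvroDeserializer look the writer schema up from the id embedded in each message instead of extracting it by hand; all names below are placeholders:

    import java.util.Properties
    import org.apache.kafka.clients.consumer.KafkaConsumer

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("group.id", "avro-reader")
    props.put("key.deserializer",
      "io.confluent.kafka.serializers.KafkaAvroDeserializer")
    props.put("value.deserializer",
      "io.confluent.kafka.serializers.KafkaAvroDeserializer")
    props.put("schema.registry.url", "http://localhost:8081")
    // records now arrive as org.apache.avro.generic.GenericRecord,
    // already decoded with the schema the producer registered
    val consumer = new KafkaConsumer[AnyRef, AnyRef](props)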
offsetsBefore() method.
>
> --
> Best regards,
> Marko
> www.kafkatool.com
>
>
--
Liquan Pei
Software Engineer, Confluent Inc
control
> # the length of time a lock on a TopicPartition can be held
> # by the coordinator broker.
> session.timeout.ms=18
> request.timeout.ms=190000
> consumer.session.timeout.ms=18
> consumer.request.timeout.ms=19
>
--
Liquan Pei
Software Engineer, Confluent Inc
TLSv1]
> ssl.keystore.location = null
> heartbeat.interval.ms = 3000
> auto.commit.interval.ms = 5000
> receive.buffer.bytes = 32768
> ssl.cipher.suites = null
> ssl.truststore.type = JKS
> security.protocol = PLAINTEXT
> ssl.truststore.location = null
> ssl.keystore.password = null
> ssl.keymanager.algorithm = IbmX509
> metrics.sample.window.ms = 3
> fetch.min.bytes = 1
> send.buffer.bytes = 131072
> auto.offset.reset = latest
>
--
Liquan Pei
Software Engineer, Confluent Inc
threads, if i dont use kafka's
> > automatic consumer group abstraction.
> >
> > Thanks
> > Pradeep
> >
> > On Sat, Apr 9, 2016 at 3:12 AM, Liquan Pei wrote:
> >
> > > Hi Pradeep,
> > >
> > > Can you try to set enable.auto.commit
based Kafka Consumer?
> Once I reset one of my consumers to zero, do I have to do offset management
> myself for the other consumer threads, or does Kafka automatically lower the
> offset to the first thread's read offset?
>
> Any information / material pointing to the solution is highly appreciated.
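With the new consumer, a seek only moves the consumer you call it on; every other consumer keeps its own committed position. A sketch of rewinding one consumer, with topic and partition assumed:

    import java.util.Arrays
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.TopicPartition

    // props: the usual bootstrap/deserializer settings, omitted here
    val consumer = new KafkaConsumer[String, String](props)
    val tp = new TopicPartition("my-topic", 0)  // assumed topic/partition
    consumer.assign(Arrays.asList(tp))  // manual assignment, no group management
    consumer.seek(tp, 0L)               // rewinds only this consumer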
Comments please. Be gentle!
>
> https://github.com/andrewstevenson/stream-reactor/tree/master/kafka-connect
>
> Regards
>
> Andrew
>
--
Liquan Pei
Department of Physics
University of Massachusetts Amherst
ct.runtime.SourceTaskOffsetCommitter:112)
> > >
> > > Is any other configuration required?
> > >
> > > Thanks,
> > > Shiti
> > >
> >
> >
> >
> > --
> > *Alex Loddengaard | **Solutions Architect | Confluent*
> > *Download Apache Kafka and Confluent Platform: www.confluent.io/download
> > <http://www.confluent.io/download>*
> >
>
--
Liquan Pei
Department of Physics
University of Massachusetts Amherst
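For reference, the settings SourceTaskOffsetCommitter depends on in a standalone worker look roughly like this; a sketch with assumed values, not a known fix for the log line above:

    # standalone Connect worker, offset-related settings (illustrative)
    offset.storage.file.filename=/tmp/connect.offsets
    # how often SourceTaskOffsetCommitter flushes source offsets
    offset.flush.interval.ms=10000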
vers which are up?
> Kafka cluster doesn't handle this gracefully?
>
> Thanks,
> Fang
>
--
Liquan Pei
Department of Physics
University of Massachusetts Amherst
going to change the key. Stupid mistake.
>
> However, just out of anxiety, I want to know whether we can turn off
> writing the key to the broker. Any configuration I can change to achieve
> this?
>
> -Thanks,
> Mohit Kathuria
>
--
Liquan Pei
Department of Physics
University of Massachusetts Amherst
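As far as I know there is no broker-side switch for this, but the producer writes a null key whenever the key is omitted; a sketch with an assumed topic name:

    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    // producerProps: the usual bootstrap/serializer settings, omitted here
    val producer = new KafkaProducer[String, String](producerProps)
    // two-argument constructor: no key, so null is written for the key
    producer.send(new ProducerRecord[String, String]("my-topic", "value-without-key"))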