Actually, the suppress doesn't matter; it happens later in the code. I've also
tried removing it and adding a grace period to the window function, but the
issue persists.
--
Alessandro Tagliapietra
On Tue, Jul 16, 2019 at 10:17 PM Alessandro Tagliapietra <
tagliapietra.alessan...@gmail.com> wrote:
Hello everyone,
I have an issue trying to window data. I'm sending the exact same data to a
topic for 2 different keys; key number 1 is acting properly, but key number 2
isn't.
As you can see here
https://gist.github.com/alex88/dd68a0ce4ae46c37edfc7492b6e16bc8#file-gistfile1-txt
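For reference, a windowed aggregation with an explicit grace period and suppression looks roughly like this in the Kafka Streams 2.x DSL. This is a minimal sketch, not the poster's actual topology; the topic names, store name, and window/grace durations are placeholders:

```java
import java.time.Duration;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowedCountSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.<String, String>stream("input-topic")   // placeholder topic
               .groupByKey()
               // 1-minute windows; grace lets late-arriving records
               // still update their window before it closes
               .windowedBy(TimeWindows.of(Duration.ofMinutes(1))
                                      .grace(Duration.ofSeconds(30)))
               .count(Materialized.as("counts-store"))  // placeholder store name
               // emit only the final result per window, once grace expires
               .suppress(Suppressed.untilWindowCloses(
                       Suppressed.BufferConfig.unbounded()))
               .toStream()
               .to("output-topic");                     // placeholder topic
    }
}
```

Note that `suppress(untilWindowCloses(...))` holds results until stream time passes the window end plus grace, so with low-traffic keys output can appear delayed rather than missing.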
Hello,
I have Kafka cluster deployed into Kubernetes. And I have several
producers/consumers that are deployed in the same kubernetes cluster.
We use TLS between Kafka Brokers and clients.
I noticed that in case users have the wrong configuration and can't
properly complete the SSL/TLS handshake, they are p
Hello Ashok
Adding to what Sophie wrote, if you use a custom RocksDBConfigSetter, then
override the BlockBasedTableConfig like the following and call
options.setTableFormatConfig(tableConfig)
at the end.
BlockBasedTableConfig tableConfig = (BlockBasedTableConfig)
options.tableFormatConfig();
tableConf
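Put together, a config setter following that advice might look like the sketch below. The cache and block sizes are illustrative values, not recommendations:

```java
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class BoundedMemoryConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options,
                          Map<String, Object> configs) {
        // Reuse the table config Streams already created, so its
        // existing defaults are preserved rather than replaced.
        BlockBasedTableConfig tableConfig =
                (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);  // example: 16 MiB
        tableConfig.setBlockSize(16 * 1024L);              // example: 16 KiB
        // The step stressed above: hand the modified config back,
        // otherwise the changes are silently dropped.
        options.setTableFormatConfig(tableConfig);
    }
}
```

The class is registered via the `rocksdb.config.setter` Streams property (`StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG`).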
Hi Ashok,
1) RocksDB uses memory in four ways, one of which (iterators) *should* be
negligible -- however if you have a very large number of them open at any
one time, they can consume a lot of memory (until they are closed). If you
are opening many iterators throughout the day, consider closing t
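Since `KeyValueIterator` implements `AutoCloseable`, try-with-resources is the usual way to guarantee the native RocksDB iterator is released. A small sketch (the store type and key/value types are placeholders):

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class IteratorHygiene {
    static long countEntries(ReadOnlyKeyValueStore<String, Long> store) {
        long n = 0;
        // try-with-resources closes the iterator even if iteration throws,
        // freeing the RocksDB-side memory it pins while open
        try (KeyValueIterator<String, Long> it = store.all()) {
            while (it.hasNext()) {
                KeyValue<String, Long> kv = it.next();
                n++;
            }
        }
        return n;
    }
}
```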
David, I'd look first at ways to speed up the processing downstream of the
consumer, i.e. whatever logic you have writing to HDFS, and in particular
to reduce blocking there, as that is more likely to be the bottleneck than
the consumer itself. Some ideas (that I've had success with):
- turn off a
Hi,
In our streaming instance, the internal caching has been disabled and RocksDB
caching has been enabled, with the override as shown below. Although the heap
is restricted to 36GB, the memory utilization is going over 100GB in a week and
eventually runs out of memory. As part of the profili
Hi list,
I have a custom kafka consumer app dumping data from various topics to
HDFS. My kafka cluster consists of 5 physical nodes (56 CPU threads,
384G RAM, RAID5s). The consumer is a 28-instance app, 20 consumer
threads in each instance (all using the same consumer group)
This app reads l
Hi,
I'm a researcher from a Belgian university, UCLouvain. Our team is
currently working on highly scalable computing architectures
and investigating various messaging platforms including Kafka.
For more realistic benchmarking, we'd like to use some real-world traces
including informa
Hi,
i) To upgrade to the latest version of Kafka from our cluster to the cloud,
how would you do the upgrade? Any documents?
ii) How would you move the data from the cluster (we use a SQL database) into
the cloud?
iii) Is the same connectivity available in the cloud for the same systems?
Which servers are talking to whom?
Thanks
Hi kafka users,
We are seeing the below behavior in our Kafka labs, wherein:
Kerberos ticket lifetime: 5 mins
Kerberos ticket renewal time: 10 mins
The Kafka broker is brought up and continues to work fine.
The default wait factor across refreshes of the ticket has been retained (0.8).
So we
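For context, the relevant broker/client settings and their defaults are roughly as follows (a sketch from the standard Kafka Kerberos configuration; with a 5-minute ticket lifetime, a 0.8 window factor means a refresh attempt around the 4-minute mark):

```properties
# Refresh is attempted at renew.window.factor * ticket lifetime,
# randomized by renew.jitter.
sasl.kerberos.ticket.renew.window.factor=0.80
sasl.kerberos.ticket.renew.jitter=0.05
# Minimum time between re-login attempts, in milliseconds.
sasl.kerberos.min.time.before.relogin=60000
```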
Hello Aruna,
if the duplication you are referring to is duplication of the
events/records that are produced to and consumed from Kafka, exactly-once
semantics and transactions are what you are looking for.
Kafka has supported exactly-once since version 0.11 (IIRC); it
means that events
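In configuration terms, exactly-once is enabled roughly as follows (a sketch; the transactional id is a placeholder, and each line applies to the role named in its comment):

```properties
# Kafka Streams: a single setting enables idempotent producers
# plus transactions end-to-end.
processing.guarantee=exactly_once

# Plain producer API equivalent: idempotence plus a transactional id.
enable.idempotence=true
transactional.id=my-transactional-producer

# Consumers reading transactional output should skip aborted records.
isolation.level=read_committed
```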