The open files limit on the system was set to 10 million and the per-user limit
is 1 million. When the process was active I was closely watching the open files
count, and it was around 400K, so it is well within the set limits. For RocksDB
open files we tried setting 100, 50, and -1, but the results are the same.
I am using RocksDB conf
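(A quick way to sanity-check numbers like the ones above on a Linux host; this
is a sketch, and the process name MyStreamsApp is a placeholder:)

  # Per-user soft limit for open files
  ulimit -n

  # Descriptors currently held by the Streams JVM
  # (MyStreamsApp is a placeholder for your process name)
  ls /proc/$(pgrep -f MyStreamsApp)/fd | wc -l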
I believe the "resource temporarily unavailable" is actually related to the
open files; most likely you are hitting the total file descriptor limit.
Sorry if you mentioned this and I missed it, but what was the
max.open.files in your RocksDBConfigSetter when you ran this? Actually,
could you just i
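(For readers hitting the same thing, a RocksDBConfigSetter that caps RocksDB's
open files is usually wired up along these lines. This is a minimal sketch; the
class name and the value 50 are illustrative, not recommendations:)

  import java.util.Map;
  import org.rocksdb.Options;
  import org.apache.kafka.streams.state.RocksDBConfigSetter;

  public class BoundedOpenFilesConfigSetter implements RocksDBConfigSetter {
      @Override
      public void setConfig(final String storeName, final Options options,
                            final Map<String, Object> configs) {
          // A positive value caps file descriptors per RocksDB instance;
          // -1 means unlimited.
          options.setMaxOpenFiles(50);
      }
  }

  // Registered via the streams configuration:
  // props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
  //           BoundedOpenFilesConfigSetter.class);

Note that each windowed store segment is its own RocksDB instance, so the cap
applies per instance, not per application; an application with many segments
can still hold a large number of descriptors in total.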
Thanks for reporting this, Kalyani, we'll take a look.
By any chance, can you provide log files?
Thanks,
Bill
On Mon, Jul 8, 2019 at 7:43 AM kalyani yarlagadda <
kalyani.yarlagad...@gmail.com> wrote:
> Hi,
>
> I need assistance in the below scenario. Please help me with this.
>
> I am using the hopping time window
Hi Javier,
Your theory could be correct, but it's hard to say exactly without looking
at some more information.
Can you provide your streams configuration and logs (both streams and
broker).
Thanks,
Bill
On Thu, Jul 11, 2019 at 2:55 AM Javier Arias Losada
wrote:
> Hello there,
>
> I managed to
Hi Piotr,
Thanks for reporting this issue. Can you provide full kafka-streams and
broker logs around the timeframe you observed this?
-Bill
On Thu, Jul 11, 2019 at 8:53 AM Piotr Strąk wrote:
> Hello,
>
> I'm investigating an issue in which a Kafka Streams application does not
> consume from one of the partitions it was assigned.
In addition to the session timeout, try increasing the request timeout as well.
We had a similar issue and resolved it by increasing the timeouts. As per my
understanding, if you have a complex topology then it will take some time for
the Kafka brokers to create the tasks and assign them to consumers. In
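(For illustration only, the settings mentioned above look like this in the
streams properties; the values are examples, not recommendations:)

  import java.util.Properties;

  public class TimeoutExample {
      public static Properties timeouts() {
          final Properties props = new Properties();
          // How long the broker waits for a heartbeat before evicting the member
          props.put("session.timeout.ms", "30000");
          // How long the client waits for a broker response before retrying
          props.put("request.timeout.ms", "60000");
          // Keep heartbeats well below the session timeout (commonly ~1/3 of it)
          props.put("heartbeat.interval.ms", "10000");
          return props;
      }
  }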
I don't know any way other than manually syncing ZooKeeper ensembles plus
brokers in the new world :(
In my example below, I have a three-node ZooKeeper ensemble with 9 Kafka
brokers.
First of all, the ZooKeeper conf file under $ZOOKEEPER_HOME/conf/zoo.cfg has
to match across all nodes:
tickTime
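(Roughly, these are the zoo.cfg entries that have to agree across the ensemble;
the hosts and paths below are placeholders:)

  # Must be identical on every ensemble member
  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper
  clientPort=2181
  # One server.N line per member, same list on every node
  server.1=zk1.example.com:2888:3888
  server.2=zk2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888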
Hi Boyang,
Thanks for the quick response.
We are on version 2.2.0.
We are using the following properties on KStreams/consumer:
session.timeout.ms=15000
heartbeat.interval.ms=3000
I was wondering if a member might leak if it satisfies the "shouldKeepAlive"
condition in "onExpireHeartbeat" and the co
Thanks for your reply.
Yes, you can assume that nothing is shared between clusters.
There is no specific topic. I'm trying to establish general patterns that
can be applied to achieve this.
Thanks,
Elliot.
On Thu, 11 Jul 2019 at 10:50, Mich Talebzadeh
wrote:
> Hi Elliot,
>
> As you are movin
Hello,
I'm investigating an issue in which a Kafka Streams application does not
consume from one of the partitions it was assigned. I'm using the 2.3.0
version.
All the fetch requests are sent for two partitions only:
> Using older server API v6 to send FETCH
{replica_id=-1,max_wait_time=50
Hi Elliot,
As you are moving the topic from one cluster to another, I assume it
implies a new ZooKeeper ensemble plus a new set of brokers?
Can you describe the current topic?
${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper <host1:2181>,<host2:2181>,<host3:2181> --topic <topic_name>
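(On 2.2+ brokers the same describe works directly against the brokers, without
going through ZooKeeper; the host and topic are placeholders:)

  ${KAFKA_HOME}/bin/kafka-topics.sh --describe \
    --bootstrap-server broker1:9092 --topic <topic_name>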
HTH
Dr Mich Talebzadeh
Hello,
I'd like to understand what strategies can be applied to migrate events,
producers, and consumers from one topic to another. Typically I'm thinking
of cases where:
- we wish to migrate a topic from one cluster to another
- high availability - minimise the amount of time the topic is unavailable