Phil. The behavior you mentioned sounds like that processor pulled flow
files from the queue but had not yet transferred them anywhere. If you see
that again, I strongly recommend gathering a thread dump.
Joe
On Wed, Sep 1, 2021 at 7:56 PM Phil H wrote:
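For context on Joe's suggestion: NiFi can write a thread dump itself via `bin/nifi.sh dump <file>`, and one can also be captured with `jstack <pid>` or programmatically from the JDK. A minimal JDK-only sketch (the class name is illustrative, not from this thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadDumpSketch {
    // Collect a textual dump of all live threads, including lock and
    // monitor info, similar in spirit to what "nifi.sh dump" reports.
    public static String capture() {
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : ManagementFactory.getThreadMXBean()
                .dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(capture());
    }
}
```

In a stuck-queue situation, look for the processor's threads in the dump to see whether they are blocked, waiting, or spinning.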
Hi Joe,
It’s a custom one, but it is effectively just a routing filter component (read
the data, send the flow file out on relationship A or B based on what it
finds). Nothing exotic in terms of how it interacts with the flowfiles.
After restarting all nodes, the queue worked normally again.
Phil
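Outside the NiFi API, the routing filter Phil describes reduces to a predicate that sends each payload to one of two relationships. A toy sketch (the relationship names follow the thread; the "ALERT" predicate is invented for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoutingFilterSketch {
    // Hypothetical stand-ins for the processor's two relationships.
    enum Relationship { A, B }

    // Read each payload and route it to relationship A or B based on
    // what it contains, mirroring the routing filter described above.
    static Map<Relationship, List<String>> route(List<String> payloads) {
        Map<Relationship, List<String>> routed = new HashMap<>();
        routed.put(Relationship.A, new ArrayList<>());
        routed.put(Relationship.B, new ArrayList<>());
        for (String p : payloads) {
            Relationship rel = p.contains("ALERT") ? Relationship.A : Relationship.B;
            routed.get(rel).add(p);
        }
        return routed;
    }

    public static void main(String[] args) {
        System.out.println(route(List.of("ALERT: disk full", "heartbeat")));
    }
}
```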
Which processor reads from the queue that appears to be stuck?
Thanks
On Wed, Sep 1, 2021 at 3:51 PM Phil H wrote:
And once reconnected again, no data passes through that queue - it all just
piles up there (the queue count matching the number of items sent into the
cluster). However, if I try to list the queue, it claims there are no files
in it. Very, very confused!
On Thu, 2 Sep 2021 at 08:39, Phil H wrote:
Okay, found the offload, but the data is still stuck on the “offloaded”
node, in a “single node” queue (I am bringing the data to a single node to
deduplicate multiple parallel inputs).
If I refresh the UI, I can see the missing items numbered in the queue, but
can’t open the queue because the oth
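The single-node deduplication Phil mentions is typically done in NiFi with the DetectDuplicate processor backed by a cache service; conceptually it reduces to a first-wins seen-set. A toy sketch of that idea (not NiFi API code):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DedupSketch {
    // Keep the first occurrence of each key and drop later duplicates,
    // as a funnel-everything-to-one-node dedup step would.
    static List<String> dedupe(List<String> keys) {
        Set<String> seen = new LinkedHashSet<>();
        List<String> kept = new ArrayList<>();
        for (String k : keys) {
            if (seen.add(k)) {   // add() returns false for repeats
                kept.add(k);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        System.out.println(dedupe(List.of("x", "y", "x")));
    }
}
```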
Thanks Shawn, I am using 13.2 in this instance. I can see the disconnect
option in the Cluster control UI, but no mention of offloading data. Where
should I be looking?
On Thu, 2 Sep 2021 at 08:20, Shawn Weeks wrote:
Further to my previous email, I note that if I DO kill -9 the NiFi process
on one of my cluster members, the other cluster members also stop reading
data via their GetTCP processors. Why is this? It would seem to undermine
one of the major reasons to have a cluster.
On Thu, 2 Sep 2021 at 07:36, P
On newer versions there is an option in the UI to Offload the data if you have
NiFi's cluster load balancing setup. Then you'd disconnect the node and shut it
down.
Thanks
Shawn
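The UI Offload action Shawn mentions corresponds to the cluster REST API (a PUT against the node entry with an OFFLOADING status); the NiFi Toolkit CLI also exposes an `offload-node` command. A sketch that builds, but does not send, such a request; the endpoint path and JSON body here are assumptions from the REST API, so verify them against your version's docs:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class OffloadRequestSketch {
    // Build (but do not send) the REST call the "Offload" UI button issues.
    // Path and payload shape are assumptions; check the NiFi REST API docs.
    static HttpRequest build(String baseUrl, String nodeId) {
        String body = "{\"node\":{\"nodeId\":\"" + nodeId
                + "\",\"status\":\"OFFLOADING\"}}";
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/nifi-api/controller/cluster/nodes/" + nodeId))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("http://localhost:8080", "node-1");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Offloading only moves queued data off the node when load-balanced connections are configured, which matches Shawn's caveat above.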
-----Original Message-----
From: Phil H
Sent: Wednesday, September 1, 2021 4:36 PM
To: dev@nifi.apache.org
Subject:
Hi there,
I am noticing a number of situations where shutting down one node in a
cluster is leaving data stranded in the flows on that shut down server.
Is there any way to tell NiFi to ship data off to other cluster members
before it shuts down? Note I am restarting via the nifi.sh script, not
Perhaps we can have a JIRA label and PR naming convention for things that
fall under this as well, to set consistent expectations for review and work
From: Kevin Doran
Reply: dev@nifi.apache.org
Date: August 31, 2021 at 18:06:48
To: dev@nifi.apache.org
Subject: Re: Flood of JIRAs and presu
Hi Dev Team,
I have tested the different compression types, a feature of the PutParquet and
ConvertAvroToParquet processors, on different NiFi versions.
To avoid confusion, I will give some summary information:
* The compression types (UNCOMPRESSED, GZIP, SNAPPY) of the PutParquet processor work
correctly