Re: Primary node vs shutdown

2021-09-01 Thread Joe Witt
Phil, the behavior you mentioned sounds like that processor pulled flow files from the queue but had not yet transferred them anywhere. If you see that again I strongly recommend you gather a thread dump. Joe
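[Editor's note] Joe's thread-dump suggestion can be followed with the script bundled in every NiFi install, or with standard JDK tools. A minimal sketch, assuming a default install layout; the paths and PID are illustrative:

```shell
# From the NiFi install directory: write a thread dump of the running
# NiFi JVM to a file (nifi.sh reads the PID from the run directory).
./bin/nifi.sh dump thread-dump.txt

# Alternative with plain JDK tooling, if you know the NiFi JVM's PID:
jstack <nifi-pid> > thread-dump.txt
```

In the dump, look for a thread sitting inside the suspect processor's onTrigger; that shows whether it pulled flow files and then blocked before transferring them.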

Re: Primary node vs shutdown

2021-09-01 Thread Phil H
Hi Joe, It’s a custom one, but it is effectively just a routing filter component (it reads the data and sends the flow file out on relationship A or B based on what it finds). Nothing exotic in terms of how it interacts with the flow files. After restarting all nodes, the queue worked normally again.
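[Editor's note] A minimal sketch of the kind of routing filter Phil describes, assuming the NiFi 1.x processor API (nifi-api) is on the classpath. The relationship names match the thread; the class name and the content check ("MARKER") are illustrative, not from the original processor. The comment on session.transfer ties back to Joe's diagnosis: a flow file pulled with session.get() but never transferred stays in flight, and the queue count never drops.

```java
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

// Sketch only: getRelationships(), property descriptors, and error
// handling are omitted for brevity.
public class RouteFilterSketch extends AbstractProcessor {

    static final Relationship REL_A = new Relationship.Builder().name("A").build();
    static final Relationship REL_B = new Relationship.Builder().name("B").build();

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session)
            throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return; // nothing queued
        }

        final boolean[] match = new boolean[1];
        session.read(flowFile, in -> {
            // Hypothetical content check; the real processor's logic is unknown.
            match[0] = new String(in.readAllBytes()).contains("MARKER");
        });

        // Critical: every flow file obtained via session.get() must be
        // transferred (or removed). If onTrigger returns without doing so,
        // the flow file remains "in flight" and the upstream queue appears
        // stuck -- the symptom described in this thread.
        session.transfer(flowFile, match[0] ? REL_A : REL_B);
    }
}
```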

Re: Primary node vs shutdown

2021-09-01 Thread Joe Witt
Phil, what processor reads from that queue that appears unmoving? Thanks

Re: Primary node vs shutdown

2021-09-01 Thread Phil H
And once reconnected again, no data passes that queue - it all just piles up there (the queue count matching the number of items sent into the cluster). However, if I try to list the queue, it claims there are no files in it. Very very confused!

Re: Primary node vs shutdown

2021-09-01 Thread Phil H
Okay, found the offload, but the data is still stuck on the “offloaded” node, in a “single node” queue (I am bringing the data to a single node to deduplicate multiple parallel inputs). If I refresh the UI, I can see the missing items numbered in the queue, but can’t open the queue because the oth

Re: Primary node vs shutdown

2021-09-01 Thread Phil H
Thanks Shawn, I am using 13.2 in this instance. I can see the disconnect option in the Cluster control UI, but no mention of offloading data. Where should I be looking?

Re: Primary node vs shutdown

2021-09-01 Thread Phil H
Further to my previous email, I note that if I DO kill -9 the NiFi process on one of my cluster members, the other cluster members also stop reading data via their GetTCP processors … why is this? It would seem to undermine one of the major reasons to have a cluster?

RE: Primary node vs shutdown

2021-09-01 Thread Shawn Weeks
On newer versions there is an option in the UI to offload the data if you have NiFi's cluster load balancing set up. Then you'd disconnect the node and shut it down. Thanks Shawn
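[Editor's note] The same disconnect-then-offload sequence Shawn describes can also be scripted with the NiFi Toolkit CLI. A hedged sketch; the base URL, node id placeholder, and paths are illustrative, and a node must be disconnected before it can be offloaded:

```shell
# List cluster nodes to find the node id of the member to shut down.
./bin/cli.sh nifi get-nodes -u http://nifi-host:8080

# Disconnect the node, then offload its queued flow files to the rest
# of the cluster.
./bin/cli.sh nifi disconnect-node -u http://nifi-host:8080 -nid <node-id>
./bin/cli.sh nifi offload-node    -u http://nifi-host:8080 -nid <node-id>

# Once offloading completes, it is safe to stop NiFi on that node.
./bin/nifi.sh stop
```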

Primary node vs shutdown

2021-09-01 Thread Phil H
Hi there, I am noticing a number of situations where shutting down one node in a cluster leaves data stranded in the flows on that shut-down server. Is there any way to tell NiFi to ship data off to other cluster members before it shuts down? Note I am restarting via the nifi.sh script, not

Re: Flood of JIRAs and presumably PRs to follow for junit-5 migration?

2021-09-01 Thread Otto Fowler
Perhaps we can have a JIRA label and PR naming convention for things that fall under this as well, to set consistent expectations for review and work.

RE: PutParquet - Compression Type: SNAPPY (NiFi 1.14.0)

2021-09-01 Thread Bilal Bektas
Hi Dev Team, I have tested the different compression types offered by the PutParquet and ConvertAvroToParquet processors on different NiFi versions. To avoid confusion, I will give a summary: * Compression types (UNCOMPRESSED, GZIP, SNAPPY) of the PutParquet processor work correctly