Interestingly, this did not resolve the issue. After another pod restart,
I have the same error at a MergeContent processor. I verified that these
settings are all true.
org.apache.nifi.processor.exception.FlowFileAccessException: Could not read
from
StandardFlowFileRecord[uuid=b38e063f-23a4-47f
Good idea. I've made the change, and will report back if I see any more
issues.
On Fri, Feb 14, 2020 at 2:04 PM Joe Witt wrote:
> No it should not. But if you want to rule out underlying storage not
> getting the writes actually written you can use
>
> nifi.flowfile.repository.always.sync=true
No it should not. But if you want to rule out underlying storage not
getting the writes actually written you can use
nifi.flowfile.repository.always.sync=true
nifi.content.repository.always.sync=true
nifi.provenance.repository.always.sync=true
That will impact performance as it means we force sy
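For context, those always.sync flags live in conf/nifi.properties. A minimal sketch of one way to carry such overrides in a Kubernetes deployment, assuming the properties file is assembled from a ConfigMap (the ConfigMap name and key below are hypothetical, not taken from this thread):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nifi-sync-overrides        # hypothetical name
data:
  # Fragment only; a real nifi.properties contains many more keys.
  sync-overrides.properties: |
    nifi.flowfile.repository.always.sync=true
    nifi.content.repository.always.sync=true
    nifi.provenance.repository.always.sync=true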
Another data point: I noticed that the only nodes that seem to have this
error are ones with recent pod restarts. Perhaps this is just a risk if
the NiFi process suddenly restarts?
In the log, I only see normal log statements, and then suddenly:
2020-02-13 18:20:06,810 INFO [Heartbeat Monitor Th
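If abrupt pod restarts turn out to be the trigger, one possible mitigation (not discussed in this thread; a sketch only, with assumed names, image tag, and values) is to give NiFi a graceful shutdown window before the kubelet sends SIGKILL:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi                       # hypothetical name
spec:
  serviceName: nifi
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      # Allow time for NiFi to checkpoint and close its repositories on shutdown.
      terminationGracePeriodSeconds: 120
      containers:
        - name: nifi
          image: apache/nifi:1.11.1
          lifecycle:
            preStop:
              exec:
                # Path assumes the stock apache/nifi image layout.
                command: ["bash", "-c", "/opt/nifi/nifi-current/bin/nifi.sh stop"]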
Sure, sorry for the delay.
It's an AWS EBS persistent volume. Here are the PV and storage class
objects. It's single-zone (us-east-1d):
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controlle
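The PV object above is cut off in the archive. For reference only, a single-zone gp2 EBS StorageClass of that era typically looks something like the following; the name, binding mode, and topology key are assumptions, not the poster's actual object:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-single-zone            # hypothetical name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
# Delay binding until a pod is scheduled, then restrict provisioning to one zone.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone   # topology.kubernetes.io/zone on newer clusters
        values:
          - us-east-1d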
The more I think about this, the more I believe the PV had to be
provisioned as some kind of network-attached storage, and the exception had to
be happening because of intermittent network issues with this NAS.
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Monday,
Hi,
What cloud are you running this on?
Kubernetes version?
Can you share the storageClass you use for creating the repo PVCs?
What is the topology of the cluster? Multi-zone?
Best regards,
Endre
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Monday, February 10, 2020
The more I look at the error, the more it looks like the container couldn't
access the persistent volume for some period of time. Can you share your YAML
configuration, minus sensitive stuff, and what environment you're deploying K8s
on, e.g. Azure, ECS, Rancher, OpenShift, etc.?
Thanks
Shawn
On 2/
I've been testing out NiFi on K8s quite a bit lately and might be able to help.
What are you using for your persistent volume? This kind of sounds like NiFi
can't always access data in the content_repository folder.
Thanks
Shawn
On 2/10/20, 11:53 AM, "Joe Witt" wrote:
...sent a little too
...sent a little too quickly. Also, it seems like if it is related to your k8s
process, the issue would occur on older NiFi releases too. So that
might be an interesting test for you.
On Mon, Feb 10, 2020 at 9:52 AM Joe Witt wrote:
> Joe
>
> If you can replicate this in a non k8s environment tha
Joe
If you can replicate this in a non-k8s environment, that would likely make
resolution easier. Far too many variables/questions at this stage.
Thanks
On Mon, Feb 10, 2020 at 8:15 AM Joe Gresock wrote:
> I don't know if there is something wrong with my Kubernetes-ized deployment
> of NiFi, b
I don't know if there is something wrong with my Kubernetes-ized deployment
of NiFi, but I'm seeing the following error crop up a lot in my NiFi 1.11.1
deployment. I can see the flow file attributes in "List Queue" for flow
files affected by this problem, but the content claim is missing.
It's no