A bit off topic: I just upgraded the ceph test cluster to 7.6, and my
syslog servers are flooded with these messages:
pam_unix(sudo:session): session opened for user root
Does anyone know how to get rid of these?
___
ceph-users mailing list
ceph-users@lists.c
Ignore, solved it. I had to change the sudoers Defaults line from
Defaults !syslog
to
Defaults !syslog,!pam_session
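For reference, a minimal sketch of that change, assuming the line lives in /etc/sudoers (it could also be in a file under /etc/sudoers.d/):

```
# /etc/sudoers -- assumed location; always edit with visudo, never directly.
# !syslog stops sudo from logging to syslog itself;
# !pam_session disables sudo's PAM session handling, which is what
# produces the "pam_unix(sudo:session): session opened ..." lines.
Defaults !syslog,!pam_session
```

Note that !pam_session also skips any other PAM session modules configured for sudo, not just the logging, so check your PAM stack before applying this on production hosts.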
Is it normal or expected that an LVM volume can show high utilization
while the disk the logical volume sits on does not? Or do I still need
to apply custom optimizations for the Ceph RBD backend?
https://www.redhat.com/archives/linux-lvm/2013-October/msg00022.html
Atop:
LVM | Groot-LVroot | busy 74%
I've also seen this behavior sometimes (on real hardware without VMs
or Ceph involved).
Somewhat related: please don't use legacy VirtIO block devices
(virtio-blk); they suck for various reasons: slow, no support for
TRIM/discard, ...
Use a VirtIO SCSI controller instead.
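A minimal sketch of what that looks like on the QEMU command line (the image path, format, and device IDs are placeholders; with libvirt you would instead set the disk bus to "scsi" and add a virtio-scsi controller in the domain XML):

```shell
# Attach disk.img through a VirtIO SCSI controller instead of virtio-blk.
# discard=unmap passes guest TRIM/discard requests through to the backing store.
qemu-system-x86_64 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=disk.img,if=none,id=drive0,format=raw,discard=unmap \
  -device scsi-hd,drive=drive0,bus=scsi0.0
```

Inside the guest the disk then appears as a SCSI device (e.g. /dev/sda) rather than a virtio-blk device (/dev/vda).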
Paul
--
Paul Emmerich
On Sun, Jan 13, 2019 at 1:43 PM Adam Tygart wrote:
>
> Restarting the nodes causes the hanging again. This means that this is
> workload dependent and not a transient state.
>
> I believe I've tracked down what is happening. One user was running
> 1500-2000 jobs in a single directory with 92000+ f