Hi,
on 2019/8/19 16:10, fengyd wrote:
I think when reading from or writing to a volume/image, a TCP connection needs to be
established, which needs an FD, so the FD count may increase.
But after the reading/writing is done, why doesn't the FD count decrease?

The TCP connections may be long-lived.
on 2019/8/20 9:54, fengyd wrote:
I checked the FD information with the command "ls -l /proc/25977/fd" //
here 25977 is the QEMU process.
I found that the creation timestamp of the FD was not changed, but the
socket information to which the FD was linked had changed.
So I guess the FD is reused.
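For anyone who wants to reproduce that check, here is a minimal Python sketch, a
rough equivalent of "ls -l /proc/<pid>/fd". It assumes Linux /proc and permission
to read the target process's fd directory; the pid argument is whatever QEMU
process you are inspecting:

    #!/usr/bin/env python3
    """Print each fd of a process with its ctime and link target."""
    import os
    import sys
    import time

    pid = sys.argv[1] if len(sys.argv) > 1 else "self"   # e.g. 25977
    fd_dir = f"/proc/{pid}/fd"

    for name in sorted(os.listdir(fd_dir), key=int):
        entry = os.path.join(fd_dir, name)
        try:
            st = os.lstat(entry)            # metadata of the fd entry itself
            target = os.readlink(entry)     # e.g. "socket:[3141592]"
        except OSError:
            continue                        # fd may have closed while we looked
        ctime = time.strftime("%H:%M:%S", time.localtime(st.st_ctime))
        print(f"fd {name:>4}  ctime {ctime}  -> {target}")

Run it twice against the same pid: if an fd's ctime is unchanged but the socket
inode in the link target differs, that descriptor slot was closed and reused for
a new connection, which matches the observation above.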
Hi
on 2019/8/20 10:30, fengyd wrote:
If the creation timestamp of the FD is not changed, but the socket
information to which the FD is linked has changed, it means a new TCP
connection was established.
If there's no reading/writing ongoing, why is a new TCP connection still
established, and the FD count…
on 2019/8/20 10:57, fengyd wrote:
"Long connections" means that a new TCP connection to the same targets
is re-established after a timeout?

Yes: once it times out, it reconnects.
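That would also explain why the count stays flat after a reconnect: POSIX
requires the kernel to hand out the lowest-numbered free descriptor, so a new
socket opened right after a close typically lands on the same fd number. A tiny
self-contained demonstration (plain Python sockets, no Ceph involved):

    import socket

    # Open a socket, remember its descriptor number, then close it.
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    old_fd = s1.fileno()
    s1.close()

    # The next socket gets the lowest free descriptor: the one just closed.
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print(old_fd, s2.fileno())  # same fd number, different underlying socket
    s2.close()

So the fd number and its directory-entry timestamp persist while the socket
behind it changes, exactly as observed in /proc.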
Hi
on 2019/8/20 11:00, fengyd wrote:
I think you're right.

I am not so sure about it. But I think the Ceph client always wants to know
the cluster's topology, so it needs to communicate with the cluster all the
time. The big difference between Ceph and other distributed storage systems
is that clients participate in data placement themselves: they compute
object locations via CRUSH, so they have to keep the cluster maps current.
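To see that an otherwise idle librados client keeps its sessions open, here is a
rough sketch. It assumes python3-rados is installed and /etc/ceph/ceph.conf
points at a reachable cluster; the exact counts you see will depend on your
mon/OSD layout:

    import os
    import time

    import rados  # python3-rados, packaged with Ceph

    def fd_count():
        return len(os.listdir("/proc/self/fd"))

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    print("fds before connect:", fd_count())

    cluster.connect()               # opens messenger sessions to the mons
    print("fds after connect: ", fd_count())

    time.sleep(60)                  # do no I/O at all
    print("fds after idling:  ", fd_count())  # typically still elevated

    cluster.shutdown()

The point is that the socket FDs stay open even with no reads or writes in
flight, because the client keeps subscribing to map updates.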
Hi
on 2019/8/21 20:25, Gesiel Galvão Bernardes wrote:
I'm using QEMU/KVM (OpenNebula) with Ceph/RBD for running VMs, and I'm
having problems with slowness in applications that often are not consuming
much CPU or RAM. This problem affects mostly Windows. Apparently
the problem is that normally the application…