Hi,

Please share some details about your cluster (especially the hardware):

- How many OSDs are there? How many disks per OSD machine?
- Do you use dedicated (SSD) OSD journals?
- RAM size, CPU model, network card model/bandwidth
- Do you have a dedicated cluster network?
- How many VMs (in the whole cluster) are running?
- The total number (in the whole cluster) of attached rbd images
- The total number (in the whole cluster) of concurrent writers
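
If it's easier, the output of a few standard commands would cover most of
the above (the "volumes" pool and the eth0 interface below are
placeholders; adjust them to your setup):

    # on a monitor node
    ceph -s
    ceph osd tree
    rbd -p volumes ls | wc -l   # rough count of images in the "volumes" pool
    # on an OSD node and on a hypervisor
    free -m
    grep -m1 "model name" /proc/cpuinfo
    ethtool eth0 | grep Speed   # link speed of the (placeholder) eth0 NIC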

Also,
- Does the same problem occur when multiple threads/processes are writing
into a single rbd image? (A test sketch follows below.)
- Is there anything "interesting" in the qemu, kernel (on both the
hypervisors and the OSD nodes), or OSD logs?
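
For the single-image question, a quick test inside one VM could look like
this (a sketch only; /dev/vdb stands for a single attached rbd volume of
at least ~10 GB):

    # four concurrent writers to disjoint 1 GB regions of the same device
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/dev/vdb bs=1M count=1024 seek=$((i * 2048)) oflag=direct &
    done
    wait

If that also hangs, the problem is not specific to writing multiple
images at once.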

Best regards,
      Alexey


On Wed, Nov 16, 2016 at 9:10 AM, <mehul1.j...@ril.com> wrote:

> Hi All,
>
>
>
> We have a Ceph storage cluster integrated with our OpenStack private
> cloud.
>
> We have created a pool for volumes, which allows our OpenStack private
> cloud users to create a volume from an image and boot from the volume.
>
> Additionally, our images (both Ubuntu 14.04 and CentOS 7) are in raw
> format.
>
>
>
> One of our use cases is to attach multiple volumes in addition to the
> boot volume.
>
> We have observed that when we attach multiple volumes and write to them
> simultaneously, for example via the dd command, all of these processes
> go into the D state (uninterruptible sleep).
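>
> For reference, the simultaneous writes look roughly like this (device
> names and sizes are illustrative, not our exact invocation):
>
>     # two attached non-boot volumes inside the VM
>     dd if=/dev/zero of=/dev/vdb bs=1M count=4096 oflag=direct &
>     dd if=/dev/zero of=/dev/vdc bs=1M count=4096 oflag=direct &
>     wait
>     # the dd processes soon show state "D" in ps output:
>     ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'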
>
> We can also see in the vmstat output that the “bo” values trickle down
> to zero.
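>
> (For context, "bo" in vmstat is blocks written out to block devices; we
> watch it with a plain "vmstat 1", and the column drops toward zero while
> the dd processes hang.)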
>
> We have checked the network utilization on the compute node, which does
> not show any issues.
>
>
>
> Finally, after a while the VM becomes unresponsive and the only way to
> recover is to reboot it.
>
>
>
> Our version details are as follows:
>
>
>
> Ceph version: 0.80.7
>
> Libvirt version: 1.2.2
>
> OpenStack version: Juno (Mirantis 6.0)
>
>
>
> Please let me know if anyone has faced a similar issue or has any
> pointers.
>
>
>
> Any direction will be helpful.
>
>
>
> Thanks,
>
> Mehul
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
