When an OSD gets full, Ceph blocks further writes to the cluster.
You need to rebalance the OSDs first.
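As a rough sketch of that rebalancing step (command names are standard Ceph CLI, but exact output and the OSD id to reweight depend on your cluster; osd.1 below is just the full OSD from your df listing):

```shell
# Show per-OSD utilization to confirm which OSD is full
ceph osd df

# Ceph stops writes once any OSD crosses the full ratio
# (mon_osd_full_ratio, 0.95 by default); health output names the full OSD
ceph health detail

# Lower the reweight of the full OSD so PGs migrate off it
# (assuming osd.1 is the full one; 0.8 is an illustrative value)
ceph osd reweight 1 0.8

# Or let Ceph adjust reweights automatically from utilization
ceph osd reweight-by-utilization
```

Note that in your test the OSD filled up from a file written outside Ceph, so reweighting only helps once that space is actually freed.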

Jan

> On 09 Oct 2015, at 10:38, 陈积 <che...@letv.com> wrote:
> 
> Hi,
> I have deployed a small Ceph cluster on my computer and ran the test below.
> First, I filled /dev/sdb manually by putting an 8.3 GB file on /mnt/tmp,
> and then tried to write a 300 MB file through striprados, which is a Ceph
> client using libradosstriper. The write failed.
> 
> So, does Ceph have any way to make full use of the space left on the other
> disks in this case?
> 
> 
> [sunfch@node1 tmp]$ df -h 
> Filesystem Size Used Avail Use% Mounted on 
> /dev/mapper/VolGroup-lv_root 27G 21G 4.5G 82% / 
> tmpfs 940M 88K 940M 1% /dev/shm 
> /dev/vda1 485M 40M 421M 9% /boot 
> /dev/sda 10G 1.9G 8.2G 19% /var/lib/ceph/osd/ceph-0 
> /dev/sdb 10G 10G 7.3M 100% /var/lib/ceph/osd/ceph-1 
> /dev/sdc 10G 1.8G 8.3G 18% /var/lib/ceph/osd/ceph-2 
> /dev/sdd 9.8G 1.9G 7.9G 20% /var/lib/ceph/osd/ceph-3 
> /dev/sde 9.8G 1.8G 8.0G 19% /var/lib/ceph/osd/ceph-4 
> /dev/sdf 9.8G 1.8G 8.1G 18% /var/lib/ceph/osd/ceph-5 
> /dev/sdb 10G 10G 7.3M 100% /mnt/tmp 
> 
> [sunfch@node1 tmp]$ striprados -p chenji -u 300m 300M 
> set up a rados cluster object 
> connected to the rados cluster 
> created an ioctx for our pool 
> created a striper for our pool
> 
> fail
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
