OSDs of different sizes are used for different tasks, such as caching. My concern is the 4TB OSDs used as a storage pool: the space used on them is not the same.

 

/dev/sdf1       4.0T  1.7T  2.4T  42% /var/lib/ceph/osd/ceph-4
/dev/sdd1       4.0T  1.7T  2.4T  41% /var/lib/ceph/osd/ceph-2
/dev/sdb1       4.0T  1.9T  2.2T  46% /var/lib/ceph/osd/ceph-0
/dev/sde1       4.0T  2.1T  2.0T  51% /var/lib/ceph/osd/ceph-3
/dev/sdc1       4.0T  1.8T  2.3T  45% /var/lib/ceph/osd/ceph-1

 

For example, /dev/sdd1 is at 41% while /dev/sde1 is at 51%, yet they belong to the same pool.

Is there perhaps an option that can be used to fill the OSDs evenly?

 

From: John Petrini [mailto:jpetr...@coredial.com] 
Sent: Friday, December 02, 2016 1:04 PM
To: Волков Павел (Мобилон)
Cc: ceph-users
Subject: Re: [ceph-users] ceph - even filling disks

 

You can reweight the OSDs either automatically based on utilization (ceph osd 
reweight-by-utilization) or by hand.

 

See: 

https://ceph.com/planet/ceph-osd-reweight/

http://docs.ceph.com/docs/master/rados/operations/control/#osd-subsystem
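
A minimal sketch (the 120% threshold and osd.3 below are only illustrations, not 
values from your cluster):

    # preview the proposed weight changes without applying them
    ceph osd test-reweight-by-utilization
    # reweight OSDs whose utilization is above 120% of the average
    ceph osd reweight-by-utilization 120
    # or lower a single OSD's weight by hand so it receives less data
    ceph osd reweight 3 0.90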

 

It's probably not ideal to have OSDs of such different sizes on a node.
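
If you do keep mixed sizes, note that each OSD's CRUSH weight (by default roughly 
its size in TiB) determines how much data CRUSH directs to it. A rough sketch, 
assuming osd.5 is one of the ~395G disks:

    # list OSDs with their CRUSH weights
    ceph osd tree
    # permanently change how much data CRUSH sends to an OSD (weight ~ size in TiB)
    ceph osd crush reweight osd.5 0.36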




___

John Petrini

NOC Systems Administrator   //   CoreDial, LLC   //   coredial.com <http://coredial.com/>
Twitter <https://twitter.com/coredial>   //   LinkedIn <http://www.linkedin.com/company/99631>   //   Google Plus <https://plus.google.com/104062177220750809525/posts>   //   Blog <http://success.coredial.com/blog>
Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422 
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com

 

The information transmitted is intended only for the person or entity to which 
it is addressed and may contain confidential and/or privileged material. Any 
review, retransmission,  dissemination or other use of, or taking of any action 
in reliance upon, this information by persons or entities other than the 
intended recipient is prohibited. If you received this in error, please contact 
the sender and delete the material from any computer.

 

On Fri, Dec 2, 2016 at 12:36 AM, Волков Павел (Мобилон) <vol...@mobilon.ru> 
wrote:

Good day.

I have set up a Ceph storage cluster and created several pools on 4TB HDDs. My 
problem is that the HDDs are filling unevenly.

 

root@ceph-node1:~# df -H

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       236G  2.7G  221G   2% /
none            4.1k     0  4.1k   0% /sys/fs/cgroup
udev             30G  4.1k   30G   1% /dev
tmpfs           6.0G  1.1M  6.0G   1% /run
none            5.3M     0  5.3M   0% /run/lock
none             30G  8.2k   30G   1% /run/shm
none            105M     0  105M   0% /run/user
/dev/sdf1       4.0T  1.7T  2.4T  42% /var/lib/ceph/osd/ceph-4
/dev/sdg1       395G  329G   66G  84% /var/lib/ceph/osd/ceph-5
/dev/sdi1       195G  152G   44G  78% /var/lib/ceph/osd/ceph-7
/dev/sdd1       4.0T  1.7T  2.4T  41% /var/lib/ceph/osd/ceph-2
/dev/sdh1       395G  330G   65G  84% /var/lib/ceph/osd/ceph-6
/dev/sdb1       4.0T  1.9T  2.2T  46% /var/lib/ceph/osd/ceph-0
/dev/sde1       4.0T  2.1T  2.0T  51% /var/lib/ceph/osd/ceph-3
/dev/sdc1       4.0T  1.8T  2.3T  45% /var/lib/ceph/osd/ceph-1

 

 

On the test machine, this uneven filling leads to an overflow error and 
subsequently incorrect operation.

How can I make all the HDDs fill equally?



 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
