[ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Waterbly, Dan
Hello, I have inserted 2.45M 1,000-byte objects into my cluster (radosgw, 3x replication). I am confused by the usage ceph df is reporting and am hoping someone can shed some light on this. Here is what I see when I run ceph df GLOBAL: SIZE AVAIL RAW USED %RAW USED 1.0

[ceph-users] Drive for Wal and Db

2018-10-20 Thread Robert Stanford
Our OSDs are BlueStore and are on regular hard drives. Each OSD has a partition on an SSD for its DB. Wal is on the regular hard drives. Should I move the wal to share the SSD with the DB? Regards R
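
For context, when only a DB device is given to ceph-volume, BlueStore places the WAL on that device as well by default; a separate WAL partition is only needed if it should live on yet another, faster device. A minimal sketch, with hypothetical device names:

  # illustrative only; device names are made up
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  # with no --block.wal given, the WAL ends up on the same fast device as the DB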

Re: [ceph-users] A basic question on failure domain

2018-10-20 Thread Maged Mokhtar
On 20/10/18 05:28, Cody wrote: Hi folks, I have a rookie question. Does the number of buckets chosen as the failure domain have to be equal to or greater than the number of replicas (or k+m for erasure coding)? E.g., for an erasure code profile where k=4, m=2, failure domain=rack, does it only w
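
For reference, such a profile needs at least k+m = 6 buckets of the chosen failure-domain type; a sketch with hypothetical profile and pool names:

  # needs at least 6 racks in the CRUSH map for PGs to become active+clean
  ceph osd erasure-code-profile set ec42_rack k=4 m=2 crush-failure-domain=rack
  ceph osd pool create ecpool 128 128 erasure ec42_rack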

Re: [ceph-users] Drive for Wal and Db

2018-10-20 Thread Maged Mokhtar
On 20/10/18 18:57, Robert Stanford wrote: Our OSDs are BlueStore and are on regular hard drives. Each OSD has a partition on an SSD for its DB. Wal is on the regular hard drives. Should I move the wal to share the SSD with the DB? Regards R

[ceph-users] ceph df space usage confusion - balancing needed?

2018-10-20 Thread Oliver Freyermuth
Dear Cephalopodians, as many others, I'm also a bit confused by "ceph df" output in a pretty straightforward configuration. We have a CephFS (12.2.7) running, with 4+2 EC profile. I get: # ceph df GLOBAL: SIZE

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Serkan Çoban
4.65TiB includes size of wal and db partitions too. On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan wrote: > > Hello, > > > > I have inserted 2.45M 1,000 byte objects into my cluster (radosgw, 3x > replication). > > > > I am confused by the usage ceph df is reporting and am hoping someone can > sh

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-20 Thread Janne Johansson
Do mind that drives may have more than one pool on them, so RAW space is just what it says: how much raw space is used or free overall. The AVAIL and %USED in the per-pool stats, on the other hand, take replication into account; they tell you how much data you can write into that particular pool, given that pool's replication or EC s
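
As a worked example of that accounting (illustrative numbers only):

  1 TiB of user data in a 3x replicated pool  ->  ~3.0 TiB RAW USED
  1 TiB of user data in a 4+2 EC pool         ->  1 TiB * (4+2)/4 = ~1.5 TiB RAW USED

The per-pool MAX AVAIL figure already divides the raw free space by this factor.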

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Waterbly, Dan
I get that, but isn’t 4TiB to track 2.45M objects excessive? These numbers seem very high to me. On Sat, Oct 20, 2018 at 10:27 AM -0700, Serkan Çoban wrote: 4.65TiB includes size of wal and db partitions too. On Sat

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-20 Thread Oliver Freyermuth
Dear Janne, yes, of course. But since we only have two pools here, this cannot explain the difference. The metadata is replicated (3 copies) across SSD drives, and we have < 3 TB of total raw storage for that. So looking at the raw space usage, we can ignore that. All the rest is used for t

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-20 Thread Janne Johansson
Yes, if you have uneven sizes I guess you could end up in a situation where you have lots of 1TB OSDs and a number of 2TB OSDs, but pool replication forces the pool to have one PG replica on a 1TB OSD; then it would be possible to state "this pool can't write more than X G", but when it is full, ther

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-20 Thread Oliver Freyermuth
All OSDs are of the very same size. One OSD host has slightly more disks (33 instead of 31), though. So that also can't explain the hefty difference. I attach the output of "ceph osd tree" and "ceph osd df". The crush rule for the ceph_data pool is: rule cephfs_data { id 2
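
The rule itself is cut off in the digest; the full rule and the per-OSD fill levels can be pulled from a running cluster with commands along these lines (pool and rule names taken from the message above):

  ceph osd crush rule dump cephfs_data   # full CRUSH rule as JSON
  ceph osd df tree                       # per-OSD utilisation and variance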

Re: [ceph-users] A basic question on failure domain

2018-10-20 Thread Cody
That was clearly explained. Thank you so much! Best regards, Cody On Sat, Oct 20, 2018 at 1:02 PM Maged Mokhtar wrote: > > > > On 20/10/18 05:28, Cody wrote: > > Hi folks, > > > > I have a rookie question. Does the number of the buckets chosen as the > > failure domain must be equal or greater th

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Jakub Jaszewski
Hi Dan, Did you configure block.wal/block.db as separate devices/partitions (osd_scenario: non-collocated or lvm for clusters installed using ceph-ansible playbooks)? I run Ceph version 13.2.1 with non-collocated data.db and have the same situation - the sum of block.db partitions' size is displa
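
For anyone unfamiliar with that ceph-ansible setting, a non-collocated layout is sketched below (group_vars excerpt; device names are made up, and the same SSD may be repeated to host several DBs):

  osd_scenario: non-collocated
  devices:
    - /dev/sda
    - /dev/sdb
  dedicated_devices:
    - /dev/nvme0n1
    - /dev/nvme0n1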

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-20 Thread Oliver Freyermuth
Ok, I'll try out the balancer at the end of the upcoming week then (after we've fixed a HW issue with one of our mons and the cooling system). Until then, any further advice, and whether upmap is recommended over crush-compat (all clients are Luminous), is welcome ;-). Cheers, Oliver On 20.10.18 at
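
For reference, since all clients are Luminous, upmap mode can be enabled with something like the following (a sketch, not the poster's exact procedure):

  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on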

Re: [ceph-users] slow_used_bytes - SlowDB being used despite lots of space free in BlockDB on SSD?

2018-10-20 Thread Nick Fisk
> >> On 10/18/2018 7:49 PM, Nick Fisk wrote:
> >>> Hi,
> >>>
> >>> Ceph Version = 12.2.8
> >>> 8TB spinner with 20G SSD partition
> >>>
> >>> Perf dump shows the following:
> >>>
> >>> "bluefs": {
> >>>     "gift_bytes": 0,
> >>>     "reclaim_bytes": 0,
> >>>     "db_total_bytes":
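
Those bluefs counters can be pulled per OSD from the admin socket, e.g. (osd.12 is a placeholder id, the jq filter is optional):

  ceph daemon osd.12 perf dump | jq .bluefs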

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Waterbly, Dan
Hi Jakub, No, my setup seems to be the same as yours. Our system is mainly for archiving loads of data. This data has to be stored forever and remain readable, although reads will be seldom considering the number of objects we will store vs. the number of objects that will ever be requested. It just really seems o

[ceph-users] Verifying the location of the wal

2018-10-20 Thread Robert Stanford
An email from this list stated that the wal would be created in the same place as the db if the db were specified on the ceph-volume lvm create command line. I followed those instructions and, like the other person writing to this list today, I was surpris

Re: [ceph-users] Verifying the location of the wal

2018-10-20 Thread Serkan Çoban
ceph-bluestore-tool can show you the disk labels. ceph-bluestore-tool show-label --dev /dev/sda1 On Sun, Oct 21, 2018 at 1:29 AM Robert Stanford wrote: > > > An email from this list stated that the wal would be created in the same > place as the db, if the db were specified when running ceph-vol
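
Besides show-label, the symlinks in the OSD's data directory also reveal the layout; a block.wal link only exists when a separate wal device was configured (osd.3 below is a placeholder id):

  ls -l /var/lib/ceph/osd/ceph-3/ | grep block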

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Serkan Çoban
you have 24M objects, not 2.4M. Each object will eat 64KB of storage, so 24M objects use 1.5TB of storage. Add 3x replication to that and it is 4.5TB. On Sat, Oct 20, 2018 at 11:47 PM Waterbly, Dan wrote: > > Hi Jakub, > > No, my setup seems to be the same as yours. Our system is mainly for > archivin
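
The arithmetic, spelled out (assuming the 64 KiB bluestore min_alloc_size default for HDDs):

  24,000,000 objects * 64 KiB  ≈ 1.43 TiB raw before replication
  1.43 TiB * 3 replicas        ≈ 4.3 TiB raw used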