[ceph-users] Lifecycle policy completed but not done

2019-05-30 Thread EDH - Manuel Rios Fernandez
Hi Cephs! Yesterday we set up a Lifecycle policy to remove all incomplete partial uploads in the buckets, because these cause a mismatch between the used space shown in tools and the bucket stats from Ceph. We set this policy with (s3cmd setlifecycle rule.xml s3://GIB --no-ssl) http://s3.amazonaws.com/do
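
The actual rule.xml is not shown in the truncated message; a minimal sketch of a rule that aborts incomplete multipart uploads after one day (the rule ID and the one-day threshold are assumptions) could look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Rule>
        <ID>abort-incomplete-multipart</ID>
        <Prefix></Prefix>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
          <DaysAfterInitiation>1</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
      </Rule>
    </LifecycleConfiguration>

It can then be applied and verified with s3cmd setlifecycle / s3cmd getlifecycle against the bucket.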

Re: [ceph-users] Global Data Deduplication

2019-05-30 Thread Peter Wienemann
Hi Felix, there is a seven-year-old open issue asking for this feature [0]. An alternative option would be to use Benji [1]. Peter [0] https://tracker.ceph.com/issues/1576 [1] https://benji-backup.me On 29.05.19 10:25, Felix Hüttner wrote: > Hi everyone, > > We are currently using Ceph as the

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-30 Thread Paul Emmerich
The ~4% recommendation in the docs is misleading. How much you need really depends on how you use it; for CephFS that means: are you going to put lots of small files on it, or mainly big files? If you expect lots of small files: go for a DB that's > ~300 GB. For mostly large files you are probabl
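
As a hedged illustration of putting the DB on a fast device (device names are assumptions, not from the original thread):

    # create a BlueStore OSD with data on an HDD and RocksDB on an NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

If no separate --block.wal is given, the WAL lives on the DB device, so one generously sized DB partition (> ~300 GB for small-file workloads) covers both.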

Re: [ceph-users] Balancer: uneven OSDs

2019-05-30 Thread Paul Emmerich
How many pools do you have? What does your CRUSH map look like? Wild guess: it's related to your tiny tiny disks (10 GiB) and the distribution you are seeing in df is due to uneven db/metadata allocations. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://cr
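
The information being asked for can be collected with standard commands (not quoted from the thread itself), e.g.:

    ceph osd df tree                  # per-OSD utilization, weights, device classes
    ceph osd pool ls detail           # pools, size/EC profile, pg_num
    ceph osd getcrushmap -o crushmap && crushtool -d crushmap -o crushmap.txt
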

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-05-30 Thread Wido den Hollander
On 5/29/19 11:22 PM, J. Eric Ivancich wrote: > Hi Wido, > > When you run `radosgw-admin gc list`, I assume you are *not* using the > "--include-all" flag, right? If you're not using that flag, then > everything listed should be expired and be ready for clean-up. If after > running `radosgw-admi
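
For reference, the commands under discussion (standard radosgw-admin calls; only the --include-all flag is quoted from the mail):

    radosgw-admin gc list                 # lists only entries whose expiration has passed
    radosgw-admin gc list --include-all   # lists every pending GC entry, expired or not
    radosgw-admin gc process              # runs garbage collection immediately
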

[ceph-users] Object read error - enough copies available

2019-05-30 Thread Oliver Freyermuth
Dear Cephalopodians, I found the messages: 2019-05-30 16:08:51.656363 [ERR] Error -5 reading object 2:0979ae43:::10002954ea6.007c:head 2019-05-30 16:08:51.760660 [WRN] Error(s) ignored for 2:0979ae43:::10002954ea6.007c:head enough copies available just now in our logs (Mimic 13.2.5)
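
A sketch of how such an object could be checked by hand (pool and object names are placeholders, since the real name is cut off above):

    ceph osd map <data-pool> <object-name>            # which PG / OSDs hold it
    rados -p <data-pool> get <object-name> /tmp/obj   # try a direct RADOS read
    ceph pg deep-scrub <pgid>                         # re-scrub the affected PG
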

Re: [ceph-users] Object read error - enough copies available

2019-05-30 Thread Oliver Freyermuth
Am 30.05.19 um 17:00 schrieb Oliver Freyermuth: > Dear Cephalopodians, > > I found the messages: > 2019-05-30 16:08:51.656363 [ERR] Error -5 reading object > 2:0979ae43:::10002954ea6.007c:head > 2019-05-30 16:08:51.760660 [WRN] Error(s) ignored for > 2:0979ae43:::10002954ea6.007c:hea

[ceph-users] [events] Ceph Day London - October 24 - CFP now open

2019-05-30 Thread Mike Perez
Hi everyone, We will be having Ceph Day London on October 24th! https://ceph.com/cephdays/ceph-day-london-2019/ The CFP is now open for you to get your Ceph-related content in front of the Ceph community, covering all levels of expertise: https://forms.zohopublic.com/thingee/form/CephDayLondon2

Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-30 Thread Martin Verges
Hello Mike, there is no problem adding 100 OSDs at the same time if your cluster is configured correctly. Just add the OSDs and let the cluster slowly (as fast as your hardware supports without service interruption) rebalance. -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: ma

Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-30 Thread Michel Raabe
Hi Mike, On 30.05.19 02:00, Mike Cave wrote: I’d like as little friction for the cluster as possible as it is in heavy use right now. I’m running mimic (13.2.5) on CentOS. Any suggestions on best practices for this? You can limit the recovery, for example: * max backfills * recovery max act
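
For example (the values are illustrative), those throttles can be adjusted at runtime on mimic with injectargs:

    # keep recovery/backfill gentle while the new OSDs fill up
    ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # relax again once rebalancing has finished
    ceph tell 'osd.*' injectargs '--osd-max-backfills 2 --osd-recovery-max-active 3'
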

[ceph-users] auth: could not find secret_id=6403

2019-05-30 Thread 解决
Hi all, we use ceph (hammer) + openstack (mitaka) in my datacenter, and there are 300 osds and 3 mons. Because of an accident the datacenter lost power and all the servers shut down. When power returned to normal, we started the 3 mon services first; about two hours later we started the 500 osd services, and la
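
The preview is cut off, but "could not find secret_id=N" errors typically involve the rotating cephx service keys, which are sensitive to clock skew after a whole-datacenter power cycle; a first check (standard commands, not from the original mail) would be:

    ceph health detail     # look for clock skew warnings between the monitors
    ntpq -p                # (or: chronyc tracking) confirm each node's clock is back in sync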