Hi Cephs!
Yesterday we set up a Lifecycle policy to remove all incomplete multipart
uploads in the buckets, since these cause mismatches between the used space
shown in our tools and the bucket stats from Ceph.
We set up the policy with (s3cmd setlifecycle rule.xml s3://GIB --no-ssl)
http://s3.amazonaws.com/do
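A minimal rule.xml for such a policy could look roughly like the following
(the one-day threshold and the rule ID are just placeholder examples, not
necessarily the exact values we used):

  <LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Rule>
      <ID>abort-incomplete-multipart</ID>
      <Prefix></Prefix>
      <Status>Enabled</Status>
      <AbortIncompleteMultipartUpload>
        <DaysAfterInitiation>1</DaysAfterInitiation>
      </AbortIncompleteMultipartUpload>
    </Rule>
  </LifecycleConfiguration>

It is then applied exactly as above: s3cmd setlifecycle rule.xml s3://GIB --no-ssl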
Hi Felix,
there is a seven-year-old open issue asking for this feature [0].
An alternative option would be using Benji [1].
Peter
[0] https://tracker.ceph.com/issues/1576
[1] https://benji-backup.me
On 29.05.19 10:25, Felix Hüttner wrote:
> Hi everyone,
>
> We are currently using Ceph as the
The ~4% recommendation in the docs is misleading.
How much you need really depends on how you use it. For CephFS that means:
are you going to put lots of small files on it, or mainly big files?
If you expect lots of small files: go for a DB that's > ~300 GB. For mostly
large files you are probably fine with a smaller DB.
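If you want a data point from a running cluster before deciding, the BlueFS
perf counters show how much of the DB is actually in use and whether anything
has spilled over to the slow device (osd.0 below is just an example OSD, and
this assumes access to the admin socket on the OSD host):

  # per-OSD RocksDB usage and spillover, values are in bytes
  ceph daemon osd.0 perf dump bluefs | grep -E 'db_total_bytes|db_used_bytes|slow_used_bytes'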
How many pools do you have? What does your CRUSH map look like?
Wild guess: it's related to your tiny tiny disks (10 GiB) and the
distribution you are seeing in df is due to uneven db/metadata allocations.
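If you can share the output of something like the following, that would make
it easier to tell:

  ceph osd df tree          # per-OSD utilization, weights, device classes
  ceph osd pool ls detail   # pools, size/min_size, pg_num, crush_rule
  ceph osd crush rule dump  # the CRUSH rules in use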
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://cr
On 5/29/19 11:22 PM, J. Eric Ivancich wrote:
> Hi Wido,
>
> When you run `radosgw-admin gc list`, I assume you are *not* using the
> "--include-all" flag, right? If you're not using that flag, then
> everything listed should be expired and be ready for clean-up. If after
> running `radosgw-admi
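For reference, the difference between the two forms, plus triggering a pass
by hand, looks like this (purely illustrative, no bucket-specific options):

  radosgw-admin gc list                 # only entries whose expiration has passed
  radosgw-admin gc list --include-all   # also entries still in their grace period
  radosgw-admin gc process              # run a garbage collection pass right away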
Dear Cephalopodians,
I found the messages:
2019-05-30 16:08:51.656363 [ERR] Error -5 reading object
2:0979ae43:::10002954ea6.007c:head
2019-05-30 16:08:51.760660 [WRN] Error(s) ignored for
2:0979ae43:::10002954ea6.007c:head enough copies available
just now in our logs (Mimic 13.2.5)
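(Error -5 is EIO.) In case it helps anyone seeing the same thing, the object
can be mapped back to its PG and OSDs with something like the following; the
pool name is only a guess for our CephFS data pool:

  ceph osd map cephfs_data 10002954ea6.007c   # which PG and OSDs hold the object
  ceph pg <pgid> query                        # then inspect that PG's state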
On 30.05.19 at 17:00, Oliver Freyermuth wrote:
> Dear Cephalopodians,
>
> I found the messages:
> 2019-05-30 16:08:51.656363 [ERR] Error -5 reading object
> 2:0979ae43:::10002954ea6.007c:head
> 2019-05-30 16:08:51.760660 [WRN] Error(s) ignored for
> 2:0979ae43:::10002954ea6.007c:hea
Hi everyone,
We will be having Ceph Day London on October 24th!
https://ceph.com/cephdays/ceph-day-london-2019/
The CFP is now open for you to get your Ceph-related content in front
of the Ceph community, across all levels of expertise:
https://forms.zohopublic.com/thingee/form/CephDayLondon2
Hello Mike,
there is no problem adding 100 OSDs at the same time if your cluster is
configured correctly.
Just add the OSDs and let the cluster rebalance slowly (as fast as your
hardware supports without service interruption).
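If you want to be extra careful, a common pattern is to pause data movement
while the new OSDs come up and release it again in one controlled step,
roughly like this:

  ceph osd set norebalance     # pause rebalancing while adding the OSDs
  ceph osd set norecover
  # ... create and start the new OSDs ...
  ceph osd unset norecover
  ceph osd unset norebalance   # let backfill proceed at its configured pace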
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: ma
Hi Mike,
On 30.05.19 02:00, Mike Cave wrote:
I’d like as little friction for the cluster as possible as it is in
heavy use right now.
I’m running mimic (13.2.5) on CentOS.
Any suggestions on best practices for this?
You can limit the recovery, for example with (a short sketch follows below):
* max backfills
* recovery max active
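A rough way to apply that at runtime (the values are only conservative
examples, tune them to your hardware):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  # or persistently via the config database (available since Mimic):
  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1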
Hi all,
we use ceph (hammer) + openstack (mitaka) in my datacenter and there are 300
OSDs and 3 MONs. Because of an accident the datacenter was powered off and all
the servers were shut down. When power returned to normal, we started the 3 MON
services first; about two hours later we started the 500 OSD services, and la