Hi Ceph,
TL;DR: If you have one day a week to work on the next Ceph stable releases [1],
your help would be most welcome.
The Ceph "Long Term Stable" (LTS) releases - currently firefly[3] and hammer[4]
- are used by individuals, non-profits, government agencies and companies for
their production …
Hi,
On 29/09/2015 19:06, Samuel Just wrote:
> It's an EIO. The osd got an EIO from the underlying fs. That's what
> causes those asserts. You probably want to redirect to the relevant
> fs mailing list.
Thanks.
I haven't had any answer on this from the BTRFS developers yet. The problem
seems hard …
Hello,
I've looked over the CRUSH documentation but I am a little confused.
Perhaps someone here can help me out!
I have three chassis with 6 SSD OSDs each that I use for writeback cache.
I have removed one OSD from each server, and I want to make a new
replicated ruleset that uses just these three …
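The usual approach for a setup like this is to give the cache OSDs their own CRUSH hierarchy and point a dedicated rule at it. A minimal sketch of such a rule in decompiled crushmap syntax, assuming the three freed OSDs have already been moved under a root bucket named `ssd-cache` (the bucket name and ruleset id here are made up, not taken from the poster's cluster):

```
rule ssd-cache {
        ruleset 1
        type replicated
        min_size 1
        max_size 3
        # select from the dedicated root holding the three cache OSDs
        step take ssd-cache
        # spread replicas across hosts, one leaf (OSD) per host
        step chooseleaf firstn 0 type host
        step emit
}
```

The rule can then be attached to the cache pool with `ceph osd pool set <pool> crush_ruleset 1`.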
What are the best options to setup the Ceph radosgw so it supports
separate/independent "tenants"? What I'm after:
1. Ensure isolation between tenants, i.e. no overlap/conflict in the bucket
namespace; something that separate radosgw "users" alone don't achieve
2. Ability to backup/restore tenants' pools in …
On Sat, Oct 03, 2015 at 11:07:22AM +0200, Loic Dachary wrote:
> Hi Ceph,
>
> TL;DR: If you have one day a week to work on the next Ceph stable releases
> [1] your help would be most welcome.
I'd like to throw my name in.
As of August, I work on Ceph development for Dreamhost. Most of my work
focuses …
We are still struggling with this and have tried a lot of different
things. Unfortunately, Inktank (now Red Hat) no longer provides
consulting services for non-Red Hat systems. If there are some
certified Ceph consultants in the US that we can do bot …
Hi Robin,
On 03/10/2015 21:38, Robin H. Johnson wrote:
> On Sat, Oct 03, 2015 at 11:07:22AM +0200, Loic Dachary wrote:
>> Hi Ceph,
>>
>> TL;DR: If you have one day a week to work on the next Ceph stable releases
>> [1] your help would be most welcome.
> I'd like to throw my name in.
>
> As of August …
Hi,
I don't know what brand those 4TB spindles are, but I know that mine are
very bad at writing at the same time as reading, especially small reads and
writes.
This has an absurdly bad effect when doing maintenance on Ceph. That being
said, we see a lot of difference between dumpling and hammer in per …