On Tue, Apr 30, 2019 at 10:06 AM Igor Podlesny wrote:
>
> On Tue, 30 Apr 2019 at 04:13, Adrien Gillard wrote:
> > I would add that the use of cache tiering, though still possible, is not
> > recommended
>
> It lacks references. CEPH docs I gave links to didn't say so.
The cache tiering document
On Tue, 30 Apr 2019 at 04:13, Adrien Gillard wrote:
> I would add that the use of cache tiering, though still possible, is not
> recommended
It lacks references. CEPH docs I gave links to didn't say so.
> comes with its own challenges.
It's challenging for some to not over-quote when replying,
I would add that the use of cache tiering, though still possible, is not
recommended and comes with its own challenges.
On Mon, Apr 29, 2019 at 11:49 AM Igor Podlesny wrote:
> On Mon, 29 Apr 2019 at 16:19, Rainer Krienke wrote:
> [...]
> > - Do I still (nautilus) need two pools for EC based RBD images, one EC
> > data pool and a second replicated pool for metadata?
On Mon, 29 Apr 2019 at 16:19, Rainer Krienke wrote:
[...]
> - Do I still (nautilus) need two pools for EC based RBD images, one EC
> data pool and a second replicated pool for metadata?
The answer is given at
http://docs.ceph.com/docs/nautilus/rados/operations/erasure-code/#erasure-coding-with-
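In practice the setup described there comes down to an EC data pool with overwrites enabled plus a small replicated pool that holds the RBD image metadata, roughly like this (the pool, profile and image names below are only placeholders, and the PG counts would need real sizing for the cluster at hand):

  # EC profile and data pool; overwrites must be enabled for RBD on EC
  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecdata 128 128 erasure ec42
  ceph osd pool set ecdata allow_ec_overwrites true

  # small replicated pool for the image headers/metadata (omap lives here)
  ceph osd pool create rbdmeta 64 64 replicated
  rbd pool init rbdmeta

  # the image sits in the replicated pool, its data objects go to the EC pool
  rbd create rbdmeta/testimg --size 100G --data-pool ecdata

The replicated pool stays tiny; nearly all capacity is consumed by the data pool, so that is the one worth sizing and benchmarking.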
On Mon, 29 Apr 2019 at 16:37, Burkhard Linke wrote:
> On 4/29/19 11:19 AM, Rainer Krienke wrote:
> [...]
> > - I also thought about the different k+m settings for an EC pool, for
> > example k=4, m=2 compared to k=8 and m=2. Both settings allow for two
> > OSDs to fail without any data loss, but I as
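On the k=4,m=2 versus k=8,m=2 part of that question: both profiles tolerate two lost OSDs, but the space overhead is (k+m)/k, so 1.5x raw per usable byte for 4+2 against 1.25x for 8+2, and every object is spread over k+m distinct failure domains. With 9 hosts and crush-failure-domain=host an 8+2 pool needs 10 hosts, so it cannot place all ten chunks on distinct hosts. A rough comparison sketch (profile names are placeholders):

  # 4+2: usable capacity ~ 4/6 of raw, needs at least 6 hosts
  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host

  # 8+2: usable capacity ~ 8/10 of raw, needs at least 10 hosts,
  # which a 9-host cluster cannot satisfy with host as the failure domain
  ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=host

  ceph osd erasure-code-profile get ec42
  ceph osd erasure-code-profile get ec82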
On 4/29/19 11:19 AM, Rainer Krienke wrote:
I am planning to set up a ceph cluster and already implemented a test
cluster where we are going to use RBD images for data storage (9 hosts,
each host has 16 OSDs, each OSD 4TB).
We would like to use erasure coded (EC) pools here, and so all OSDs are
bluestore. Since several projects are going to