[ceph-users] Re: Using RBD to pack billions of small files

2021-02-03 Thread Burkhard Linke

Hi,

On 2/2/21 9:32 PM, Loïc Dachary wrote:

Hi Greg,

On 02/02/2021 20:34, Gregory Farnum wrote:



*snipsnap*


Right. Dan's comment gave me pause: it does not seem to be
a good idea to assume an RBD image of infinite size. A friend who read this
thread suggested a sensible approach (which is also in line with the
Haystack paper): instead of making a single gigantic image, make
multiple 1TB images. The index entries get bigger:

SHA256 sum of the artifact => name/uuid of the 1TB image,offset,size

instead of

SHA256 sum of the artifact  => offset,size



Just my 2 cents:

You could use the first byte of the SHA sum to identify the image, e.g. 
using a fixed number of 256 images. Or some flexible approach similar to 
the way filestore used to store rados objects.
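A minimal sketch of that first-byte sharding (the "pack-XX" image naming scheme below is an assumption for illustration, not something from this thread):

```python
import hashlib

NUM_IMAGES = 256  # fixed set: one ~1TB RBD image per possible first byte

def image_for(artifact: bytes) -> str:
    """Pick the packing image from the first byte of the artifact's SHA-256."""
    first_byte = hashlib.sha256(artifact).digest()[0]
    return f"pack-{first_byte:02x}"  # hypothetical image naming scheme

# The index key (the SHA-256) already starts with that byte, so each
# index entry only needs to store (offset, size); the image is implied.
print(image_for(b"hello"))  # pack-2c
```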



Regards,

Burkhard

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: XFS block size on RBD / EC vs space amplification

2021-02-03 Thread Konstantin Shalygin
Actually, with Igor's latest patches the default min_alloc_size for HDD is 4K



k

Sent from my iPhone

> On 2 Feb 2021, at 13:12, Gilles Mocellin  
> wrote:
> 
> Hello,
> 
> As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only using HDDs),
> in certain conditions, especially with erasure coding,
> there's a waste of space when writing objects smaller than 64k x k (EC:k+m).
> 
> Every object is divided in k elements, written on different OSD.
> 
> My main use case is big (40TB) RBD images mounted as XFS filesystems on Linux 
> servers,
> exposed to our backup software.
> So, it's mainly big files.
> 
> My thought, but I'd like some other point of view, is that I could deal with 
> the amplification by using bigger block sizes on my XFS filesystems.
> Instead of reducing bluestore_min_alloc_size_hdd on all OSDs.
> 
> What do you think ?


[ceph-users] Re: Using RBD to pack billions of small files

2021-02-03 Thread Loïc Dachary

>
> Just my 2 cents:
>
> You could use the first byte of the SHA sum to identify the image, e.g. using 
> a fixed number of 256 images. Or some flexible approach similar to the way 
> filestore used to store rados objects. 

A friend suggested the same to save space. Good idea.






[ceph-users] Re: Using RBD to pack billions of small files

2021-02-03 Thread Burkhard Linke

Hi,

On 2/3/21 9:41 AM, Loïc Dachary wrote:

Just my 2 cents:

You could use the first byte of the SHA sum to identify the image, e.g. using a 
fixed number of 256 images. Or some flexible approach similar to the way 
filestore used to store rados objects.

A friend suggested the same to save space. Good idea.



If you want to further reduce the index size, you can just store the 
offset, and let the first 4 or 8 bytes at that offset encode the size of the 
following artifact. That's similar to the way Pascal used to store 
strings in the good ol' times. You might also want to think about using 
a complete header which also includes the artifact's name etc. This will 
allow you to rebuild the index if it becomes corrupted. The storage 
overhead should be insignificant.


Your index will become a simple mapping of SHA sum -> offset, and you 
might also be able to use optimized implementations.
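A rough sketch of such self-delimiting records (the exact layout below, a 32-byte SHA-256 plus an 8-byte size, is an illustration, not a spec from this thread):

```python
import hashlib
import struct

# Per-record header: 32-byte SHA-256 of the payload | 8-byte big-endian size.
# The size field makes records self-delimiting (Pascal-string style); the
# embedded hash lets the SHA -> offset index be rebuilt by a sequential scan.
HEADER = struct.Struct(">32sQ")

def pack_record(payload: bytes) -> bytes:
    return HEADER.pack(hashlib.sha256(payload).digest(), len(payload)) + payload

def rebuild_index(image: bytes) -> dict:
    """Walk the records in an image and recover the SHA -> offset mapping."""
    index, offset = {}, 0
    while offset < len(image):
        sha, size = HEADER.unpack_from(image, offset)
        index[sha] = offset
        offset += HEADER.size + size
    return index

image = pack_record(b"first artifact") + pack_record(b"second artifact")
index = rebuild_index(image)
assert index[hashlib.sha256(b"second artifact").digest()] == HEADER.size + 14
```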



Regards,

Burkhard



[ceph-users] Worst thing that can happen if I have size= 2

2021-02-03 Thread Mario Giammarco
Hello,
Imagine this situation:
- 3 servers with ceph
- a pool with size 2 min 1

I know perfectly well that size 3 and min 2 is better.
I would like to know what is the worst thing that can happen:

- a disk breaks and another disk breaks before Ceph has reconstructed the
second replica; OK, I lose data

- if the network goes down and the monitors lose quorum, does Ceph still write
to disks?

What else?

Thanks,
Mario


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Magnus HAGDORN
If an OSD becomes unavailable (broken disk, rebooting server) then all
I/O to the PGs stored on that OSD will block until a replication level of
2 is reached again. So, for a highly available cluster you need a
replication level of 3.
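As a toy model (deliberately simplified, not Ceph code), the interaction of size/min_size with failures described above looks like:

```python
def pg_state(size: int, min_size: int, up_replicas: int) -> str:
    """Simplified view of whether a PG keeps serving I/O."""
    if up_replicas < min_size:
        return "blocked"        # I/O stops until enough replicas are back
    if up_replicas < size:
        return "degraded"       # still writable, redundancy reduced
    return "active+clean"

assert pg_state(3, 2, 2) == "degraded"  # size=3/min=2: one failure, I/O continues
assert pg_state(2, 2, 1) == "blocked"   # size=2/min=2: one failure blocks writes
assert pg_state(2, 1, 1) == "degraded"  # size=2/min=1: writes land on a single copy
```

The last case is the risky one: the PG stays writable while there is no second copy at all.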


On Wed, 2021-02-03 at 10:24 +0100, Mario Giammarco wrote:
> Hello,
> Imagine this situation:
> - 3 servers with ceph
> - a pool with size 2 min 1
>
> I know perfectly the size 3 and min 2 is better.
> I would like to know what is the worst thing that can happen:
>
> - a disk breaks and another disk breaks before ceph has reconstructed
> second replica, ok I lose data
>
> - if network goes down and so monitors lose quorum do ceph still
> write on
> disks?
>
> What else?
>
>
The University of Edinburgh is a charitable body, registered in Scotland, with 
registration number SC005336.


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Max Krasilnikov
Good day!

 Wed, Feb 03, 2021 at 09:29:52AM +, Magnus.Hagdorn wrote: 

> if a OSD becomes unavailble (broken disk, rebooting server) then all
> I/O to the PGs stored on that OSD will block until replication level of
> 2 is reached again. So, for a highly available cluster you need a
> replication level of 3

AFAIK, with min_size 1 it is possible to write even with only one active OSD
serving the PG.

> On Wed, 2021-02-03 at 10:24 +0100, Mario Giammarco wrote:
> > Hello,
> > Imagine this situation:
> > - 3 servers with ceph
> > - a pool with size 2 min 1
> >
> > I know perfectly the size 3 and min 2 is better.
> > I would like to know what is the worst thing that can happen:
> >
> > - a disk breaks and another disk breaks before ceph has reconstructed
> > second replica, ok I lose data
> >
> > - if network goes down and so monitors lose quorum do ceph still
> > write on
> > disks?
> >
> > What else?


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Magnus HAGDORN
On Wed, 2021-02-03 at 09:39 +, Max Krasilnikov wrote:
> > if a OSD becomes unavailble (broken disk, rebooting server) then
> > all
> > I/O to the PGs stored on that OSD will block until replication
> > level of
> > 2 is reached again. So, for a highly available cluster you need a
> > replication level of 3
>
>
> AFAIK, with min_size 1 it is possible to write even to only active
> OSD serving
>
Yes, that's correct, but then you seriously risk trashing your data.



[ceph-users] Re: XFS block size on RBD / EC vs space amplification

2021-02-03 Thread Gilles Mocellin

Hello, thank you for your response.

Erasure Coding gets better and we really cannot afford the storage 
overhead of x3 replication.
Anyway, as I understand the problem, it is also present with 
replication, just less amplified (blocks are not divided between OSDs, 
just replicated fully).



On 2021-02-02 16:50, Steven Pine wrote:

You are unlikely to avoid the space amplification bug by using larger
block sizes. I honestly do not recommend using an EC pool, it is
generally less performant and EC pools are not as well supported by
the ceph development community.

On Tue, Feb 2, 2021 at 5:11 AM Gilles Mocellin
 wrote:


Hello,

As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only
using
HDDs),
in certain conditions, especially with erasure coding,
there's a waste of space when writing objects smaller than 64k x k
(EC:k+m).

Every object is divided in k elements, written on different OSD.

My main use case is big (40TB) RBD images mounted as XFS filesystems
on
Linux servers,
exposed to our backup software.
So, it's mainly big files.

My thought, but I'd like some other point of view, is that I could
deal
with the amplification by using bigger block sizes on my XFS
filesystems.
Instead of reducing bluestore_min_alloc_size_hdd on all OSDs.

What do you think ?


--

Steven Pine

E  steven.p...@webair.com  |  P  516.938.4100 x

Webair | 501 Franklin Avenue Suite 200, Garden City NY, 11530

webair.com [1]





Links:
--
[1] http://webair.com



[ceph-users] Re: XFS block size on RBD / EC vs space amplification

2021-02-03 Thread Gilles Mocellin

Hello, thanks,

I've seen that.
But is it the only solution? Do I have alternatives for my use case, such as 
forcing the use of big blocks on the client side?


I said in XFS (4k block size), but perhaps straight in krbd, as it 
seems the block device is shown as a drive with 512B sectors.


But I don't really know how to interpret this:


sudo lsblk -o PHY-SEC,MIN-IO,OPT-IO /dev/rbd0

PHY-SEC MIN-IO OPT-IO
512  65536  65536
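For what it's worth, the amplification under discussion can be estimated with a small model (simplified: it rounds each EC chunk up to min_alloc_size and ignores striping details, so treat the numbers as rough):

```python
def allocated_bytes(object_size: int, k: int, m: int,
                    min_alloc: int = 64 * 1024) -> int:
    """Rough space allocated for one EC(k+m) object under BlueStore
    min_alloc_size rounding (simplified model)."""
    chunk = -(-object_size // k)                     # ceil(object_size / k)
    per_chunk = -(-chunk // min_alloc) * min_alloc   # round chunk up to min_alloc
    return per_chunk * (k + m)

# A 4 KiB object on EC 4+2 with 64K min_alloc allocates 6 x 64 KiB:
print(allocated_bytes(4096, 4, 2))        # 393216 (vs ~6 KiB of logical data)
print(allocated_bytes(4096, 4, 2, 4096))  # 24576 with a 4K min_alloc
```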

On 2021-02-03 09:16, Konstantin Shalygin wrote:

Actually, with Igor's latest patches the default min_alloc_size for HDD is 4K



k

Sent from my iPhone

On 2 Feb 2021, at 13:12, Gilles Mocellin 
 wrote:


Hello,

As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only using 
HDDs),

in certain conditions, especially with erasure coding,
there's a waste of space when writing objects smaller than 64k x k 
(EC:k+m).


Every object is divided in k elements, written on different OSD.

My main use case is big (40TB) RBD images mounted as XFS filesystems 
on Linux servers,

exposed to our backup software.
So, it's mainly big files.

My thought, but I'd like some other point of view, is that I could deal 
with the amplification by using bigger block sizes on my XFS 
filesystems.

Instead of reducing bluestore_min_alloc_size_hdd on all OSDs.

What do you think ?


[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-03 Thread Eugen Block
How do you manage the db_sizes of your SSDs? Is that managed
automatically by ceph-volume? You could try adding another config option and
see what it does; maybe try adding block_db_size?



Quoting Tony Liu:


All mon, mgr, crash and osd are upgraded to 15.2.8. It actually
fixed another issue (no device listed after adding host).
But this issue remains.
```
# cat osd-spec.yaml
service_type: osd
service_id: osd-spec
placement:
  host_pattern: ceph-osd-[1-3]
data_devices:
  rotational: 1
db_devices:
  rotational: 0

# ceph orch apply osd -i osd-spec.yaml
Scheduled osd.osd-spec update...

# ceph orch ls --service_name osd.osd-spec --export
service_type: osd
service_id: osd-spec
service_name: osd.osd-spec
placement:
  host_pattern: ceph-osd-[1-3]
spec:
  data_devices:
rotational: 1
  filter_logic: AND
  objectstore: bluestore
```
db_devices still doesn't show up.
Keep scratching my head...


Thanks!
Tony

-Original Message-
From: Eugen Block 
Sent: Tuesday, February 2, 2021 2:20 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec

Hi,

I would recommend to update (again), here's my output from a 15.2.8 test
cluster:


host1:~ # ceph orch ls --service_name osd.default --export
service_type: osd
service_id: default
service_name: osd.default
placement:
   hosts:
   - host4
   - host3
   - host1
   - host2
spec:
   block_db_size: 4G
   data_devices:
 rotational: 1
 size: '20G:'
   db_devices:
 size: '10G:'
   filter_logic: AND
   objectstore: bluestore


Regards,
Eugen


Quoting Tony Liu:

> Hi,
>
> When build cluster Octopus 15.2.5 initially, here is the OSD service
> spec file applied.
> ```
> service_type: osd
> service_id: osd-spec
> placement:
>   host_pattern: ceph-osd-[1-3]
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
> ```
> After applying it, all HDDs were added and DB of each hdd is created
> on SSD.
>
> Here is the export of OSD service spec.
> ```
> # ceph orch ls --service_name osd.osd-spec --export
> service_type: osd
> service_id: osd-spec
> service_name: osd.osd-spec
> placement:
>   host_pattern: ceph-osd-[1-3]
> spec:
>   data_devices:
> rotational: 1
>   filter_logic: AND
>   objectstore: bluestore
> ```
> Why db_devices doesn't show up there?
>
> When I replace a disk recently, when the new disk was installed and
> zapped, OSD was automatically re-created, but DB was created on HDD,
> not SSD. I assume this is because of that missing db_devices?
>
> I tried to update service spec, the same result, db_devices doesn't
> show up when export it.
>
> Is this some known issue or something I am missing?
>
>
> Thanks!
> Tony


[ceph-users] Re: replace OSD without PG remapping

2021-02-03 Thread Frank Schilder
You asked about exactly this before: 
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/IGYCYJTAMBDDOD2AQUCJQ6VSUWIO4ELW/#ZJU3555Z5WQTJDPCTMPZ6LOFTIUKKQUS

It is not possible to avoid remapping, because if the PGs are not remapped you 
would have degraded redundancy. In any storage system, this degradation is 
exactly what needs to be avoided/fixed at all cost.

I don't see an issue with health status messages issued by self-healing. That's 
the whole point of ceph, just let it do its job and don't get freaked out by 
health_warn.

You can, however, try to keep the rebalancing window short, and this is 
exactly what was discussed in the thread above. As pointed out there 
as well, even this is close to pointless. Just deploy a few more disks than you 
need, let the broken ones go, and be happy that Ceph is taking care of the rest 
and even tells you about its progress.

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Tony Liu 
Sent: 03 February 2021 03:10:26
To: ceph-users@ceph.io
Subject: [ceph-users] replace OSD without PG remapping

Hi,

There are multiple different procedures to replace an OSD.
What I want is to replace an OSD without PG remapping.

#1
I tried "orch osd rm --replace", which sets OSD reweight 0 and
status "destroyed". "orch osd rm status" shows "draining".
All PGs on this OSD are remapped. Checked "pg dump", can't find
this OSD any more.

1) Given [1], setting weight 0 seems better than setting reweight 0.
Is that right? If yes, should we change the behavior of "orch osd
rm --replace"?

2) "ceph status" doesn't show anything about OSD draining.
Is there any way to see the progress of draining?
Is there actually copy happening? The PG on this OSD is remapped
and copied to another OSD, right?

3) When OSD is replaced, there will be remapping and backfilling.

4) There is remapping in #2 and remapping again in #3.
I want to avoid it.

#2
Is there any procedure that doesn't mark the OSD out (set reweight 0)
and doesn't set weight 0, which should keep the PG map unchanged, but just
warn about reduced redundancy (one out of 3 OSDs of a PG is down), and,
when the OSD is replaced, do no remapping, just data backfilling?

[1] 
https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/


Thanks!
Tony


[ceph-users] 15.2.9 ETA?

2021-02-03 Thread David Orman
Hi,

We're looking forward to a few of the major bugfixes for breaking mgr
issues with larger clusters that are merged into the Octopus branch, as
well as the updated cheroot pushed to EPEL that should make it into the
next container build. It's been quite some time since the last (15.2.8)
release, and we were curious if there was an ETA on the 15.2.9 release
being cut, or at least a place to look and determine status. We checked the
docs and didn't see a way to gauge estimated release dates, so thought we'd
ask here.

Thanks!


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Adam Boyhan
Isn't this somewhat reliant on the OSD type? 

Redhat/Micron/Samsung/Supermicro have all put out white papers backing the idea 
of 2 copies on NVMe's as safe for production. 


From: "Magnus HAGDORN"  
To: pse...@avalon.org.ua 
Cc: "ceph-users"  
Sent: Wednesday, February 3, 2021 4:43:08 AM 
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2 

On Wed, 2021-02-03 at 09:39 +, Max Krasilnikov wrote: 
> > if a OSD becomes unavailble (broken disk, rebooting server) then 
> > all 
> > I/O to the PGs stored on that OSD will block until replication 
> > level of 
> > 2 is reached again. So, for a highly available cluster you need a 
> > replication level of 3 
> 
> 
> AFAIK, with min_size 1 it is possible to write even to only active 
> OSD serving 
> 
yes, that's correct but then you seriously risk trashing your data 



[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread DHilsbos
Adam;

I'd like to see that / those white papers.

I suspect what they're advocating is multiple OSD daemon processes per NVMe 
device.  This is something which can improve performance.  Though I've never 
done it, I believe you partition the device, and then create your OSD pointing 
at a partition.

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com

-Original Message-
From: Adam Boyhan [mailto:ad...@medent.com] 
Sent: Wednesday, February 3, 2021 8:50 AM
To: Magnus HAGDORN
Cc: ceph-users
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2

Isn't this somewhat reliant on the OSD type? 

Redhat/Micron/Samsung/Supermicro have all put out white papers backing the idea 
of 2 copies on NVMe's as safe for production. 


From: "Magnus HAGDORN"  
To: pse...@avalon.org.ua 
Cc: "ceph-users"  
Sent: Wednesday, February 3, 2021 4:43:08 AM 
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2 

On Wed, 2021-02-03 at 09:39 +, Max Krasilnikov wrote: 
> > if a OSD becomes unavailble (broken disk, rebooting server) then 
> > all 
> > I/O to the PGs stored on that OSD will block until replication 
> > level of 
> > 2 is reached again. So, for a highly available cluster you need a 
> > replication level of 3 
> 
> 
> AFAIK, with min_size 1 it is possible to write even to only active 
> OSD serving 
> 
yes, that's correct but then you seriously risk trashing your data 



[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Martin Verges
Hello Adam,

2 copies are safe; min_size 1 is not.
As long as there is no write while one copy is missing, you can
recover from that or from the unavailable copy when it comes online
again.
If you have min_size 1 and therefore write data to a single copy,
no safety net will protect you.

In general we consider even a 2-copy setup not secure enough.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

Am Mi., 3. Feb. 2021 um 16:50 Uhr schrieb Adam Boyhan :
>
> Isn't this somewhat reliant on the OSD type?
>
> Redhat/Micron/Samsung/Supermicro have all put out white papers backing the 
> idea of 2 copies on NVMe's as safe for production.
>
>
> From: "Magnus HAGDORN" 
> To: pse...@avalon.org.ua
> Cc: "ceph-users" 
> Sent: Wednesday, February 3, 2021 4:43:08 AM
> Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2
>
> On Wed, 2021-02-03 at 09:39 +, Max Krasilnikov wrote:
> > > if a OSD becomes unavailble (broken disk, rebooting server) then
> > > all
> > > I/O to the PGs stored on that OSD will block until replication
> > > level of
> > > 2 is reached again. So, for a highly available cluster you need a
> > > replication level of 3
> >
> >
> > AFAIK, with min_size 1 it is possible to write even to only active
> > OSD serving
> >
> yes, that's correct but then you seriously risk trashing your data
>


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Adam Boyhan
No problem. They have been around for quite some time now. Even the 
Ceph engineers over at Supermicro, whom we spoke to while we spec'd our 
hardware, agreed as well. 

https://www.supermicro.com/white_paper/white_paper_Ceph-Ultra.pdf

https://www.redhat.com/cms/managed-files/st-micron-ceph-performance-reference-architecture-f17294-201904-en.pdf

https://www.samsung.com/semiconductor/global.semi/file/resource/2020/05/redhat-ceph-whitepaper-0521.pdf






From: dhils...@performair.com 
To: "adamb"  
Cc: "ceph-users"  
Sent: Wednesday, February 3, 2021 10:57:38 AM 
Subject: RE: Worst thing that can happen if I have size= 2 

Adam; 

I'd like to see that / those white papers. 

I suspect what they're advocating is multiple OSD daemon processes per NVMe 
device. This is something which can improve performance. Though I've never done 
it, I believe you partition the device, and then create your OSD pointing at a 
partition. 

Thank you, 

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc. 
dhils...@performair.com 
www.PerformAir.com 

-Original Message- 
From: Adam Boyhan [mailto:ad...@medent.com] 
Sent: Wednesday, February 3, 2021 8:50 AM 
To: Magnus HAGDORN 
Cc: ceph-users 
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2 

Isn't this somewhat reliant on the OSD type? 

Redhat/Micron/Samsung/Supermicro have all put out white papers backing the idea 
of 2 copies on NVMe's as safe for production. 


From: "Magnus HAGDORN"  
To: pse...@avalon.org.ua 
Cc: "ceph-users"  
Sent: Wednesday, February 3, 2021 4:43:08 AM 
Subject: [ceph-users] Re: Worst thing that can happen if I have size= 2 

On Wed, 2021-02-03 at 09:39 +, Max Krasilnikov wrote: 
> > if a OSD becomes unavailble (broken disk, rebooting server) then 
> > all 
> > I/O to the PGs stored on that OSD will block until replication 
> > level of 
> > 2 is reached again. So, for a highly available cluster you need a 
> > replication level of 3 
> 
> 
> AFAIK, with min_size 1 it is possible to write even to only active 
> OSD serving 
> 
yes, that's correct but then you seriously risk trashing your data 



[ceph-users] reinstalling node with orchestrator/cephadm

2021-02-03 Thread Kenneth Waegeman

Hi all,

I'm running a 15.2.8 cluster using ceph orch with all daemons adopted to 
cephadm.


I tried reinstalling an OSD node. Is there a way to make ceph orch/cephadm 
activate the devices on this node again, ideally automatically?


I tried running `cephadm ceph-volume -- lvm activate --all` but this has 
an error related to dmcrypt:



[root@osd2803 ~]# cephadm ceph-volume -- lvm activate --all
Using recent ceph image docker.io/ceph/ceph:v15
/usr/bin/podman:stderr --> Activating OSD ID 0 FSID 
697698fd-3fa0-480f-807b-68492bd292bf
/usr/bin/podman:stderr Running command: /usr/bin/mount -t tmpfs tmpfs 
/var/lib/ceph/osd/ceph-0
/usr/bin/podman:stderr Running command: /usr/bin/ceph-authtool 
/var/lib/ceph/osd/ceph-0/lockbox.keyring --create-keyring --name 
client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf --add-key 
AQAy7Bdg0jQsBhAAj0gcteTEbcpwNNvMGZqTTg==
/usr/bin/podman:stderr  stdout: creating 
/var/lib/ceph/osd/ceph-0/lockbox.keyring
/usr/bin/podman:stderr added entity 
client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf 
auth(key=AQAy7Bdg0jQsBhAAj0gcteTEbcpwNNvMGZqTTg==)
/usr/bin/podman:stderr Running command: /usr/bin/chown -R ceph:ceph 
/var/lib/ceph/osd/ceph-0/lockbox.keyring
/usr/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph 
--name client.osd-lockbox.697698fd-3fa0-480f-807b-68492bd292bf 
--keyring /var/lib/ceph/osd/ceph-0/lockbox.keyring config-key get 
dm-crypt/osd/697698fd-3fa0-480f-807b-68492bd292bf/luks
/usr/bin/podman:stderr  stderr: Error initializing cluster client: 
ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
/usr/bin/podman:stderr -->  RuntimeError: Unable to retrieve dmcrypt 
secret

Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 6111, in 
    r = args.func()
  File "/usr/sbin/cephadm", line 1322, in _infer_fsid
    return func()
  File "/usr/sbin/cephadm", line 1381, in _infer_image
    return func()
  File "/usr/sbin/cephadm", line 3611, in command_ceph_volume
    out, err, code = call_throws(c.run_cmd(), verbose=True)
  File "/usr/sbin/cephadm", line 1060, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host 
--net=host --entrypoint /usr/sbin/ceph-volume --privileged 
--group-add=disk -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e 
NODE_NAME=osd2803.banette.os -v /dev:/dev -v /run/udev:/run/udev -v 
/sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm 
docker.io/ceph/ceph:v15 lvm activate --all


The OSDs are encrypted indeed. `cephadm ceph-volume lvm list` and 
`cephadm shell ceph -s` run just fine, and if I run ceph-volume 
directly, the same command works, but then of course the daemons are 
started in the legacy way again, not in containers.


Is there another way through 'ceph orch' to achieve this? Or, if 
`cephadm ceph-volume -- lvm activate --all` is the way to go here, 
am I probably seeing a bug?


Thanks!!

Kenneth





[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Simon Ironside

On 03/02/2021 09:24, Mario Giammarco wrote:

Hello,
Imagine this situation:
- 3 servers with ceph
- a pool with size 2 min 1

I know perfectly the size 3 and min 2 is better.
I would like to know what is the worst thing that can happen:


Hi Mario,

This thread is worth a read, it's an oldie but a goodie:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014846.html

Especially this post, which helped me understand the importance of 
min_size=2


http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014892.html

Cheers,
Simon


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Mario Giammarco
Thanks Simon, and thanks to the other people who have replied.
Sorry, let me try to explain myself better.
It is evident to me that if I have two copies of data, one breaks, and while
Ceph is creating a new copy of the data the disk with the second copy also
breaks, I lose the data.
It is obvious, and a bit paranoid, because many servers at many customers run
on RAID1, and so you are saying: yeah, you have two copies of the data but
you can break both. Consider that in Ceph recovery is automatic, while with RAID1
someone must manually go to the customer and change disks. So Ceph is
already an improvement in this case, even with size=2. With size 3 and min 2
it is a bigger improvement, I know.

What I am asking is this: what happens with min_size=1 and split brain, network
down, or similar things: does Ceph block writes because it has no quorum on
the monitors? Are there some failure scenarios that I have not considered?
Thanks again!
Mario



On Wed, 3 Feb 2021 at 17:42 Simon Ironside <
sirons...@caffetine.org> wrote:

> On 03/02/2021 09:24, Mario Giammarco wrote:
> > Hello,
> > Imagine this situation:
> > - 3 servers with ceph
> > - a pool with size 2 min 1
> >
> > I know perfectly the size 3 and min 2 is better.
> > I would like to know what is the worst thing that can happen:
>
> Hi Mario,
>
> This thread is worth a read, it's an oldie but a goodie:
>
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014846.html
>
> Especially this post, which helped me understand the importance of
> min_size=2
>
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014892.html
>
> Cheers,
> Simon
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Dan van der Ster
Ceph with min_size=1 is several times riskier than good old RAID1:

With RAID1, having disks A and B, when disk A fails you start recovery to
a new disk A'. If disk B fails during recovery, you have a disaster.

With Ceph, we have multiple servers and multiple disks: when an OSD fails
and you replace it, it starts recovering. During that recovery window, if
roughly *any* other disk in the cluster fails, you have a disaster.

That's the basic argument.

In more detail, OSDs are aware of a sort of "last written to" state of the
PGs on all their peers. If an OSD goes down briefly then restarts, it first
learns the PG states of its peers and starts recovering those missed
writes. The recovering OSD will not be able to serve any IO until it has
recovered the objects to their latest states. So... If any of those peers
have any sort of problem during the recovery process, your cluster will be
down. "Down" in this case means precisely that the PG will be marked
incomplete and IO will be blocked until all needed OSDs are up and running.
Experts here know how to revive a cluster in that state, accepting then
dealing with arbitrary data loss, but ceph won't do that "dangerous"
recovery automatically for obvious reasons.

Here's another reference (from Wido again) that I hope will scare you away
from min_size=1:
https://www.slideshare.net/mobile/ShapeBlue/wido-den-hollander-10-ways-to-break-your-ceph-cluster

Lastly, if you can't afford 3x replicas, then use 2+2 erasure coding if
possible.
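The "any other disk" argument above can be put into rough numbers. A back-of-envelope sketch, assuming independent failures and an illustrative per-disk failure probability during the recovery window (both assumptions; real numbers depend on hardware and recovery speed):

```python
from math import comb

# Back-of-envelope risk comparison: replica 2 vs replica 3.
# p_disk = probability that a given disk fails during one recovery window.
# With size=2, after one OSD fails, losing roughly ANY of the other disks
# sharing its PGs loses data; with size=3 you need two more overlapping
# failures.

def p_loss_during_recovery(p_disk, peers, copies_left):
    """Probability that at least `copies_left` of `peers` disks fail."""
    return sum(
        comb(peers, k) * p_disk**k * (1 - p_disk)**(peers - k)
        for k in range(copies_left, peers + 1)
    )

p = 0.001      # assumed per-disk failure probability in the window
peers = 99     # other disks holding replicas of the failed OSD's PGs

size2 = p_loss_during_recovery(p, peers, 1)  # one more failure is fatal
size3 = p_loss_during_recovery(p, peers, 2)  # need two more failures

print(f"size=2 loss risk: {size2:.4f}")  # roughly 9% in this toy model
print(f"size=3 loss risk: {size3:.4f}")  # well under 1%
```

Even with these toy inputs, the size=2 risk is more than an order of magnitude higher, which is the point of the argument above.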

Cheers, Dan

On Wed, Feb 3, 2021, 8:49 PM Mario Giammarco  wrote:

> Thanks Simon, and thanks to the other people who have replied.
> Sorry, let me try to explain myself better.
> It is evident to me that if I have two copies of data, one breaks, and
> while Ceph is creating a new copy the disk holding the second copy also
> breaks, then I lose the data.
> That is obvious, and a bit paranoid: many servers at many customer sites
> run on RAID1, and the same argument applies there: you have two copies of
> the data, but both can break. Consider that Ceph's recovery is automatic,
> while with RAID1 someone must go to the customer and replace disks
> manually. So Ceph is already an improvement here even with size=2. With
> size=3 and min_size=2 it is a bigger improvement, I know.
>
> What I am asking is this: what happens with min_size=1 under split brain,
> network outages, or similar events? Does Ceph block writes because it has
> no quorum on the monitors? Are there failure scenarios I have not
> considered?
> Thanks again!
> Mario
>
>
>
> On Wed, Feb 3, 2021 at 5:42 PM Simon Ironside <
> sirons...@caffetine.org> wrote:
>
> > On 03/02/2021 09:24, Mario Giammarco wrote:
> > > Hello,
> > > Imagine this situation:
> > > - 3 servers with ceph
> > > - a pool with size 2 min 1
> > >
> > > I know perfectly the size 3 and min 2 is better.
> > > I would like to know what is the worst thing that can happen:
> >
> > Hi Mario,
> >
> > This thread is worth a read, it's an oldie but a goodie:
> >
> >
> >
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014846.html
> >
> > Especially this post, which helped me understand the importance of
> > min_size=2
> >
> >
> >
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014892.html
> >
> > Cheers,
> > Simon
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


[ceph-users] Re: Using RBD to pack billions of small files

2021-02-03 Thread Loïc Dachary
Hi Matt,

I did not know about pixz, thanks for the pointer. The idea it implements is 
also new to me and it looks like it can
usefully be applied to this use case. I'm not going to say "awesome" because I 
can't grasp how useful it really is
right now. But I'll definitely think about it :-)

Cheers

On 03/02/2021 22:02, Matt Wilder wrote:
> If it were me, I would do something along the lines of:
>
> - Bundle larger blocks of code into pixz
>  (essentially
> indexed tar files, allowing random access) and store them in RadosGW.
> - Build a small frontend that fetches (with caching) them and provides the
> file contents via whatever your UI is.
>
> On Wed, Feb 3, 2021 at 12:55 AM Burkhard Linke <
> burkhard.li...@computational.bio.uni-giessen.de> wrote:
>
>> Hi,
>>
>> On 2/3/21 9:41 AM, Loïc Dachary wrote:
>>>> Just my 2 cents:
>>>>
>>>> You could use the first byte of the SHA sum to identify the image, e.g.
>>>> using a fixed number of 256 images. Or some flexible approach similar to
>>>> the way filestore used to store rados objects.
>>> A friend suggested the same to save space. Good idea.
>>
>> If you want to further reduce the index size, you can just store the
>> offset, and have the first 4 or 8 bytes at that offset encode the size of
>> the following artifact. That's similar to the way Pascal used to store
>> strings in the good ol' times. You might also want to think about using
>> a complete header which also includes the artifact's name etc. This will
>> allow you to rebuild the index if it becomes corrupted. The storage
>> overhead should be insignificant.
>>
>> Your index will become a simple mapping of SHA sum -> offset, and you
>> might also be able to use optimized implementations.
>>
>>
>> Regards,
>>
>> Burkhard
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
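The scheme quoted above (pick one of 256 images via the first byte of the SHA-256, and make each record self-describing so the index can be rebuilt by a scan) can be sketched as follows. This is an illustrative toy, using in-memory buffers to stand in for RBD images; the header layout is an assumption, not an existing format:

```python
import hashlib
import io
import struct

# Toy packer: 256 "images" selected by the first byte of the artifact's
# SHA-256; each record is self-describing (8-byte length + 32-byte SHA-256
# + payload) so the index can be rebuilt by scanning an image from offset 0.
HEADER = struct.Struct(">Q32s")  # big-endian u64 length + raw sha256

images = {i: io.BytesIO() for i in range(256)}  # stand-ins for RBD images
index = {}                                      # sha256 -> (image id, offset)

def store(data: bytes) -> bytes:
    sha = hashlib.sha256(data).digest()
    img = images[sha[0]]                 # first byte picks the image
    offset = img.seek(0, io.SEEK_END)    # append-only, artifacts never change
    img.write(HEADER.pack(len(data), sha))
    img.write(data)
    index[sha] = (sha[0], offset)
    return sha

def load(sha: bytes) -> bytes:
    img_id, offset = index[sha]
    img = images[img_id]
    img.seek(offset)
    length, stored_sha = HEADER.unpack(img.read(HEADER.size))
    assert stored_sha == sha             # header lets us verify on read
    return img.read(length)

sha = store(b"print('hello, software heritage')")
assert load(sha) == b"print('hello, software heritage')"
```

Because each record carries its own length and checksum, a lost or corrupted index can be rebuilt by scanning every image from offset 0, which is the property Burkhard highlights.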

-- 
Loïc Dachary, Artisan Logiciel Libre



[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-03 Thread Simon Ironside



On 03/02/2021 19:48, Mario Giammarco wrote:
> It is obvious and a bit paranoid because many servers on many customers
> run on raid1 and so you are saying: yeah you have two copies of the data
> but you can broke both. Consider that in ceph recovery is automatic,
> with raid1 some one must manually go to the customer and change disks.
> So ceph is already an improvement in this case even with size=2. With
> size 3 and min 2 it is a bigger improvement I know.


To labour Dan's point a bit further, maybe a RAID5/6 analogy is better 
than RAID1. Yes, I know we're not talking erasure coding pools here but 
this is similar to the reasons why people moved from RAID5 (size=2, kind 
of) to RAID6 (size=3, kind of). I.e. the more disks you have in an array 
(cluster, in our case) and the bigger those disks are, the greater the 
chance you have of encountering a second problem during a recovery.


> What I ask is this: what happens with min_size=1 and split brain,
> network down or similar things: do ceph block writes because it has no
> quorum on monitors? Are there some failure scenarios that I have not
> considered?


It sounds like in your example you would have 3 physical servers in
total. So would you have both monitor and OSD processes on each server?


If so, it's not really related to min_size=1 but to answer your question 
you could lose one monitor and the cluster would continue. Losing a 
second monitor will stop your cluster until this is resolved. In your 
example setup (with colocated mons & OSDs) this would presumably also 
mean you'd have lost two OSD servers too, so you'd have bigger problems.
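The quorum rule described above is a simple majority of the configured monitors. A minimal sketch of that arithmetic (illustrative, not Ceph source code):

```python
# Monitor quorum requires a strict majority of the configured monitors.
def quorum_size(n_mons: int) -> int:
    return n_mons // 2 + 1

def has_quorum(n_mons: int, n_up: int) -> bool:
    return n_up >= quorum_size(n_mons)

# With 3 monitors: one down is fine, two down stalls the cluster.
assert has_quorum(3, 2)
assert not has_quorum(3, 1)
# With 5 monitors you can lose two and keep going.
assert has_quorum(5, 3)
```

This is why a 3-server cluster keeps running with one monitor down but blocks once a second monitor is lost, independently of the pool's min_size setting.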


HTH,
Simon


[ceph-users] mon db high iops

2021-02-03 Thread Seena Fallah
Hi all,

My monitor nodes are flapping (going up and down) because of Paxos lease
timeouts, and there are high IOPS (2k) and 500 MB/s of throughput on
/var/lib/ceph/mon/ceph.../store.db/.
My cluster is in a recovery state and there are a bunch of degraded PGs.

It seems to be doing IO on RocksDB with a ~200k block size. Is that okay?
Also, is there any way to fix these monitor downtimes?
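As a quick sanity check on the figures reported above (simple arithmetic, assuming the 500 MB/s and ~2k IOPS are concurrent on the same device):

```python
# Rough consistency check of the reported mon store.db load:
# throughput divided by IOPS gives the average IO size.
throughput_mb_s = 500
iops = 2000
avg_io_kb = throughput_mb_s * 1024 / iops  # KB per operation
print(f"average IO size: {avg_io_kb:.0f} KB")
```

That works out to 256 KB per operation, which is consistent with the ~200k block size observed, so the two numbers describe the same load rather than two separate problems.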

Thanks for your help!


[ceph-users] Re: Using RBD to pack billions of small files

2021-02-03 Thread Matt Wilder
If it were me, I would do something along the lines of:

- Bundle larger blocks of code into pixz
 (essentially
indexed tar files, allowing random access) and store them in RadosGW.
- Build a small frontend that fetches (with caching) them and provides the
file contents via whatever your UI is.

On Wed, Feb 3, 2021 at 12:55 AM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:

> Hi,
>
> On 2/3/21 9:41 AM, Loïc Dachary wrote:
> >> Just my 2 cents:
> >>
> >> You could use the first byte of the SHA sum to identify the image, e.g.
> using a fixed number of 256 images. Or some flexible approach similar to
> the way filestore used to store rados objects.
> > A friend suggested the same to save space. Good idea.
>
>
> If you want to further reduce the index size, you can just store the
> offset, and have the first 4 or 8 bytes at that offset encode the size of
> the following artifact. That's similar to the way Pascal used to store
> strings in the good ol' times. You might also want to think about using
> a complete header which also includes the artifact's name etc. This will
> allow you to rebuild the index if it becomes corrupted. The storage
> overhead should be insignificant.
>
> Your index will become a simple mapping of SHA sum -> offset, and you
> might also be able to use optimized implementations.
>
>
> Regards,
>
> Burkhard
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>



[ceph-users] Re: replace OSD without PG remapping

2021-02-03 Thread Tony Liu
Thank you Frank. "Degradation is exactly what needs to be
avoided/fixed at all cost": loud and clear, point taken!
I didn't quite get it last time. I used to think degradation
would be OK, but now I agree with you that it is not OK at all
for production storage.
Appreciate your patience!

Tony
> -Original Message-
> From: Frank Schilder 
> Sent: Tuesday, February 2, 2021 11:47 PM
> To: Tony Liu ; ceph-users@ceph.io
> Subject: Re: replace OSD without PG remapping
> 
> You asked about exactly this before:
> https://lists.ceph.io/hyperkitty/list/ceph-
> us...@ceph.io/thread/IGYCYJTAMBDDOD2AQUCJQ6VSUWIO4ELW/#ZJU3555Z5WQTJDPCT
> MPZ6LOFTIUKKQUS
> 
> It is not possible to avoid remapping, because if the PGs are not
> remapped you would have degraded redundancy. In any storage system, this
> degradation is exactly what needs to be avoided/fixed at all cost.
> 
> I don't see an issue with health status messages issued by self-healing.
> That's the whole point of Ceph: just let it do its job and don't get
> freaked out by HEALTH_WARN.
> 
> You can, however, try to keep the window of rebalancing short, and this is
> exactly what was discussed in the thread above. As is pointed
> out there as well, even this is close to pointless. Just deploy a few
> more disks than you need, let the broken ones go and be happy that ceph
> is taking care of the rest and even tells you about its progress.
> 
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> 
> 
> From: Tony Liu 
> Sent: 03 February 2021 03:10:26
> To: ceph-users@ceph.io
> Subject: [ceph-users] replace OSD without PG remapping
> 
> Hi,
> 
> There are multiple different procedures to replace an OSD.
> What I want is to replace an OSD without PG remapping.
> 
> #1
> I tried "orch osd rm --replace", which sets OSD reweight 0 and status
> "destroyed". "orch osd rm status" shows "draining".
> All PGs on this OSD are remapped. Checked "pg dump", can't find this OSD
> any more.
> 
> 1) Given [1], setting weight 0 seems better than setting reweight 0.
> Is that right? If yes, should we change the behavior of "orch osd rm --
> replace"?
> 
> 2) "ceph status" doesn't show anything about OSD draining.
> Is there any way to see the progress of draining?
> Is there actually copy happening? The PG on this OSD is remapped and
> copied to another OSD, right?
> 
> 3) When OSD is replaced, there will be remapping and backfilling.
> 
> 4) There is remapping in #2 and remapping again in #3.
> I want to avoid it.
> 
> #2
> Is there any procedure that neither marks the OSD out (reweight 0) nor
> sets its CRUSH weight to 0, keeping the PG map unchanged and only warning
> about reduced redundancy (one of the PG's 3 OSDs being down), so that when
> the OSD is replaced there is no remapping, just data backfilling?
> 
> [1] https://ceph.com/geen-categorie/difference-between-ceph-osd-
> reweight-and-ceph-osd-crush-reweight/
> 
> 
> Thanks!
> Tony
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io


[ceph-users] Re: Using RBD to pack billions of small files

2021-02-03 Thread Loïc Dachary
Hi Federico,

On 04/02/2021 05:51, Federico Lucifredi wrote:
> Hi Loïc,
>    I am intrigued, but I am missing something: why not use RGW and store the
> source code files as objects? RGW has native compression and can take care of
> that behind the scenes.
Excellent question!
>
>    Is the desire to use RBD only due to minimum allocation sizes?
I *assume* that, since RGW does not have specific strategies to take advantage
of the fact that the objects are immutable and will never be removed:

* It will be slower to add artifacts to RGW than to an RBD image + index
* The metadata in RGW will be larger than for an RBD image + index

However, I have not verified this, and if you have an opinion I'd love to hear
it :-)

Cheers
>
>    Best -F
>   
>
> -- "'Problem' is a bleak word for challenge" - Richard Fish
> _
> Federico Lucifredi
> Product Management Director, Ceph Storage Platform
> Red Hat
> A273 4F57 58C0 7FE8 838D 4F87 AEEB EC18 4A73 88AC
> redhat.com    TRIED. TESTED. TRUSTED.  
>
>
> On Sat, Jan 30, 2021 at 10:01 AM Loïc Dachary wrote:
>
> Bonjour,
>
> In the context of Software Heritage (a noble mission to preserve all source 
> code)[0], artifacts have an average size of ~3KB and there are billions of 
> them. They never change and are never deleted. To save space it would make 
> sense to write them, one after the other, into an ever-growing RBD volume 
> (more than 100TB). An index, located somewhere else, would record the offset 
> and size of each artifact in the volume.
>
> I wonder if someone already implemented this idea with success? And if 
> not... does anyone see a reason why it would be a bad idea?
>
> Cheers
>
> [0] https://docs.softwareheritage.org/ 
> 
>
> -- 
> Loïc Dachary, Artisan Logiciel Libre
>
>
>
>
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io 
> To unsubscribe send an email to ceph-users-le...@ceph.io 
> 
>

-- 
Loïc Dachary, Artisan Logiciel Libre
