[ceph-users] Convert existing folder on cephfs into subvolume

2022-06-07 Thread Stolte, Felix
Hey guys,

we have been using the Ceph filesystem since Luminous and are exporting
subdirectories via Samba as well as NFS. We upgraded to Pacific and want to use
the subvolume feature. Is it possible to convert a subdirectory into a subvolume
without moving the data?

Best regards
Felix

-------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Volker Rieke
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Dr. Astrid Lambrecht, Prof. Dr. Frauke Melchior
-------------------------------------------------------------------------------------

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Convert existing folder on cephfs into subvolume

2022-06-07 Thread Milind Changire
You could set an xattr on the dir of your choice to convert it to a
subvolume, e.g.:
# setfattr -n ceph.dir.subvolume -v 1 my/favorite/dir/is/now/a/subvol1

You can also disable the subvolume feature again by setting the xattr value to 0
(zero).
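
For example, to verify the flag and, if needed, revert it on an existing share
(the path is just a placeholder, getfattr/setfattr come from the attr package,
and reading the vxattr by name should work on reasonably recent clients):

# getfattr -n ceph.dir.subvolume /mnt/cephfs/shares/projects
# setfattr -n ceph.dir.subvolume -v 0 /mnt/cephfs/shares/projects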

But there are constraints on a subvolume dir, namely:
* you can't move or create one subvolume dir under another
* you can't hard-link a file across subvolumes
* you can't create snapshots on dirs below the subvolume dir, only at the
subvolume root
* if the subvolume xattr is set to 1 on the parent dir of a subvolume dir, then
the parent dir enforces all of these constraints



On Tue, Jun 7, 2022 at 12:39 PM Stolte, Felix 
wrote:

> Hey guys,
>
> we have been using the Ceph filesystem since Luminous and are exporting
> subdirectories via Samba as well as NFS. We upgraded to Pacific and want
> to use the subvolume feature. Is it possible to convert a subdirectory into
> a subvolume without moving the data?
>
> Best regards
> Felix
>

-- 
Milind
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-07 Thread Eugen Block

Hi,

please share the output of 'ceph osd pool autoscale-status'. You have
very low (too low) PG numbers per OSD (between 0 and 6). Did you stop
the autoscaler at an early stage? If you don't want to use the
autoscaler you should increase the pg_num, but you could also set the
autoscaler to warn mode and see what it suggests.
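
For example (the pool name and target pg_num are only illustrative; the right
pg_num depends on your OSD count and EC profile):

# put the autoscaler into warn-only mode for a pool
ceph osd pool set cephfs_data pg_autoscale_mode warn
# or raise pg_num manually
ceph osd pool set cephfs_data pg_num 256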



Quoting Christophe BAILLON :


Hi all

I am getting many errors about PGs deviating more than 30% on a newly installed cluster.

This cluster is managed by cephadm.

All 15 boxes have:
12 x 18 TB HDD
2 x NVMe
2 x SSD for boot

Our main pool is EC 6+2, for exclusive use with CephFS. The OSDs were
created with this method:
ceph orch apply -i osd_spec.yaml

with this configuration:

osd_spec.yaml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  paths:
- /dev/nvme0n1
- /dev/nvme1n1

root@store-par2-node01:/home/user# ceph -s
  cluster:
id: cf37418e-e0b9-11ec-95a5-f1f73d5801cb
health: HEALTH_OK

  services:
    mon: 5 daemons, quorum store-par2-node01,store-par2-node02,store-par2-node03,store-par2-node04,store-par2-node05 (age 5d)
    mgr: store-par2-node02.osbvrb (active, since 3d), standbys: store-par2-node01.oerpqs

mds: 1/1 daemons up, 3 standby
osd: 168 osds: 168 up (since 3d), 168 in (since 5d)

  data:
volumes: 1/1 healthy
pools:   3 pools, 65 pgs
objects: 27 objects, 14 MiB
usage:   13 TiB used, 2.7 PiB / 2.7 PiB avail
pgs: 65 active+clean

root@store-par2-node01:/home/user# ceph osd df tree
ID   CLASS  WEIGHT      REWEIGHT  SIZE     RAW USE  DATA     OMAP   META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
 -1         2763.38159         -  2.7 PiB   13 TiB  9.6 GiB  9 KiB  9.7 GiB  2.7 PiB  0.47  1.00    -          root default
 -3          197.38440         -  197 TiB  955 GiB  696 MiB  7 KiB  608 MiB  196 TiB  0.47  1.00    -          host store-par2-node01
  0    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   60 MiB   16 TiB  0.47  1.00    2      up  osd.0
  1    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   55 MiB   16 TiB  0.47  1.00    1      up  osd.1
  2    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   55 MiB   16 TiB  0.47  1.00    1      up  osd.2
  3    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   46 MiB   16 TiB  0.47  1.00    5      up  osd.3
  4    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   45 MiB   16 TiB  0.47  1.00    2      up  osd.4
  5    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   45 MiB   16 TiB  0.47  1.00    1      up  osd.5
  6    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   45 MiB   16 TiB  0.47  1.00    0      up  osd.6
  7    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   46 MiB   16 TiB  0.47  1.00    1      up  osd.7
  8    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   47 MiB   16 TiB  0.47  1.00    4      up  osd.8
  9    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   49 MiB   16 TiB  0.47  1.00    2      up  osd.9
 10    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   55 MiB   16 TiB  0.47  1.00    4      up  osd.10
 11    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   60 MiB   16 TiB  0.47  1.00    3      up  osd.11
 -7          197.38440         -  197 TiB  955 GiB  696 MiB  2 KiB  625 MiB  196 TiB  0.47  1.00    -          host store-par2-node02
 13    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    5      up  osd.13
 15    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    2      up  osd.15
 17    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    3      up  osd.17
 19    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    2      up  osd.19
 21    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    3      up  osd.21
 23    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    1      up  osd.23
 25    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    3      up  osd.25
 27    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    1      up  osd.27
 29    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    1      up  osd.29
 31    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB    0 B   54 MiB   16 TiB  0.47  1.00    4      up  osd.31
 33    hdd    16.44870   1.00000   16 TiB   80 GiB   58 MiB  1 KiB   43

[ceph-users] ceph orch: list of scheduled tasks

2022-06-07 Thread Patrick Vranckx

Hi,

When you change the configuration of your cluster with 'ceph orch apply
...' or 'ceph orch daemon ...', tasks are scheduled:


[root@cephc003 ~]# ceph orch apply mgr --placement="cephc001 cephc002 
cephc003"

Scheduled mgr update...

Is there a way to list all the pending tasks?

Regards,

Patrick

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: unknown object

2022-06-07 Thread J. Eric Ivancich
There could be a couple of things going on here. When you copy an object to a 
new bucket, it creates what’s widely known as a “shallow” copy. The head object 
gets a true copy, but all tail objects are shared between the two copies.

There can also be occasional bugs, or an object delete that is somehow
interrupted, that leave behind unneeded objects; these are referred to as
“orphans”.

You didn’t specify which version you’re running, but most recent versions of 
ceph come with a tool called `rgw-orphan-list`. It’s a shell script, so if 
you’re familiar with scripting you can examine how it works. It’s designed to 
list the orphans but not the shared tail objects. Many ceph users use that tool 
to list the orphans and then delete those directly from the rados pool, 
although please realize it’s still considered “experimental”.
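
As a rough sketch of that workflow (the pool name is an example, the output
file name is whatever the script reports, and you should sanity-check the list
before deleting anything):

rgw-orphan-list default.rgw.buckets.data
# review the file the script produces, then remove the listed objects, e.g.:
while read obj; do rados -p default.rgw.buckets.data rm "$obj"; done < orphan-list.out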

Eric
(he/him)

> On Jun 6, 2022, at 7:14 AM, farhad kh  wrote:
> 
> I deleted all objects in my bucket but the used capacity is not zero.
> When I list objects in the pool with `rados -p default.rgw.buckets.data ls` it
> shows me a lot of objects:
> 
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/16Q91ZUY34EAW9TH.2~zOHhukByW0DKgDIIihOEhtxtW85FO5m.74_1
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/PRZEDHF9NSTRGG9G.2~-3Kdywfa6qNjy0j8JaKF8XbwR2e7HPQ.17_1
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/YZN8L9MDGZRTAO3F.2~ygKOynlKPsHC23k53N3MtsybuIJgpZa.92_1
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__multipart_1/A9JR4TZHBU5EITOV.2~3i2aUR5RIVEHlnZuyAVLf_eSzlziTtq.99
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__multipart_1/YZN8L9MDGZRTAO3F.2~ygKOynlKPsHC23k53N3MtsybuIJgpZa.58
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/ODVRPVIRSCIQBKRD.2~4ipoaspJ-8RdWU8R6GC9DT4cOOdwBGl.80_1
> 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/1IG3IWQUTAKWW6MI.2~MuhYMb1HsKBU73ZOC7Xpb7ZBHQ_1qrK.41_1
> 
> How are these objects created? And how can I delete them even though they are
> not in the bucket listing?

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: not so empty bucket

2022-06-07 Thread J. Eric Ivancich
You’ve provided convincing evidence that the bucket index is not correctly 
reflecting the data objects. So the next step would be to remove the bucket 
index entries for these 39 objects.

It looks like you’ve already mapped which entries go to which bucket index 
shards (or you could redo your commands to get that info). So you could use 
`rados rmomapkey…` to get rid of those. Sometimes bucket index entries have 
non-printable characters in their keys, which makes it challenging to provide 
the key on the command-line. In such cases you can instead put the key in a 
file and then use the "--omap-key-file” command-line option to refer to that 
file.
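
As a sketch (the index pool, the .dir.<marker>.<shard> object name, and the key
are placeholders; your earlier listomapkeys run gives you the real values):

rados -p default.rgw.buckets.index listomapkeys .dir.<marker>.<shard>
rados -p default.rgw.buckets.index rmomapkey .dir.<marker>.<shard> '<stale key>'
# or, for keys with non-printable characters, put the raw key in a file:
rados -p default.rgw.buckets.index rmomapkey .dir.<marker>.<shard> --omap-key-file key.bin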

I realize this is a pain and I’m sorry that you have to go through this. But 
once you remove those bucket index entries you should be all set.

Eric
(he/him)

> On May 9, 2022, at 4:20 PM, Christopher Durham  wrote:
> 
> 
> I am using Pacific 16.2.7 on Rocky Linux 8.5.
> I have a once heavily used radosgw bucket that is now empty. Let's call it
> 'oldbucket'. awscli now shows that there are no objects in the bucket.
> 
> However, radosgw-admin bucket stats --bucket oldbucket shows num_objects in
> rgw.main as 39 objects, with about 200 gigs used in the size field.
> 
> radosgw-admin bucket check --bucket oldbucket indeed shows 39 objects in a
> list.
> Each of these objects is of the form:
> _multipart_originalobjectname.someoriginalid.NN
> radosgw-admin bucket radoslist --bucket oldbucket shows only 9 objects, but
> those objects are all included as part of the bucket check command above.
> This particular bucket did have a lot of multipart uploads. All multipart
> uploads have been aborted with awscli. And of course awscli shows no objects
> in the bucket.
> 
> radosgw-admin bucket list --bucket oldbucket shows all 39 objects, which is
> weird in that I see object names which once were multipart object parts.
> 
> None of the 39 objects exist in the pool (rados -p $pool get $obj /tmp/$obj
> all return 1).
> If I list all the index objects for this bucket in the index pool and then do
> a listomapkeys for each of the index objects, I see only the 39 omap keys.
> So my question is, what do I need to do to fix this bucket (without deleting
> and recreating it)? Would just doing a rmomapkey on each of the omap keys in
> the listomapkeys output solve my problem, and reclaim the 200 gigs?
> Do I have to rebuild the index somehow (--fix in bucket check above did
> nothing)?
> 
> Thanks for any thoughts/insight.
> -Chris
> 
> 
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Many errors about PG deviate more than 30% on a new cluster deployed by cephadm

2022-06-07 Thread Christophe BAILLON
Hello,

Thanks for your reply.

No, I did not stop the autoscaler.

root@store-par2-node01:/home/user# ceph osd pool autoscale-status
POOL             SIZE    TARGET SIZE  RATE            RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr             15488k               3.0             2763T         0.0000                                 1.0        1              on         False
cephfs_data      0                    1.333730697632  2763T         0.0000                                 1.0       32              on         False
cephfs_metadata  5742                 3.0             2763T         0.0000                                 4.0       32              on         False

I am not in production yet, so I can destroy the pool and recreate it, or
destroy the cluster and rebuild it.


----- Original Message -----
> From: "Eugen Block" 
> To: "ceph-users" 
> Sent: Tuesday, 7 June 2022 15:00:39
> Subject: [ceph-users] Re: Many errors about PG deviate more than 30% on a new
> cluster deployed by cephadm

> Hi,
> 
> please share the output of 'ceph osd pool autoscale-status'. You have
> very low (too low) PG numbers per OSD (between 0 and 6). Did you stop
> the autoscaler at an early stage? If you don't want to use the
> autoscaler you should increase the pg_num, but you could also set the
> autoscaler to warn mode and see what it suggests.

[ceph-users] Re: ceph orch: list of scheduled tasks

2022-06-07 Thread Adam King
For most of them there isn't currently. Part of the issue is that the tasks
don't ever necessarily end. If you apply a mgr spec, cephadm will
periodically check the spec against what it sees (e.g. where mgr daemons
are currently located vs. where the spec says they should be) and make
corrections if necessary (such as deploying a new mgr daemon so that
reality matches the spec). So, storing a proper progress item for the mgr
update becomes difficult. In a sense, the specs shown in "ceph orch ls" are
perpetually pending tasks, due to the declarative way in which they
operate. There is a bit more on what I'm saying here, if you're interested:
https://docs.ceph.com/en/quincy/cephadm/services/#algorithm-description
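
The closest approximations I can offer are inspecting the applied specs and
following cephadm's activity log while it reconciles them (not a real pending
task list):

ceph orch ls
ceph -W cephadm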

Thanks,
  - Adam King

On Tue, Jun 7, 2022 at 9:34 AM Patrick Vranckx 
wrote:

> Hi,
>
> When you change the configuration of your cluster with 'ceph orch apply
> ...' or 'ceph orch daemon ...', tasks are scheduled:
>
> [root@cephc003 ~]# ceph orch apply mgr --placement="cephc001 cephc002
> cephc003"
> Scheduled mgr update...
>
> Is there a way to list all the pending tasks?
>
> Regards,
>
> Patrick
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] 270.98 GB was requested for block_db_size, but only 270.98 GB can be fulfilled

2022-06-07 Thread Torkil Svensgaard

Hi

We are converting unmanaged OSDs from db/wal on SSD to managed OSDs with 
db/wal on NVMe. The boxes had 20 HDDs and 4 SSDs and will be changed to 
22 HDDs, 2 SSDs and 2 NVMes, with 11 db/wal partitions on each NVMe for 
the HDDs. The old SSDs will be used for a flash pool.


We calculated the block_db_size for the OSDs with db/wal on NVMe as
total device bytes / 11, rounded down, expecting that to fit:

3200631791616 / 11 = 290,966,526,510.5455
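
Spelled out with shell arithmetic (the reading that the usable VG capacity ends
up slightly below the raw device size, so a request derived from the raw size
cannot be met 11 times, is our assumption here rather than something verified):

echo $((3200631791616 / 11))   # 290966526510 bytes requested per DB LV
echo $((11 * 290966526510))    # 3200631791610, only 6 bytes below the raw device size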

Service spec:

"
service_type: osd
service_id: slow
service_name: osd.slow
placement:
  hosts:
  - doc
  - dopey
  - happy
  - klutzy
  - lazy
  - sneezy
  - smiley
spec:
  block_db_size: 290966526510
  data_devices:
rotational: 1
  db_devices:
rotational: 0
size: '1000G:'
  filter_logic: AND
  objectstore: bluestore
"

However, the orchestrator/ceph-volume will only fit 10:

"
# lsblk -b /dev/nvme0n1
NAME                                                                                                MAJ:MIN RM           SIZE RO TYPE MOUNTPOINT
nvme0n1                                                                                             259:0    0  3200631791616  0 disk
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--298670ae--d218--4af9--8c61--04c93104190c  253:21   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--9c7f60d0--4757--402a--a66f--3a2e38a3e172  253:22   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--a30d86bd--dc69--44c9--9a95--893e3c55787f  253:23   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--7ade6abc--691b--4bb2--a969--5491f4b31eb6  253:24   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--66325755--6082--421d--9c33--9d13d758709d  253:26   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--9ca23f86--ceba--4f91--b565--2bcdd4c66352  253:28   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--1e7b4909--a44b--49e8--9891--31800c0df3ed  253:35   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--e1701c65--8eec--4311--a274--2bbca17c1ac1  253:42   0   290963062784  0 lvm
├─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--4af28722--4ba1--400e--aee5--28af3f2f80b5  253:45   0   290963062784  0 lvm
└─ceph--20144705--e65d--4143--b917--c0469e54863c-osd--db--51ee229a--edb9--4c2a--8469--0cc006e9f45d  253:47   0   290963062784  0 lvm
"

"
/usr/bin/podman: stderr --> passed data devices: 21 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr --> passed block_db devices: 2 physical, 0 LVM
/usr/bin/podman: stderr --> 270.98 GB was requested for block_db_size, but only 270.98 GB can be fulfilled
/usr/bin/podman: stderr time="2022-06-08T07:39:45+02:00" level=warning msg="Container a96a3429fdf4487b738f6fb96534ff01697df019dd4893cd07cfc6361ccef26f: poststop hook 0: executing []: exit status 1"

Traceback (most recent call last):
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 8826, in <module>
    main()
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 8814, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 1889, in _infer_config
    return func(ctx)
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 1830, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 1917, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 1817, in _validate_fsid
    return func(ctx)
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 5077, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd())
  File "/var/lib/ceph/8ee2d228-ed21-4580-8bbf-0649f229e21d/cephadm.29a5c075eabb1a183db073de0514a72a3722c1b95ce759660d20c1b077d27de0", line 1619, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:7dc93a9627bf75b2fbfdde6b93d886d41f2f25f2026136e9a93d92de8c8913b9 -e NODE_NAME=sneezy -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=slow -e

[ceph-users] rbd deep copy in Luminous

2022-06-07 Thread Pardhiv Karri
Hi,

We are currently on Ceph Luminous (12.2.11). I don't see the "rbd
deep cp" command in this version. Was it introduced in a later release,
Mimic or newer? If so, which one, and is there a way to get it in Luminous?

Thanks,
Pardhiv
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io