[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread luuvuong91
Dear list, on the compute node I have updated Ceph to version luminous 12.2.13, but that did not fix it. Please give me detailed steps to fix it. Thanks.

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Anthony D'Atri
Why not use rbd-mirror to handle the volumes? > On May 13, 2020, at 11:27 PM, Kees Meijs wrote: > > Hi Konstantin, > > Thank you very much. That's a good question. > > The implementations of OpenStack and Ceph and "the other" OpenStack and > Ceph are, apart from networking, completely separate

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-05-14 Thread Brad Hubbard
On Wed, May 13, 2020 at 6:00 PM Lomayani S. Laizer wrote: > > Hello, > > Below is full debug log of 2 minutes before crash of virtual machine. > Download from below url > > https://storage.habari.co.tz/index.php/s/31eCwZbOoRTMpcU This log has rbd debug output, but not rados :( I guess you'll ne
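
A typical way to capture librados output alongside librbd is to raise the client debug levels in ceph.conf on the hypervisor; this is only a sketch, and the log path is just an example:

  [client]
      debug rbd = 20
      debug rados = 20
      log file = /var/log/ceph/qemu-guest-$pid.log   # directory must be writable by the qemu/libvirt user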

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Kees Meijs
Hi Anthony, Thanks as well. Well, it's a one-time job. K. On 14-05-2020 09:10, Anthony D'Atri wrote: > Why not use rbd-mirror to handle the volumes?

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Anthony D'Atri
So? > > Hi Anthony, > > Thanks as well. > > Well, it's a one-time job. > > K. > > On 14-05-2020 09:10, Anthony D'Atri wrote: >> Why not use rbd-mirror to handle the volumes?

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread Eugen Block
Can you share what you have tried so far? It's unclear at which point it's failing, so I'd suggest stopping the instances, restarting nova-compute.service and then starting the instances again. Quoting luuvuon...@gmail.com: Dear list, on the compute node I have updated Ceph to version luminous 12.2.13, but that did not
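
For reference, the suggested sequence on the compute node would look roughly like this (instance IDs are placeholders, and the nova-compute unit name can differ per distribution):

  openstack server stop <instance-id>
  systemctl restart nova-compute.service
  openstack server start <instance-id>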

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Kees Meijs
I need to mirror single RBDs, while for rbd-mirror "mirroring is configured on a per-pool basis" (according to the documentation). On 14-05-2020 09:13, Anthony D'Atri wrote: > So?

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Eugen Block
You can also mirror on a per-image basis. Quoting Kees Meijs: I need to mirror single RBDs, while for rbd-mirror "mirroring is configured on a per-pool basis" (according to the documentation). On 14-05-2020 09:13, Anthony D'Atri wrote: So?

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread Zhenshi Zhou
What do the commands "ceph osd dump | grep min_compat_client" and "ceph features" output? Eugen Block wrote on Thu, 14 May 2020 at 15:17: > Can you share what you have tried so far? It's unclear at which point > it's failing, so I'd suggest stopping the instances, restarting > nova-compute.service and then starting the inst

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Anthony D'Atri
It’s entirely possible — and documented — to mirror individual images. Your proposal to use snapshots is reinventing the wheel, but with less efficiency. https://docs.ceph.com/docs/nautilus/rbd/rbd-mirroring/#image-configuration ISTR that in Octopus the need for RBD journals is gone, but am no
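
From the linked docs, the per-image workflow is roughly the following sketch (pool and image names are placeholders; pre-Octopus, journal-based mirroring needs the journaling feature on the image):

  rbd mirror pool enable <pool> image
  rbd feature enable <pool>/<image> journaling
  rbd mirror image enable <pool>/<image>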

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Konstantin Shalygin
On 5/14/20 1:27 PM, Kees Meijs wrote: Thank you very much. That's a good question. The implementations of OpenStack and Ceph and "the other" OpenStack and Ceph are, apart from networking, completely separate. Actually I was thinking you would perform an OpenStack and Ceph upgrade, not a migration to oth

[ceph-users] Re: What is a pgmap?

2020-05-14 Thread Janne Johansson
On Wed, 13 May 2020 at 22:37, Bryan Henderson wrote: > I'm surprised I couldn't find this explained anywhere (I did look), but ... > What is the pgmap and why does it get updated every few seconds on a tiny > cluster that's mostly idle? > > I was sure it was updated exactly once per second. > I

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Zhenshi Zhou
rbd-mirror can work on a single image in the pool, and I did a test copying an image from 13.2 to 14.2. However, new data in the source image didn't get copied to the destination image. I'm not sure if this is normal. Kees Meijs wrote on Thu, 14 May 2020 at 15:24: > I need to mirror single RBDs while rbd-mirror:

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread luuvuong91
Hi, output of the command above: root@ceph07:~# ceph osd dump | grep min_compat_client require_min_compat_client luminous min_compat_client luminous root@ceph07:~# I tried to reduce it to jewel, but without success.
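
If going back to jewel is really wanted: the subcommand has an override for the connected-client check, but as far as I know it still refuses while a luminous-only feature such as pg-upmap is in use, so this is only something to try, not a guarantee:

  ceph osd dump | grep upmap                                             # any pg_upmap entries pin you to luminous
  ceph osd set-require-min-compat-client jewel --yes-i-really-mean-it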

[ceph-users] Re: Memory usage of OSD

2020-05-14 Thread Janne Johansson
On Thu, 14 May 2020 at 03:52, Amudhan P wrote: > For Ceph releases before Nautilus, osd_memory_target changes need > an OSD service restart to take effect. > I had a similar issue in Mimic. I did the same in my test setup. > Before restarting the OSD service, ensure you set osd nodown and osd noout > similar
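
The restart-with-flags part, as a sketch (the OSD id and the memory value are just examples):

  ceph osd set noout
  ceph osd set nodown
  # with e.g. osd_memory_target = 2147483648 already placed in the [osd] section of ceph.conf
  systemctl restart ceph-osd@<id>
  ceph osd unset nodown
  ceph osd unset noout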

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread luuvuong91
Hi, I already rebooted the servers of my cluster and rebooted the compute node, but that did not fix it. I updated the kernel on one compute node to 4.4, but that did not fix it. I ran the command ceph osd crush tunables legacy, but that did not fix it.

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread luuvuong91
Hi, output of the ceph command: root@ceph05:~# ceph features { "mon": { "group": { "features": "0x3ffddff8eeacfffb", "release": "luminous", "num": 4 } }, "osd": { "group": { "features": "0x3ffddff8eeacfffb",

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread Janne Johansson
On Thu, 14 May 2020 at 08:42, lin yunfan wrote: > Besides the recovery scenario, in a write-only scenario the cluster > network will use almost the same bandwidth as the public network. > That would depend on the replication factor. If it is high, I would assume every MB from the client networ
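
As a worked example of that point, assuming size=3 replication and ignoring recovery traffic:

  client writes over the public network:   1 GB/s (client -> primary OSD)
  replica traffic on the cluster network:  (3 - 1) x 1 GB/s = 2 GB/s (primary -> two replicas)

so the cluster network carries roughly twice the client write rate.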

[ceph-users] Re: Memory usage of OSD

2020-05-14 Thread Rafał Wądołowski
Mark, good news! Adam, if you need some more information or debugging, feel free to contact me on IRC: xelexin. I can confirm that this issue exists in luminous (12.2.12). Regards, Rafał Wądołowski CloudFerro sp. z o.o. ul. Fabryczna 5A 00-446 Warszawa www.cloudferro.com

[ceph-users] Re: OSD weight on Luminous

2020-05-14 Thread jesper
Unless you have enabled some balancing, this is very normal (actually a pretty good normal). Jesper Sent from myMail for iOS Thursday, 14 May 2020, 09.35 +0200 from Florent B.: >Hi, > >I have something strange on a Ceph Luminous cluster. > >All OSDs have the same size, the same weight,
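
For completeness, enabling balancing on Luminous (the mgr balancer module; upmap mode additionally requires require-min-compat-client to be luminous) looks roughly like this:

  ceph mgr module enable balancer
  ceph balancer mode upmap      # or crush-compat for older clients
  ceph balancer on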

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread lin yunfan
That is correct, I didn't explain it clearly. I said that because in some write-only scenarios the public network and the cluster network will both be saturated at the same time. linyunfan Janne Johansson wrote on Thu, 14 May 2020 at 15:42: > > On Thu, 14 May 2020 at 08:42, lin yunfan wrote: >> >> Besides the rec

[ceph-users] Re: ACL for user in another tenant

2020-05-14 Thread Vishwas Bm
Hi Pritha, Thanks for the reply. Please find the user list, bucket list and also the command which I have used. [root@vishwas-test cluster]# radosgw-admin user list [ "tenant2$Jerry", "tenant1$Tom" ] [root@vishwas-test cluster]# radosgw-admin bucket list [ "tenant2/jerry-bucket" ] [

[ceph-users] Re: ACL for user in another tenant

2020-05-14 Thread Vishwas Bm
When I also tried the following, a similar error occurs: [root@vishwas-test cluster]# s3cmd --access_key=GY40PHWVK40A2G4XQH2D --secret_key=bKq36rs5t1nZEL3MedAtDY3JCfBoOs1DEou0xfOk ls s3://tenant2/jerry-bucket ERROR: Bucket 'tenant2' does not exist ERROR: S3 error: 404 (NoSuchBucket) [root@vishwas-te
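
If I remember the RGW multitenancy convention correctly, a bucket belonging to another tenant is addressed with a colon rather than a slash in the S3 API, so something like the following might be worth a try (keys elided):

  s3cmd --access_key=... --secret_key=... ls s3://tenant2:jerry-bucket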

[ceph-users] Remove or recreate damaged PG in erasure coding pool

2020-05-14 Thread Francois Legrand
Hello, We run a Nautilus 14.2.8 Ceph cluster. After a big crash in which we lost some disks, we had a PG down (erasure coding 3+2 pool), and trying to fix it we followed this: https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1 As the

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread Amudhan P
Will EC-based writes benefit from separate public and cluster networks? On Thu, May 14, 2020 at 1:39 PM lin yunfan wrote: > That is correct, I didn't explain it clearly. I said that because in > some write-only scenarios the public network and the cluster network will > both be saturated at the same tim

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread luuvuong91
Hi

[ceph-users] Re: Migrating clusters (and versions)

2020-05-14 Thread Kees Meijs
Thanks all, I'm going to investigate rbd-mirror further. K. On 14-05-2020 09:30, Anthony D'Atri wrote: > It’s entirely possible — and documented — to mirror individual images. Your > proposal to use snapshots is reinventing the wheel, but with less efficiency. > > https://docs.ceph.com/docs/nau

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread Zhenshi Zhou
The doc says "This subcommand will fail if any connected daemon or client is not compatible with the features offered by the given ". The command should succeed if the client is disconnected, I guess. On Thu, 14 May 2020 at 16:50, the previous poster wrote: > Hi

[ceph-users] Re: OSDs taking too much memory, for pglog

2020-05-14 Thread Wout van Heeswijk
Hi Harald, Your cluster has a lot of objects per OSD/PG and the pg logs will grow fast and large because of this. The pg_logs will keep growing as long as your cluster's PGs are not active+clean. This means you are now in a loop where you cannot get stably running OSDs because the pg_logs tak
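
For context, these are the knobs usually mentioned for this situation (not necessarily what Wout goes on to recommend in the truncated part, and the values are illustrative only):

  # ceph.conf on the OSD nodes
  [osd]
      osd_min_pg_log_entries = 500
      osd_max_pg_log_entries = 500
  # for OSDs that no longer start, an offline trim with the OSD stopped:
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --op trim-pg-log --pgid <pgid>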

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread Janne Johansson
On Thu, 14 May 2020 at 10:46, Amudhan P wrote: > Will EC-based writes benefit from separate public and cluster networks? > I guess this depends on what parameters you use. All in all I think using one network is probably better, and in the cases where I have seen missing heartbeats, it's not the netw

[ceph-users] why don't ceph daemons output their logs to /var/log/ceph

2020-05-14 Thread 展荣臻(信泰)
Hi all, when Ceph runs in a container, why don't the Ceph daemons output their logs to /var/log/ceph? I built the Ceph image with ceph-container and deployed Ceph via ceph-ansible. I found no logs under /var/log/ceph. Why don't the Ceph daemons write their logs to /var/log/ceph?
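
For what it's worth, containerized daemons usually log to stderr/journald by default, and file logging has to be switched back on explicitly; a sketch assuming a recent release and that /var/log/ceph is bind-mounted into the container:

  journalctl -u <ceph daemon unit>          # where the logs end up by default
  ceph config set global log_to_file true
  ceph config set global log_file /var/log/ceph/$cluster-$name.log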

[ceph-users] Re: all VMs on the OpenStack compute node fail to connect to this Ceph cluster after running ceph osd set-require-min-compat-client luminous

2020-05-14 Thread Zhenshi Zhou
I tested this command on a Mimic cluster. I can set the option to luminous and back to jewel as well. I think the root cause you cannot set it back is that some client may have set a flag which conflicts with jewel when you set that option to luminous, so you are not permitted to set it to jewel again. I think yo

[ceph-users] Re: iscsi issues with ceph (Nautilus) + tcmu-runner

2020-05-14 Thread Phil Regnauld
Mike Christie (mchristi) writes: > > I've never seen this kernel crash before. It might be helpful to send > more of the log before the kernel warning below. These are the messages leading up to the warning (pretty much the same, with the occasional notice about an ongoing deep s

[ceph-users] Re: Memory usage of OSD

2020-05-14 Thread Igor Fedotov
Rafal, just to mention - the stupid allocator is known to cause high memory usage in certain scenarios, but it uses the bluestore_alloc mempool. Thanks, Igor On 5/13/2020 6:52 PM, Rafał Wądołowski wrote: Mark, Unfortunately I closed the terminal with the mempool output. But there was a lot of bytes used by blues
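
For reference, the per-OSD mempool breakdown (including bluestore_alloc) can be pulled from the admin socket on the OSD's host:

  ceph daemon osd.<id> dump_mempools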

[ceph-users] ceph orch ps => osd (Octopus 15.2.1)

2020-05-14 Thread Ml Ml
Hello, any idea what's wrong with my osd.34+35? root@ceph01:~# ceph orch ps NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID (...) osd.34 ceph04 running - - osd.35 ceph04 running
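
A few things that may be worth checking here; the flags are as I remember them in Octopus, so treat this as a sketch:

  ceph orch ps --refresh             # ask the mgr to refresh its daemon inventory
  cephadm ls                         # run on ceph04, shows what cephadm knows about osd.34/35
  ceph orch daemon restart osd.34    # if the daemon is just reporting stale metadata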

[ceph-users] Re: RGW and the orphans

2020-05-14 Thread EDH - Manuel Rios
Hi Eric, any update on that? Cluster status is critical and there's no simple tool or CLI provided in current releases that helps to keep our S3 clusters healthy. Right now, with the multipart/sharding bugs, it looks like a bunch of scrap. Regards Manuel -Original Message- From: E

[ceph-users] Bucket - radosgw-admin reshard process

2020-05-14 Thread CUZA Frédéric
Hi everyone, I am facing an issue with bucket resharding. It started with a warning about my Ceph cluster health: [root@ceph_monitor01 ~]# ceph -s cluster: id: 2da0734-2521-1p7r-8b4c-4a265219e807 health: HEALTH_WARN 1 large omap objects Turns out I had a problem with a buc
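
For anyone hitting the same warning, the usual resharding workflow looks roughly like this (bucket name and shard count are placeholders; dynamic resharding may already handle it depending on release and configuration):

  radosgw-admin bucket limit check
  radosgw-admin reshard add --bucket=<bucket> --num-shards=<n>
  radosgw-admin reshard list
  radosgw-admin reshard process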

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-05-14 Thread Jason Dillaman
On Thu, May 14, 2020 at 3:12 AM Brad Hubbard wrote: > On Wed, May 13, 2020 at 6:00 PM Lomayani S. Laizer > wrote: > > > > Hello, > > > > Below is full debug log of 2 minutes before crash of virtual machine. > Download from below url > > > > https://storage.habari.co.tz/index.php/s/31eCwZbOoRTMpc

[ceph-users] Using rbd-mirror in existing pools

2020-05-14 Thread Kees Meijs | Nefos
Hi list, Thanks again for pointing me towards rbd-mirror! I've read documentation, old mailing list posts, blog posts and some additional guides. Seems like the tool to help me through my data migration. Given one-way synchronisation and image-based (so, not pool based) configuration, it's still

[ceph-users] Re: What is a pgmap?

2020-05-14 Thread Frank Schilder
Hi, I also observe an increase in pgmap version every second or so, see snippet below. I run mimic 13.2.8 without any PG scaling/upmapping. Why does the version increase so often? May 14 12:33:50 ceph-03 journal: cluster 2020-05-14 12:33:48.521546 mgr.ceph-02 mgr.27460080 192.168.32.66:0/63 114

[ceph-users] Re: What is a pgmap?

2020-05-14 Thread Frank Schilder
Unfortunately, my e-mail client does not collect threads properly. I think I got my answer. From Janne Johansson: > Since using computer time and date is fraught with peril, having the whole > cluster just bump that single number every second (and writing it to the PG > on each write) would allow a

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Zhenshi Zhou
In my experience, rbd-mirror only copies images with the journaling feature from clusterA to clusterB. It doesn't influence the other images in the pool on clusterB. You'd better test it yourself, though. Kees Meijs | Nefos wrote on Thu, 14 May 2020 at 22:22: > Hi list, > > Thanks again for pointing me to

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Eugen Block
The pool names in both clusters have to be identical in addition to the required journal feature. It’s probably an advantage if the existing pool in the second cluster has a different name. In that case you can set up the mirror for a new pool without affecting the other pool and after mirr

[ceph-users] ceph-ansible replicated crush rule

2020-05-14 Thread Marc Boisis
Hello, With ceph-ansible the default replicated crush rule is: { "rule_id": 0, "rule_name": "replicated_rule", "ruleset": 0, "type": 1, "min_size": 1, "max_size": 10, "steps": [ { "op": "take", "item": -1,

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Anthony D'Atri
When you set up the rbd-mirror daemons with each others’ configs, and initiate mirroring of a volume, the destination will create the volume in the destination cluster and pull over data. Hopefully you’re creating unique volume names so there won’t be conflicts, but that said if the destinati

[ceph-users] Re: Ceph meltdown, need help

2020-05-14 Thread Frank Schilder
Dear Marc, thank you for your endurance. I had another slightly different "meltdown", this time throwing the MGRs out and I adjusted yet another beacon grace time. Fortunately, after your communication, I didn't need to look very long. To harden our cluster a bit further, I would like to adjust
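
For the MGR side, the grace that usually matters is mon_mgr_beacon_grace (default 30 seconds); a sketch, with an illustrative value:

  ceph config set mon mon_mgr_beacon_grace 90
  ceph config dump | grep beacon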

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread Anthony D'Atri
What is a saturated network with modern switched technologies? Links to individual hosts? Uplinks from TORS (public)? Switch backplane (cluster)? > That is correct.I didn't explain it clearly. I said that is because in > some write only scenario the public network and cluster network will >

[ceph-users] Re: ACL for user in another tenant

2020-05-14 Thread Pritha Srivastava
Hi Vishwas, In the following bucket policy: Policy:{ "Version": "2012-10-17", "Statement": [ { "Principal": {"AWS": ["arn:aws:iam::tenant1:user/Tom"]}, "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": "s3://tenant2/jerry-bucket" } ] } 'Resource'

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Kees Meijs | Nefos
Hi Anthony, A one-way mirror suits my case fine (the old cluster will be dismantled in the meantime), so I guess a single rbd-mirror daemon should suffice. The pool consists of OpenStack Cinder volumes named with a UUID (i.e. volume-ca69183a-9601-11ea-8e82-63973ea94e82 and such). The change of con
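
For a one-way setup the rbd-mirror daemon runs only on the destination cluster and pulls from the source; very roughly, and with cluster, pool and client names as placeholders:

  # on the destination, which has both clusters' conf files and keyrings available
  rbd mirror pool enable <pool> image --cluster <destination>
  rbd mirror pool enable <pool> image --cluster <source>
  rbd mirror pool peer add <pool> client.<user>@<source> --cluster <destination>
  systemctl enable --now ceph-rbd-mirror@<client-id>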

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Jason Dillaman
On Thu, May 14, 2020 at 12:47 PM Kees Meijs | Nefos wrote: > Hi Anthony, > > A one-way mirror suits fine in my case (the old cluster will be > dismantled in mean time) so I guess a single rbd-mirror daemon should > suffice. > > The pool consists of OpenStack Cinder volumes containing a UUID (i.e.

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Kees Meijs | Nefos
Thanks for clearing that up, Jason. K. On 14-05-2020 20:11, Jason Dillaman wrote: > rbd-mirror can only remove images that (1) have mirroring enabled and > (2) are not split-brained with its peer. It's totally fine to only > mirror a subset of images within a pool and it's fine to only mirror > o

[ceph-users] Re: Using rbd-mirror in existing pools

2020-05-14 Thread Anthony D'Atri
Understandable concern. FWIW I’ve used rbd-mirror to move thousands of volumes between clusters with zero clobbers. —aad > On May 14, 2020, at 9:46 AM, Kees Meijs | Nefos wrote: > > My main concern is pulling images into a non-empty pool. It would be > (very) bad if rbd-mirror tries to be sm

[ceph-users] stale+active+clean PG

2020-05-14 Thread tomislav . raseta
Dear all, We're running Ceph Luminous and we've recently hit an issue with some OSDs (auto-out states, IO/CPU overload), which unfortunately resulted in one placement group with the state "stale+active+clean"; it's a placement group from the .rgw.root pool: 1.15 0 0
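
Some diagnostics that usually help narrow a stale PG down (1.15 taken from the listing above):

  ceph health detail | grep 1.15
  ceph pg map 1.15          # which OSDs the monitors think should be serving it
  ceph pg 1.15 query        # may hang if no running OSD actually has the PG, which is itself informative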

[ceph-users] Re: ACL for user in another tenant

2020-05-14 Thread Vishwas Bm
Hi Pritha, Thanks for the response. Yes, with the boto package I was able to access the bucket content. *Thanks & Regards,* *Vishwas * On Thu, May 14, 2020 at 9:32 PM Pritha Srivastava wrote: > Hi Vishwas, > > In the following bucket policy: > Policy:{ > "Version": "2012-10-17", > "State