[ceph-users] pg stuck in unknown state

2020-08-10 Thread Michael Thomas
On my relatively new Octopus cluster, I have one PG that has been perpetually stuck in the 'unknown' state. It appears to belong to the device_health_metrics pool, which was created automatically by the mgr daemon(?). The OSDs that the PG maps to are all online and serving other PGs. But wh…
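
A minimal diagnostic sketch for a PG stuck 'unknown' (the PG id 1.0 below is a placeholder; take the real id from ceph health detail):

    # list PGs stuck inactive ('unknown' PGs are inactive)
    ceph pg dump_stuck inactive
    # show which OSDs the PG currently maps to
    ceph pg map 1.0
    # ask the primary OSD for the PG's full state
    ceph pg 1.0 query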

[ceph-users] ceph orch host rm seems to just move daemons out of cephadm, not remove them

2020-08-10 Thread pixel fairy
Made a cluster of 2 OSD hosts and one temp monitor, then added another OSD host and did a "ceph orch host rm tempmon". This was all in Vagrant (libvirt), with the generic/ubuntu2004 box. INFO:cephadm:Inferring fsid 5426a59e-db33-11ea-8441-b913b695959d INFO:cephadm:Using recent ceph image ceph/ceph:v1…
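
If the old daemon keeps running after the host is removed from orchestration, cephadm can remove it on the host itself. A sketch, assuming the leftover daemon is the monitor on host tempmon (the daemon name mon.tempmon is an assumption) and the fsid from the log above:

    # list what cephadm still manages
    ceph orch ps
    # on the old host, remove the now-unmanaged daemon
    cephadm rm-daemon --fsid 5426a59e-db33-11ea-8441-b913b695959d \
        --name mon.tempmon --force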

[ceph-users] Deleterious effects of OSD queue shards

2020-08-10 Thread João Victor Mafra
Hi folks, Regarding OSD operation queues (shards), I found this in the documentation (https://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/): "A lower number of shards will increase the impact of the mClock queues, but may have other deleterious effects". What are these delete…
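
For reference, a sketch of inspecting and changing the shard count (the new value 4 is illustrative; the change only takes effect after an OSD restart):

    # current effective values on one OSD
    ceph config show osd.0 osd_op_num_shards
    ceph config show osd.0 osd_op_queue
    # lower the shard count cluster-wide
    ceph config set osd osd_op_num_shards 4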

[ceph-users] Re: EntityAddress format in ceph osd blacklist commands

2020-08-10 Thread Void Star Nill
Thanks Ilya for the clarification. -Shridhar On Mon, 10 Aug 2020 at 10:00, Ilya Dryomov wrote: > On Mon, Aug 10, 2020 at 6:14 PM Void Star Nill > wrote: > > > > Thanks Ilya. > > > > I assume :0/0 indicates all clients on a given host? > > No, a blacklist entry always affects a single client in…

[ceph-users] Re: EntityAddress format in ceph osd blacklist commands

2020-08-10 Thread Ilya Dryomov
On Mon, Aug 10, 2020 at 6:14 PM Void Star Nill wrote: > > Thanks Ilya. > > I assume :0/0 indicates all clients on a given host? No, a blacklist entry always affects a single client instance. For clients (as opposed to daemons, e.g. OSDs), the port is 0. 0 is a valid nonce. Thanks,…
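
For reference, a sketch of the addr:port/nonce format under discussion (the IP is illustrative; the nonce is the one quoted later in the thread):

    # blacklist one specific client instance (port 0, explicit nonce)
    ceph osd blacklist add 192.168.1.10:0/3710147553
    # list and remove entries
    ceph osd blacklist ls
    ceph osd blacklist rm 192.168.1.10:0/3710147553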

[ceph-users] Re: EntityAddress format in ceph osd blacklist commands

2020-08-10 Thread Void Star Nill
Thanks Ilya. I assume *:0/0* indicates all clients on a given host? Thanks, Shridhar On Mon, 10 Aug 2020 at 03:07, Ilya Dryomov wrote: > On Fri, Aug 7, 2020 at 10:25 PM Void Star Nill > wrote: > > > > Hi, > > > > I want to understand the format for `ceph osd blacklist` > > commands. The docu…

[ceph-users] Re: RGW unable to delete a bucket

2020-08-10 Thread Andrei Mikhailovsky
Hi Gents, thanks for your replies. I have stopped the removal command and reinitiated it. To my surprise, it has removed the bucket pretty quickly this time around. Not too sure if it has removed all the objects or orphans. I will need to investigate this matter further. Andrei - Ori…
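
A sketch of the removal and a follow-up check (the bucket name is a placeholder):

    # delete the bucket and purge its objects in one pass
    radosgw-admin bucket rm --bucket=mybucket --purge-objects
    # confirm it is gone / inspect remaining usage
    radosgw-admin bucket stats --bucket=mybucket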

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-10 Thread Mark Nelson
Yeah, I know various folks have adopted those settings, though I'm not convinced they are better than our defaults. Basically you have more, smaller buffers and start compacting sooner, and theoretically should have a more gradual throttle along with a bunch of changes to compaction, but every t…
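
Settings like those are usually carried in bluestore_rocksdb_options; a hedged sketch with illustrative values, not a recommendation (OSDs must be restarted to pick it up):

    ceph config set osd bluestore_rocksdb_options \
        "compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,write_buffer_size=67108864"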

[ceph-users] Re: Remapped PGs

2020-08-10 Thread David Orman
We've gotten a bit further. After evaluating how this remapped count was determined (pg_temp), we've found the PGs counted as being remapped: root@ceph01:~# ceph osd dump |grep pg_temp pg_temp 3.7af [93,1,29] pg_temp 3.7bc [137,97,5] pg_temp 3.7d9 [72,120,18] pg_temp 3.7e8 [80,21,71] pg_temp 3.7fd…
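
A quick way to count them, as a sketch:

    # each pg_temp line is one PG reported as remapped
    ceph osd dump | grep -c pg_temp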

[ceph-users] Re: ceph rbd iscsi gwcli Non-existent images

2020-08-10 Thread Jason Dillaman
On Mon, Aug 10, 2020 at 9:23 AM Steven Vacaroaia wrote: > > Thanks but that is why I am puzzled - the image is there Can you enable debug logging for the iscsi-gateway-api (add "debug=true" in the config file), restart the daemons, and retry? > rbd -p rbd info vmware01 > rbd image 'vmware01':…
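
A sketch of the suggested change, assuming the stock /etc/ceph/iscsi-gateway.cfg layout:

    # /etc/ceph/iscsi-gateway.cfg
    [config]
    debug = true

    # then restart the gateway daemons
    systemctl restart rbd-target-api rbd-target-gw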

[ceph-users] Re: ceph rbd iscsi gwcli Non-existent images

2020-08-10 Thread Steven Vacaroaia
Thanks but that is why I am puzzled - the image is there:

    rbd -p rbd info vmware01
    rbd image 'vmware01':
        size 6 TiB in 1572864 objects
        order 22 (4 MiB objects)
        id: 16d3f6b8b4567
        block_name_prefix: rbd_data.16d3f6b8b4567
        format: 2
        features: layering, …

[ceph-users] Re: ceph rbd iscsi gwcli Non-existent images

2020-08-10 Thread Jason Dillaman
On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia wrote: > > Hi, > I would appreciate any help/hints to solve this issue > iscsi (gwcli) cannot see the images anymore > > This configuration worked fine for many months > What changed was that ceph is "nearly full" > > I am in the process of cleani…
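
For watching fullness during cleanup, a minimal sketch:

    # cluster-wide and per-pool usage
    ceph df
    # per-OSD utilization, arranged by CRUSH tree
    ceph osd df tree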

[ceph-users] RGW 14.2.10 Regression? ordered bucket listing requires read #1

2020-08-10 Thread EDH - Manuel Rios
Hi, We got our cluster updated to the latest version 14.2.10. Checking rgw logs after the 14.2.10 upgrade: 2020-08-10 10:21:49.186 7f74cd7db700 1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #1 2020-08-10 10:21:49.188 7f75eca19700 1 RGWRados::Bucket::Lis…
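
Those INFO lines are logged per ordered bucket listing; one sketch for checking whether large, under-sharded bucket indexes are involved (the bucket name is a placeholder):

    radosgw-admin bucket stats --bucket=mybucket
    # pending/in-progress resharding
    radosgw-admin reshard status --bucket=mybucket
    radosgw-admin reshard list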

[ceph-users] Re: EntityAddress format in ceph osd blacklist commands

2020-08-10 Thread Ilya Dryomov
On Fri, Aug 7, 2020 at 10:25 PM Void Star Nill wrote: > > Hi, > > I want to understand the format for `ceph osd blacklist` > commands. The documentation just says it's the address. But I am not sure > if it can just be the host IP address or anything else. What does *:0/* > *3710147553* represent…

[ceph-users] DocuBetter Meeting this week -- 12 Aug 2020 0830 PDT

2020-08-10 Thread John Zachary Dover
There is a general documentation meeting called the "DocuBetter Meeting", and it is held every two weeks. The next DocuBetter Meeting will be on 12 Aug 2020 at 0830 PDT, and will run for thirty minutes. Everyone with a documentation-related request or complaint is invited. The meeting will be held…

[ceph-users] Re: OSDs flapping since upgrade to 14.2.10

2020-08-10 Thread Ingo Reimann
Hi Mark, I raised the osd_memory_target from 4G to 6G and set bluefs_buffered_io back to false. 10 minutes later, I got the first 'Monitor daemon marked osd.X down, but it is still running', and after an additional 5, the second event. I tried to raise the memory_target to 10G, but this didn't help, s…
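
For reference, the two knobs being toggled, as a sketch (6G expressed in bytes):

    ceph config set osd osd_memory_target 6442450944
    ceph config set osd bluefs_buffered_io false
    # verify what a given OSD actually runs with
    ceph config show osd.0 osd_memory_target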

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-10 Thread Manuel Lausch
Hi Mark, RocksDB compactions were one of my first ideas as well. But they don't correlate. I checked this with the ceph_rocksdb_log_parser.py from https://github.com/ceph/cbt.git and saw only a few compactions on the whole cluster. It didn't seem to be the problem, although the compactions sometimes…
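
A sketch of that check (the script's location inside the cbt repo may differ, and the OSD log needs debug_rocksdb output for the parser to find compaction events):

    git clone https://github.com/ceph/cbt.git
    # run the parser against an OSD log
    python3 ceph_rocksdb_log_parser.py /var/log/ceph/ceph-osd.0.log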