[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Manuel Lausch
I cannot confirm that a larger memory target will solve the problem completely. In my case the OSDs have a 14GB memory target and I did have a huge user IO impact while snaptrimming (many slow ops the whole time). Since I set bluefs_buffered_io=true it seems to work without issues. In my cluster I don't use rgw.

[ceph-users] Re: Can you block gmail.com or so!!!

2020-08-07 Thread Chris Palmer
While you are thinking about the mailing list configuration, can you consider that it is very DMARC-unfriendly, which is why I have to use an email address from an ISP domain that does not publish DMARC. If I post from my normal email accounts: * We publish SPF, DKIM & DMARC policies that req

[ceph-users] Re: Can you block gmail.com or so!!!

2020-08-07 Thread Alexander Herr
Thanks Chris for your details. Notice, though, that all/most mailing lists, especially those using mailman, have this issue, which is caused by the DMARC standard not accounting for how most mailing lists work(ed at the time). That is, prepending the list's name to the original subject line

[ceph-users] How can I use bucket policy with subuser

2020-08-07 Thread hoannv46
Hi all. I have a cluster running version 14.2.10. First, I create the user hoannv:
radosgw-admin user create --uid=hoannv --display-name=hoannv
Then I create the subuser hoannv:subuser1 with:
radosgw-admin subuser create --uid=hoannv --subuser=subuser1 --key-type=swift --gen-secret --access=full
hoann
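The natural next step (and presumably the question here) is attaching a bucket policy that names that user. Below is a minimal sketch; the bucket name "mybucket", the policy file name, and the s3cmd usage are illustrative assumptions, not taken from the original message. Note that, as far as I know, RGW bucket policies name users (optionally with a tenant) as principals, so whether a swift subuser can be targeted directly may depend on the release:

  # illustrative only -- bucket name and principal ARN are assumptions
  cat > policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/hoannv"]},
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::mybucket/*"]
    }]
  }
  EOF
  s3cmd setpolicy policy.json s3://mybucket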

[ceph-users] OSDs flapping since upgrade to 14.2.10

2020-08-07 Thread Ingo Reimann
Hi list, since our upgrade 14.2.9 -> 14.2.10 we observe flapping OSDs:
* The mons claim every few minutes:
2020-08-07 09:49:09.783648 osd.243 (osd.243) 246 : cluster [WRN] Monitor daemon marked osd.243 down, but it is still running
2020-08-07 10:04:40.753704 osd.243 (osd.243) 248 : cluster [WRN]
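For anyone hitting the same symptom, a few generic commands for narrowing down flapping OSDs -- a sketch based on standard tooling, not something from the original report; osd.243 is just the OSD named above:

  ceph health detail                                   # which OSDs keep getting marked down
  ceph daemon osd.243 status                           # confirm the daemon itself is still up
  grep -i "wrongly marked me down" /var/log/ceph/ceph-osd.243.log
  ceph osd set nodown                                  # optionally stop mons marking OSDs down while debugging
  ceph osd unset nodown                                # remember to clear the flag afterwards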

[ceph-users] RGW Garbage Collection (GC) does not make progress

2020-08-07 Thread Wido den Hollander
Hi, On a Nautilus 14.2.8 cluster I'm seeing a large amount of GC data and the GC on the RGW does not seem to make progress. The .rgw.gc pool contains 39GB of data spread out over 32 objects. In the logs we do see references to the RGW GC doing work and it says it is removing objects. Those
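For reference, the GC state can be inspected and drained manually with commands along these lines (a sketch; the 32 objects mentioned above match the default of 32 GC shards, gc.0 through gc.31, but pool and shard names should be checked against the actual zone):

  radosgw-admin gc list --include-all | head      # pending GC entries, including not-yet-expired ones
  radosgw-admin gc process                        # force a GC pass now
  rados -p .rgw.gc ls                             # the raw GC shard objects
  rados -p .rgw.gc listomapkeys gc.0 | wc -l      # omap entries queued on one shard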

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Manuel Lausch
Sure.
              total        used        free      shared  buff/cache   available
Mem:      394582604   355494400     5282932        1784    33805272    29187220
Swap:       1047548     1047548           0
The node has 24 14TB OSDs with a 14G memory target configured. Manuel On Fri, 7 Aug
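A rough sanity check on those numbers (an inference from the figures above, not something stated in the message):

  24 OSDs x 14 GiB osd_memory_target  ~= 336 GiB
  host RAM: 394582604 KiB             ~= 376 GiB

That leaves only around 40 GiB for everything else on the box, including the page cache, which is one reason a bluefs_buffered_io change shows up so clearly here.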

[ceph-users] Re: OSDs flapping since upgrade to 14.2.10

2020-08-07 Thread EDH - Manuel Rios
Hi, Maybe this helps. You can increase the osd_op_tp thread timeouts in ceph.conf to something like:
[osd]
osd_op_thread_suicide_timeout = 900
osd_op_thread_timeout = 300
osd_recovery_thread_timeout = 300
Regards
-----Original Message----- From: Ingo Reimann Sent:

[ceph-users] Re: OSDs flapping since upgrade to 14.2.10

2020-08-07 Thread Ingo Reimann
Hi Stefan, Hi Manuel, thanks for your quick advice. In fact, since I set "ceph config set osd bluefs_buffered_io true", the problems have disappeared. We have lots of RAM in our OSD hosts, so buffering is ok. I'll track this issue down further after the weekend! best regards, Ingo - Original Message
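For completeness, a couple of ways to confirm the setting actually took effect on the running daemons (the OSD id is just an example):

  ceph config get osd bluefs_buffered_io             # value stored in the mon config database
  ceph daemon osd.0 config get bluefs_buffered_io    # value the running OSD is actually using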

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Mark Nelson
It's quite possible that the issue is really about rocksdb living on top of bluefs with bluefs_buffered_io and rgw causing a ton of OMAP traffic.  rgw is the only case so far where the issue has shown up, but it was significant enough that we didn't feel like we could leave bluefs_buffered_io e

[ceph-users] Re: OSDs flapping since upgrade to 14.2.10

2020-08-07 Thread Mark Nelson
Hi Ingo, If you are able and have lots of available memory, could you also try setting it to false but increasing the osd_memory_target size?  I'd like to understand a little bit deeper what's going on here.  Ultimately I don't want our only line of defense against slow snap trimming to be h

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Stefan Kooman
On 2020-08-07 09:27, Manuel Lausch wrote: > I cannot confirm that more memory target will solve the problem > completly. In my case the OSDs have 14GB memory target and I did have > huge user IO impact while snaptrim (many slow ops the whole time). Since > I set bluefs_bufferd_io=true it seems to w

[ceph-users] Re: OSDs flapping since upgrade to 14.2.10

2020-08-07 Thread Ingo Reimann
Hi Mark, I'll check that after the weekend! Ingo - Original Message - From: "Mark Nelson" To: "ceph-users" Sent: Friday, 7 August 2020 15:15:08 Subject: [ceph-users] Re: OSDs flapping since upgrade to 14.2.10 Hi Ingo, If you are able and have lots of available memory, could y

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Mark Nelson
Thinking about this a little more, one thing that I remember when I was writing the priority cache manager is that in some cases I saw strange behavior with the rocksdb block cache when compaction was performed.  It appeared that the entire contents of the cache could be invalidated.  I guess t

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Manuel Lausch
Hi Mark, The read IOPS in "normal" operation were around 1 with bluefs_buffered_io=false and are around 2 now with it set to true. So this seems slightly higher, but far away from any problem. While snapshot trimming the difference is enormous: with false, around 200; with true, around 10. Scrubbing read
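One way to see those per-device read rates directly while a snaptrim is running (generic tooling, not from the original message; nvme0n1 is a placeholder for the block.db device):

  iostat -x 1 nvme0n1                                    # watch r/s and %util on the block.db device
  ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim  # how many PGs are trimming right now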

[ceph-users] Re: OSDs flapping since upgrade to 14.2.10

2020-08-07 Thread Stefan Kooman
On 2020-08-07 12:07, Ingo Reimann wrote: > i`m am not sure, if we really have a problem, but it does not look healthy. It might be related to the change that is mentioned in another thread: "block.db/block.wal device performance dropped after upgrade to 14.2.10" TL;DR: bluefs_buffered_io has bee

[ceph-users] Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

2020-08-07 Thread Mark Nelson
That is super interesting regarding scrubbing.  I would have expected that to be affected as well.  Any  chance you can check and see if there is any correlation between rocksdb compaction events, snap trimming, and increased disk reads?  Also (Sorry if you already answered this) do we know for
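A hedged way to line those events up, assuming default log locations and a debug_rocksdb level high enough for compaction summaries to be logged (osd.243 is just the OSD from earlier in the thread):

  ceph daemon osd.243 perf dump | grep -i compact                # rocksdb compaction counters, if present
  grep -i compaction /var/log/ceph/ceph-osd.243.log | tail -20   # timestamps of recent compaction events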

[ceph-users] Re: Nautilus slow using "ceph tell osd.* bench"

2020-08-07 Thread Jim Forde
I have set it to 0.0 and let it re-balance. Then I set it back and let it re-balance again. I have a fairly small cluster, and while it is in production, it is not getting much use because of the pandemic, so it is a good time to do some of these things. Because of that I have been re-balancing the OSDs in grou
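For readers unfamiliar with the benchmark named in the subject line, it can be run per OSD or across all of them; the defaults and the smaller run below are illustrative, not values taken from this thread:

  ceph tell osd.0 bench                  # default: write 1 GiB in 4 MiB blocks and report throughput
  ceph tell osd.* bench 104857600 4096   # smaller run: 100 MiB in 4 KiB blocks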

[ceph-users] ceph rbd iscsi gwcli Non-existent images

2020-08-07 Thread Steven Vacaroaia
Hi, I would appreciate any help/hints to solve this issue: iscsi (gwcli) cannot see the images anymore. This configuration worked fine for many months. What changed is that ceph is "nearly full". I am in the process of cleaning it up (by deleting objects from one of the pools) and I do see reads
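Some generic checks that may help narrow this down (a sketch; the pool name is a placeholder and gwcli syntax can vary between ceph-iscsi versions):

  ceph df                    # how close to full the cluster really is
  ceph health detail         # nearfull/full flags can block RADOS/RBD operations
  rbd ls -l <pool>           # do the backing images still exist in the pool?
  gwcli ls                   # the gateway's current view of disks and targets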

[ceph-users] EntityAddress format in ceph osd blacklist commands

2020-08-07 Thread Void Star Nill
Hi, I want to understand the format for `ceph osd blacklist` commands. The documentation just says it's the address. But I am not sure if it can just be the host IP address or anything else. What does the :0/3710147553 part represent in the following output?
$ ceph osd blacklist ls
listed 1 entries
1
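For what it's worth, that is an entity address of the form <ip>:<port>/<nonce>: port 0 is what clients normally show, and the nonce is, as far as I know, a per-instance identifier that distinguishes several clients behind the same IP. A hedged example of working with such entries (the IP below is made up):

  ceph osd blacklist ls
  ceph osd blacklist add 192.168.0.1:0/3710147553 3600   # blacklist this client instance for 3600 seconds
  ceph osd blacklist rm  192.168.0.1:0/3710147553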