Hi Sharad,
To add the first gateway you need to execute `gwcli` on the `ceph-gw-1` host, and
make sure that you use the host's FQDN in the `create` command (in this case
'ceph-gw-1').
You can check your host FQDN by running the following script:
python -c 'import socket; print(socket.getfqdn())'
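For reference, the sequence inside gwcli looks roughly like this (the target IQN
and the gateway IP below are only placeholders, adjust them to your setup):

gwcli
cd /iscsi-targets
create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
create ceph-gw-1 192.168.122.101

The name passed to `create` should match the FQDN reported by the script above.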
__
On 2020-04-01 19:00, Stefan Kooman wrote:
>
> That said, there are plenty of metrics not available outside of the
> prometheus plugin. I would recommend pushing Ceph-related metrics from
> the built-in ceph plugin in the telegraf client as well. The PR to add
> support for MDS and RGW to the Ceph plugin h
I can just add 4Kn drives to my existing setup, no? Since this
technology is only specific to how the OSD daemon talks to the
disk?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Yes, no problem
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTu
The touch Id feature of the application is one of the most colossal features as
it gives you the adaptability to embrace a trade. If you can't, by then you can
use the help and help that is found in the customer brain and pick to talk to a
Cash App representative or you can research to the parti
Affirmation is a key bit of the application. Before long, if you can't confirm
a trade, by then you can use a few plans from the customer care by choosing to
talk to a Cash App representative and get the tech issue settled. You can in
like manner don't extra a second to inspect to the help
site
One of the features of the application to help you with filling the nuances of
the recipient is the scanner. Notwithstanding, if you can't use it as a result
of some goof, by then you can get the fundamental assistance from the customer
care site by picking to talk to a Cash App representative.
The touch Id feature of the app lets you approve the transaction by recognizing
your fingerprints. But if the Id isn’t working, then you can use the assistance
that is provided by various tech support sites or you can dial the tech support
number to talk to a Cash App representative in order to
The activity tab of the app is one of the features that help you with the
refund issues. But if you can’t use the app’s feature, then you can get in
touch with the tech support and talk to a Cash App representative or you can
proceed to watch some tech videos and get troubleshooting solutions to
To perform any operation you first need to get inside the app and that can only
be done by tapping on the icon. But if the icon is unresponsive, then you can
get assistance by tech support sites or you can also try rebooting your device.
You can talk to a Cash App representative to get the issue
The tabs are the features in the app that assist you in performing variety of
tasks. But if you can’t use the tabs, then you must get in touch with the
customer care and talk to a Cash App representative to get the issue resolved.
In addition to that, you can also use tech support to get the mat
Can you block gmail.com or so!!!
Please, not a simple gmail block 8)
Not everyone wants to use their corporate account, self-host email, or
use a marginally better/worse commercial gmail alternative.
On 8/6/20 12:52 PM, Marc Roos wrote:
Can you block gmail.com or so!!!
Hi,
I found the reason for this behavior change.
With 14.2.10 the default value of "bluefs_buffered_io" was changed from
true to false.
https://tracker.ceph.com/issues/44818
Configuring this back to true, my problems seem to be solved.
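For reference, with the centralized config database that would be something like:

ceph config set osd bluefs_buffered_io true

(or the equivalent bluefs_buffered_io = true under [osd] in ceph.conf), followed
by restarting the OSDs so the new value takes effect.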
Regards
Manuel
On Wed, 5 Aug 2020 13:30:45 +0200
Manuel Lausch
Hi,
I think I found the reason why snaptrimming causes slow ops in my use case.
With 14.2.10 the default value of "bluefs_buffered_io" was changed from
true to false.
https://tracker.ceph.com/issues/44818
Configuring this back to true again, my problems seem to be solved.
Regards
Manuel
On Mon, 3 Au
Hi,
I echo Jim's findings. Going from lower Nautilus versions up to .10 on 4
installations suddenly gave a huge drop in read performance, probably by
about 3/4, so that my users were complaining that VMs were taking ages
to boot up. Strangely, write performance was not affected so much.
Ki
On 6/08/2020 8:52 pm, Marc Roos wrote:
Can you block gmail.com or so!!!
! Gmail account here :(
Can't we just restrict the list to emails from members?
--
Lindsay
or just block that user
On Thu, Aug 6, 2020 at 2:06 PM Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 6/08/2020 8:52 pm, Marc Roos wrote:
> > Can you block gmail.com or so!!!
>
> ! Gmail account here :(
>
>
> Can't we just restrict the list to emails from members?
>
> --
> Lindsay
>
Hi,
I’m pretty sure that the deep-scrubs are causing the slow requests.
There have been several threads about it on this list [3]; there are
two major things you can do:
1. Change the default settings for deep-scrubs [1] to run them outside
business hours to avoid additional load (see the example commands below).
2. Change
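As a sketch of option 1 (these are the usual OSD scrub settings; the hours are
just an example window outside business hours):

ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6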
Hi,
Any idea what is going on if bluestore_cache_autotune is true and
bluestore_cache_size is 0, but the server has only NVMe?
Because if bluestore_cache_size is 0 it is supposed to pick the SSD or HDD
default, but if there is no SSD and no HDD, then what happens when autotune
is true?
How should I size this number?
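One way to see what the OSD actually ended up using (run on the OSD host; osd.0
is just an example) is to dump its running config:

ceph daemon osd.0 config show | grep -E 'bluestore_cache_size|bluestore_cache_autotune|osd_memory_target'

With autotune on, the osd_memory_target value is what effectively drives the
cache sizing, as far as I understand it.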
Manuel, thank you for your input.
This is actually huge, and the problem is exactly that.
On a side note I will add that I observed lower memory utilisation on OSD
nodes since the update, and a big throughput on block.db devices (up to
100+ MB/s) that was not there before, so logically that meant t
Hi,
I'm running an FIO benchmark to test my simple cluster (3 OSDs, 128 PGs, using
Nautilus v14.2.10) and after a certain load of clients performing random read
operations, the OSDs show very different performance in terms of op latency.
In extreme cases there is an OSD that performs much wor
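For illustration, a random-read job against an RBD image might look something
like this (pool and image names are placeholders; it assumes fio was built with
the rbd engine and that the test image already exists):

fio --name=randread --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=fio-test --rw=randread --bs=4k --iodepth=16 \
    --numjobs=1 --runtime=60 --time_based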
Hi Eric,
yes, I had network restarts as well along the way. However, these should also
not lead to the redundancy degradation I observed; it doesn't really explain
why Ceph lost track of so many objects. A temporary network outage on a server
is an event that the cluster ought to survive withou
BUMP...
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "ceph-users"
> Sent: Tuesday, 4 August, 2020 17:16:28
> Subject: [ceph-users] RGW unable to delete a bucket
> Hi
>
> I am trying to delete a bucket using the following command:
>
> # radosgw-admin bucket rm --bucket= --
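For reference, the full form of that command is usually along these lines (the
bucket name here is just a placeholder):

radosgw-admin bucket rm --bucket=my-bucket --purge-objects

Adding --bypass-gc as well is sometimes suggested for large buckets, though it
skips the garbage collector.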
Yeah, there are cases where enabling it will improve performance, as
rocksdb can then use the page cache as a (potentially large) secondary
cache beyond the block cache and avoid hitting the underlying devices
for reads. Do you have a lot of spare memory for page cache on your OSD
nodes? You m
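For what it's worth, a quick way to check how much memory is actually left over
for page cache on a node is:

free -h    # the buff/cache and available columns are the interesting ones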
I created https://tracker.ceph.com/issues/46847
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
Sent: 06 August 2020 13:17:20
To: Eric Smith; ceph-users
Subject: [ceph-users] Re: Ceph does not recover from OSD re
Hi all,
As previously mentioned, blocking the gmail domain isn't a feasible
solution since the vast majority of @gmail.com subscribers (about 500 in
total) are likely legitimate Ceph users.
A mailing list member recommended some additional SPF checking a couple
of weeks ago, which I just implemented
Thanks for your hard work, David!
Mark
On 8/6/20 1:09 PM, David Galloway wrote:
Hi all,
As previously mentioned, blocking the gmail domain isn't a feasible
solution since the vast majority of @gmail.com subscribers (about 500 in
total) are likely legitimate Ceph users.
A mailing list member r
I looked at the received-from headers, and it looks to me like these
messages are being fed into the list from the web interface. The first
Received header is from mailman web and a private IP.
On 8/6/20 2:09 PM, David Galloway wrote:
> Hi all,
>
> As previously mentioned, blocking the gmail domain
In my case I only have 16 GB RAM per node with 5 OSDs on each of them, so I
actually have to tune osd_memory_target=2147483648, because with the default
value of 4 GB my OSD processes tend to get killed by the OOM killer.
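For reference, that tuning can be applied cluster-wide with something like:

ceph config set osd osd_memory_target 2147483648    # 2 GiB per OSD daemon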
That is what I was looking into before the correct solution. I
disabled osd_memory_target li
Oh, interesting. You appear to be correct. I'm running each of the
mailing lists' services in their own containers so the private IP makes
sense.
I just commented on a FR for Hyperkitty to disable posting via Web UI:
https://gitlab.com/mailman/hyperkitty/-/issues/264
Aside from that, I can conf
Still haven't figured this out. We went ahead and upgraded the entire
cluster to Podman 2.0.4 and in the process did OS/Kernel upgrades and
rebooted every node, one at a time. We've still got 5 PGs stuck in
'remapped' state, according to 'ceph -s' but 0 in the pg dump output in
that state. Does any
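If it helps narrow it down, one way to cross-check the two views (assuming a
reasonably recent ceph CLI) is:

ceph pg ls remapped
ceph pg dump pgs_brief | grep -c remapped    # count of PGs the dump reports as remapped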
A 2 GB memory target will absolutely starve the OSDs of memory for
rocksdb block cache, which probably explains why you are hitting the disk
for reads and why a shared page cache is helping so much. It's definitely
more memory efficient to have a page cache scheme rather than having
more cache for ea
Hi Folks,
I don't know of a downstream issue that looks like this, and we've
upstreamed every fix for bucket listing and cleanup we have. We are
pursuing a space leak believed to arise in "radosgw-admin bucket rm
--purge-objects" but not a non-terminating listing.
The only upstream release not p
No please :-( ! I'm a Ceph user with a gmail account.
On Thursday, August 6, 2020, David Galloway wrote:
> Oh, interesting. You appear to be correct. I'm running each of the
> mailing lists' services in their own containers so the private IP makes
> sense.
>
> I just commented on a FR for Hyper
You're not the only one affected by this issue.
As far as I know several huge companies hit this bug too, but private
patches or tools have not been publicly released.
This is caused by a resharding process during upload in previous versions.
Workaround for us:
- Delete objects of the bucket a
I have a cluster running Nautilus where the bucket instance (backups.190) has
gone missing:
# radosgw-admin metadata list bucket | grep 'backups.19[0-1]' | sort
"backups.190",
"backups.191",
# radosgw-admin metadata list bucket.instance | grep 'backups.19[0-1]' | sort
"backups.191:00
SOLUTION FOUND!
Reweight the osd to 0, then set it back to where it belongs.
ceph osd crush reweight osd.0 0.0
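Then set it back, for example (the 1.81940 is only a placeholder; note the
original CRUSH weight with 'ceph osd tree' before zeroing it):

ceph osd crush reweight osd.0 1.81940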
Original
ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 4.03434 sec at 254 MiB/sec 63 IOPS
After reweight of osd.0
ceph tell osd.0 bench -f plain
bench: wrote 1
Hi Jim, when you do reweighting, rebalancing will be triggered. How did you set
it back: immediately, or after waiting for the rebalancing to complete? I did
try both on my cluster and couldn't see the osd bench result change significantly
like yours (actually no change); however, my cluster is 12.2.12, no