Could you clarify this part of the Quincy release notes too please?
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/XMOSB4IFKVHSQT5MFSWXTQHY7FC5WDSQ/
On 20/04/2022 17:31, Stefan Kooman wrote:
On 4/20/22 18:26, Patrick Donnelly wrote:
On Wed, Apr 20, 2022 at 7:22 AM Stefan Kooma
These are probably leftovers from previous OSDs; I remember having to
clean up orphaned units from time to time. Compare the UUIDs to your
actual OSDs and disable the units of the non-existent OSDs.
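For illustration, a minimal sketch of that cleanup, assuming the units follow the usual ceph-volume@lvm-<osd-id>-<osd-fsid> naming (the id/FSID values below are placeholders):
```
# List the ceph-volume units systemd knows about on this host
systemctl list-units --all 'ceph-volume@*'

# Show the OSD ids/FSIDs that actually exist here
ceph-volume lvm list | grep -E 'osd id|osd fsid'

# Disable a unit whose OSD no longer exists (id and FSID are placeholders)
systemctl disable ceph-volume@lvm-12-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.service
```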
Quoting Marc:
I added some osd's which are up and running with:
ceph-volume lvm create --
Hi,
There are a bunch of dashboard settings, for example:
pacific:~ # ceph dashboard set-grafana-api-url
pacific:~ # ceph dashboard set-prometheus-api-host
pacific:~ # ceph dashboard set-alertmanager-api-host
and many more.
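For example, a hedged sketch of how these might be set; the hosts and ports are placeholders, not real endpoints:
```
# Hypothetical endpoints; substitute your own hosts and ports
ceph dashboard set-grafana-api-url https://grafana.example.com:3000
ceph dashboard set-prometheus-api-host http://prometheus.example.com:9095
ceph dashboard set-alertmanager-api-host http://alertmanager.example.com:9093
```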
Quoting cephl...@drop.grin.hu:
Hello,
I have tried to find the sol
On Wed, Apr 20, 2022 at 07:05:37PM +, Ryan Taylor wrote:
>
> Hi Luís,
>
> The same cephx key is used for both mounts. It is a regular rw key which
> does not have permission to set any ceph xattrs (that was done
> separately with a different key). But it can read ceph xattrs and set
> user x
Hello everybody,
I want to split my OSDs across 2 NVMes (250G) and 1 SSD (900G) for
BlueStore. I used the following configuration:
```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
```
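In case it helps, a hedged sketch of how such a spec can carry a db_devices section for BlueStore DB placement and be previewed with cephadm; the NVMe paths and the device split below are assumptions for illustration, not the poster's actual layout:
```
# Device paths under db_devices are assumptions; adjust to your hardware
cat > osd_spec_a.yml <<'EOF'
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
  db_devices:
    paths:
      - /dev/nvme0n1
      - /dev/nvme1n1
EOF

# Preview what cephadm would create before applying for real
ceph orch apply -i osd_spec_a.yml --dry-run
```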
On Wed, Apr 20, 2022 at 8:29 AM Chris Palmer wrote:
>
> The Quincy release notes state that "MDS upgrades no longer require all
> standby MDS daemons to be stopped before upgrading a file system's sole
> active MDS." but the "Upgrading non-cephadm clusters" instructions still
> include reducing ra
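For reference, a hedged sketch of the rank-reduction step those older instructions describe; <fs_name> is a placeholder:
```
# Pre-Quincy procedure: reduce to a single active MDS rank before upgrading
ceph fs set <fs_name> max_mds 1

# Wait for the extra ranks to drain, then verify only rank 0 remains active
ceph fs get <fs_name>
ceph status
```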
Hi Patrick
Sorry, I misread it. Now it makes perfect sense. Sorry for the noise.
Regards, Chris
On 21/04/2022 14:28, Patrick Donnelly wrote:
On Wed, Apr 20, 2022 at 8:29 AM Chris Palmer wrote:
The Quincy release notes state that "MDS upgrades no longer require all
standby MDS daemons to be sto
Hello,
I have an issue on my Ceph cluster (Octopus 15.2.16) with several buckets
raising a LARGE_OMAP_OBJECTS warning.
I found the buckets in the resharding list, but ceph fails to reshard them.
The root cause seems to be in "bi list". When I run the following command on an
impacted bucket, I ge
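For anyone hitting the same warning, a minimal sketch of the commands involved; the bucket name is a placeholder:
```
# Buckets currently queued for resharding
radosgw-admin reshard list

# Dump the bucket index for an affected bucket (bucket name is a placeholder)
radosgw-admin bi list --bucket=<bucket_name>
```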
Is this a versioned bucket?
On Thu, Apr 21, 2022 at 9:51 AM Guillaume Nobiron
wrote:
> Hello,
>
> I have an issue on my Ceph cluster (Octopus 15.2.16) with several buckets
> raising a LARGE_OMAP_OBJECTS warning.
> I found the buckets in the resharding list, but ceph fails to reshard them.
>
> The
Hello,
I was trying to add an iSCSI gateway to the dashboard, but did it with the wrong
configuration format, and now most of the dashboard is throwing 500 internal
errors. This is the exception if I try to remove or list iSCSI gateways:
ceph01-dev:/etc/iscsi# ceph dashboard iscsi-gateway-list
Error EINVAL: Tracebac
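One recovery approach that has worked for similar dashboard 500s is to clear the malformed entry from the mgr key/value store and re-add the gateway; the exact key name below is an assumption, so check the dump output first:
```
# Look for the stored iSCSI gateway configuration in the mgr key/value store
ceph config-key dump | grep -i iscsi

# Key name is an assumption: remove the malformed entry so the gateways
# can be re-added in the correct format afterwards
ceph config-key rm mgr/dashboard/_iscsi_config
```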
Yes, all the buckets in the reshard list are versioned (like most of our
buckets by the way).
If the cluster is managed by cephadm, you should be able to just do a "ceph
orch upgrade start --image quay.io/ceph/ceph:v16.2.7". We test upgrades
from 15.2.0 to Pacific and Quincy, so I think going from 15.2.5 to 16.2.7
directly should work.
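A brief sketch of kicking off and then watching such an upgrade, using commands already mentioned in this thread:
```
# Start the upgrade, then follow its progress
ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7
ceph orch upgrade status

# Watch cephadm's log channel while it works through the daemons
ceph -W cephadm
```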
Wanted to add that, from the ceph versions output, it looks like there is
only 1 mgr daemon. The cephadm upgrade requires there to be at least 2, so
you will need to add another mgr daemon first.
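A minimal sketch of adding the second mgr with the orchestrator; the hostnames are placeholders:
```
# Scale the mgr service to two daemons
ceph orch apply mgr 2

# Or pin the placement to specific hosts (hostnames are placeholders)
ceph orch apply mgr --placement="host1 host2"
```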
On Thu, Apr 21, 2022 at 10:43 AM Ml Ml wrote:
> Hello,
>
> i am running a 7 Node Cluster with 56 OSD
https://tracker.ceph.com/issues/51429 with
https://github.com/ceph/ceph/pull/45088 for Octopus.
We're also working on https://tracker.ceph.com/issues/55324, which is
somewhat related.
On Thu, Apr 21, 2022 at 11:19 AM Guillaume Nobiron
wrote:
> Yes, all the buckets in the reshard list
Hi Luís,
dmesg looks normal I think:
[ 265.269450] Key type ceph registered
[ 265.270914] libceph: loaded (mon/osd proto 15/24)
[ 265.303764] FS-Cache: Netfs 'ceph' registered for caching
[ 265.305460] ceph: loaded (mds proto 32)
[ 265.513616] libceph: mon0 (1)10.30.201.3:6789 session est
On Thu, Apr 21, 2022 at 07:28:19PM +, Ryan Taylor wrote:
>
> Hi Luís,
>
> dmesg looks normal I think:
Yep, I don't see anything suspicious either.
>
> [ 265.269450] Key type ceph registered
> [ 265.270914] libceph: loaded (mon/osd proto 15/24)
> [ 265.303764] FS-Cache: Netfs 'ceph' reg
Hi Luís,
I did just that:
[fedora@cephtest ~]$ sudo ./debug.sh
Filesystem
Size Used Avail Use% Mounted on
10.30.201.3:6789,10.30.202.3:6789,10.30.203.3:6789:/volumes/_nogroup/55e46a89-
Yeah, I've seen this happen when replacing OSDs. Like Eugen said,
there are some services that get created for mounting the volumes.
You can disable them like this:
systemctl disable ceph-volume@lvm-{osdid}-{fsid}.service
List the contents of
/etc/systemd/system/multi-user.target.wants/ceph-volume@l
Hi,
I want to copy an image, which is not being used, to another cluster.
rbd-mirror would do it, but rbd-mirror is designed to handle an image
that is being used/updated, to ensure the mirrored image is always
consistent with the source. I wonder if there is any easier way to copy
an image without
Hi Tony,
Have a look at rbd export and rbd import; they dump the image to a file or
stdout. You can pipe the rbd export directly into an rbd import, assuming you
have a host which has access to both Ceph clusters.
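A minimal sketch of that pipe; the pool/image names and the destination hostname are placeholders:
```
# Stream the image from the source cluster straight into the destination
# cluster over SSH (pool/image names and hostname are placeholders)
rbd export mypool/myimage - | ssh dest-host rbd import - mypool/myimage
```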
Hope this helps!
Mart
From mobile
> On Apr 22, 2022, at 11:42, Tony Liu wrote:
As someone noted, rbd export / import work. I’ve also used rbd-mirror for
capacity management; it works well for moving attached as well as unattached
images. When using rbd-mirror to move 1-2 images at a time, adjustments to
the default parameters speed progress substantially. It’s easy to see
Thank you Mart! Pipe is indeed easier.
I found this blog. Will give it a try.
https://machinenix.com/ceph/how-to-export-a-ceph-rbd-image-from-one-cluster-to-another-without-using-a-bridge-server
Tony
From: Mart van Santen
Sent: April 21, 2022 08:52 PM
To:
Thank you Anthony! I agree that rbd-mirror is more reliable and manageable,
and it's not that complicated to use. I will try both and see which works
better for me.
Tony
From: Anthony D'Atri
Sent: April 21, 2022 09:02 PM
To: Tony Liu
Cc: ceph-users@ceph.i
Thanks, I will follow this PR.
I hope it will be ready for the next patch version of Octopus or Pacific.
Hi,
They are either static (so when the manager moves they become dead)
or dynamic (so they will be overwritten the moment the mgr moves),
aren't they?
There might be a misunderstanding, but the MGR failover will just
redirect your dashboard access to the new active MGR. You can set that