Hi,
I've set up a Luminous RGW with Keystone integration, and subsequently set
rgw keystone implicit tenants = true
So now all newly created users/tenants (or old ones that never accessed
RGW) get their own namespaces. However, there are some pre-existing users
that have created buckets and objects
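For reference, this is where the setting goes in ceph.conf; the section name
below is only a placeholder for whatever your RGW instance is called:
[client.rgw.gw1]
rgw keystone implicit tenants = true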
I'm in the process of updating some development VMs that use ceph-fs. It
looks like recent updates to ceph have deprecated the 'ceph-deploy osd
prepare' and 'activate' commands in favour of the previously-optional
'create' command.
We're using filestore OSDs on these VMs, but I can't seem to figure out
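If I'm reading the ceph-deploy 2.x syntax right, the filestore variant of
'create' should look roughly like this (host and device names are placeholders):
$ ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc vm-node1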
On 14:26 Dec 03, Mike Perez wrote:
> Hey Cephers!
>
> Just wanted to give a heads up on the CentOS Dojo at Oak Ridge, Tennessee on
> Tuesday, April 16th, 2019.
>
> The CFP is now open, and I would like to encourage our community to
> participate
> if you can make the trip. Talks involving deploy
Hey Cephers!
Just wanted to give a heads up on the CentOS Dojo at Oak Ridge, Tennessee on
Tuesday, April 16th, 2019.
The CFP is now open, and I would like to encourage our community to participate
if you can make the trip. Talks involving deploying Ceph with the community
Ceph Ansible playbooks w
Hi,
On 12/3/18 4:21 PM, Athanasios Panterlis wrote:
> Hi Wido,
>
> Yeah, it's quite old, since 2016. It's from a decommissioned cluster that
> we just keep in a healthy state without much update effort.
> I had planned to do a clean-up of unwanted disks, snapshots, etc., do a few
> re-weights, update it
There's unfortunately a difference between an osd with weight 0 and
removing one item (OSD) from the crush bucket :(
If you want to remove the whole cluster completely anyway: either
keep it as down+out in the CRUSH map, i.e., just skip the last step,
or just purge the OSD without setting it to out
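In command form, the two options are roughly this (OSD id 12 is a placeholder):
$ ceph osd out 12                            # leave it as down+out in the CRUSH map
$ ceph osd purge 12 --yes-i-really-mean-it   # or drop it from CRUSH, auth and the OSD map in one go (Luminous+)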
On Mon, Dec 3, 2018 at 5:00 PM Jan Kasprzak wrote:
>
> Dan van der Ster wrote:
> : It's not that simple, see http://tracker.ceph.com/issues/21672
> :
> : For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
> : updated -- so the rpms restart the ceph.target.
> : What's worse is that this seems to happen before all the new updated
> : files are in place.
Dan van der Ster wrote:
: It's not that simple, see http://tracker.ceph.com/issues/21672
:
: For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
: updated -- so the rpms restart the ceph.target.
: What's worse is that this seems to happen before all the new updated
: files are in place.
Hi,
Currently I am decommissioning an old cluster.
For example, I want to remove OSD Server X with all its OSDs.
I am following these steps for all OSDs of Server X:
- ceph osd out
- Wait for rebalance (active+clean)
- On OSD: service ceph stop osd.
Once the steps above are performed, the f
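For reference, the per-OSD removal after draining usually looks something like
this (osd.12 used as a placeholder):
$ ceph osd crush remove osd.12
$ ceph auth del osd.12
$ ceph osd rm 12
or, on Luminous and newer, the single equivalent:
$ ceph osd purge 12 --yes-i-really-mean-it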
Hi Wido,
Yeah, it's quite old, since 2016. It's from a decommissioned cluster that we just
keep in a healthy state without much update effort.
I had planned to do a clean-up of unwanted disks, snapshots, etc., do a few
re-weights, update it to the latest stable (just as you correctly mentioned) and
then
There's also an additional issue which made us activate
CEPH_AUTO_RESTART_ON_UPGRADE=yes
(and of course, not have automatic updates of Ceph):
When using compression, e.g. with Snappy, it seems that already-running OSDs
which try to dlopen() the snappy library
for some version upgrades become u
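For reference, that switch lives in the sysconfig file shipped with the RPM
packaging (the path below assumes SUSE/RHEL-style packaging):
# /etc/sysconfig/ceph
CEPH_AUTO_RESTART_ON_UPGRADE=yes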
It's not that simple, see http://tracker.ceph.com/issues/21672
For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
updated -- so the rpms restart the ceph.target.
What's worse is that this seems to happen before all the new updated
files are in place.
Our 12.2.8 to 12.2.10 upgrad
FYI -- that "entries_behind_master=175226727" bit is telling you that
it has only mirrored about 80% of the recent changes from primary to
non-primary.
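You can keep an eye on that counter with the mirror status command, e.g.
(pool/image names are placeholders):
$ rbd mirror image status mypool/myimage
and check that entries_behind_master trends back toward 0.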
Was the filesystem already in place? Are there any partitions/LVM
volumes in-use on the device? Did you map the volume read-only?
On Tue, Nov 27,
Hi,
How old is this cluster? This might be a CRUSH tunables issue where
this pops up.
You can try (might move a lot of data!)
$ ceph osd getcrushmap -o crushmap.backup
$ ceph osd crush tunables optimal
If things go wrong you always have the old CRUSHmap:
$ ceph osd setcrushmap -i crushmap.backup
Hi all,
I am managing a typical small Ceph cluster that consists of 4 nodes, each
one having 7 OSDs (some in the hdd pool, some in the ssd pool).
The cluster was healthy, but following some space issues due to bad PG
management from Ceph, I tried some reweights on specific OSDs. Unfortunately the
re
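For context, the reweight commands in question are along these lines (id and
weight values are placeholders):
$ ceph osd reweight 12 0.95           # temporary override weight (0.0-1.0)
$ ceph osd crush reweight osd.12 1.6  # permanent CRUSH weight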
Paul Emmerich wrote:
: Upgrading Ceph packages does not restart the services -- exactly for
: this reason.
:
: This means there's something broken with your yum setup if the
: services are restarted when only installing the new version.
Interesting. I have verified that I have
CEPH_AUTO_RESTART_ON_UPGRADE
I've recently added a host to my Ceph cluster, using the Proxmox 'helpers'
to add an OSD, e.g.:
pveceph createosd /dev/sdb -journal_dev /dev/sda5
and now I have:
root@blackpanther:~# ls -la /var/lib/ceph/osd/ceph-12
totale 60
drwxr-xr-x 3 root root 199 nov 21 17:02 .
drwxr-xr-x 6 root r
Upgrading Ceph packages does not restart the services -- exactly for
this reason.
This means there's something broken with your yum setup if the
services are restarted when only installing the new version.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://cr
Hello, ceph users,
I have a small(-ish) Ceph cluster, where there are OSDs on each host
and, in addition to that, there are MONs on the first three hosts.
Is it possible to upgrade the cluster to Luminous without service
interruption?
I have tested that when I run "yum --enablerepo Ceph u
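For what it's worth, the usual rolling sequence is roughly this (a sketch, one
node at a time):
$ ceph osd set noout
(upgrade the packages on one node, then restart its daemons)
$ systemctl restart ceph-mon.target   # on each MON host, one at a time
$ systemctl restart ceph-osd.target   # on each OSD host, one at a time
$ ceph osd unset noout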
Hello,
we have been fighting an HDD spin-down problem on our production Ceph cluster
for two weeks now. The problem is not Ceph-related, but I guess this
topic is interesting to the list and, to be honest, I hope to find a
solution here.
We do use 6 OSD Nodes like:
OS: Suse 12 SP3
Ceph: SES 5.5 (12.
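Not Ceph-specific, but the usual first check is the drives' own power
management, e.g. for SATA disks something like (device name is a placeholder):
$ hdparm -B 255 /dev/sdX   # disable APM
$ hdparm -S 0 /dev/sdX     # disable the standby (spin-down) timer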
Could this include the "Zabbix" module in ceph-mgr?
Cheers,
Martin
On 30.11.18 at 17:26, "Paul Emmerich" wrote:
radosgw-admin likes to create these pools; some monitoring tool might
be trying to use it?
Paul
--
Paul Emmerich
Looking for help with yo
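If it helps, the enabled mgr modules can be listed with:
$ ceph mgr module ls
which should show whether the zabbix module is active.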