Quoting Ignacio Ocampo (naf...@gmail.com):
> Hi Ceph Community (I'm new here :),
Welcome!
> Do you have any guidance on how to proceed with this? I'm trying to
> understand why the cluster is HEALTH_WARN and what I need to do in order to
> make it health again.
This might be because there is no
Hi,
Installing ceph from the debian unstable repository (ceph version 14.2.6
(f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable), debian
package: 14.2.6-6) has fixed things for me.
(See also the bug report, its duplicate, and the changelog of
14.2.6-6.)
- bauen1
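For anyone who wants to reproduce the fix, a minimal sketch of pulling the
14.2.6-6 packages from unstable (this assumes the unstable suite is already in
your apt sources; the package selection may differ on your nodes):
   # install the fixed ceph packages from the unstable suite
   apt update
   apt -t unstable install ceph ceph-osd ceph-mon ceph-mgr
   # confirm the running version afterwards
   ceph --version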
On 1/30/20
Jan,
In trying to recover my OSDs after the upgrade from Nautilus described
earlier, I eventually managed to make things worse to the point where I'm
going to scrub and fully reinstall. So I zapped all of the devices on one
of my nodes and reproduced the ceph-volume lvm create error I mentioned
e
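For reference, the zap-and-recreate cycle described here is roughly the
following (the device path /dev/sdb is only a placeholder for one of the
zapped disks):
   # wipe LVM metadata and data from the device
   ceph-volume lvm zap /dev/sdb --destroy
   # recreate a bluestore OSD on the freshly zapped device
   ceph-volume lvm create --bluestore --data /dev/sdb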
On 1/29/20 6:03 PM, Frank Schilder wrote:
I would like to (in this order)
- set the data pool for the root "/" of a ceph-fs to a custom value, say "P"
(not the initial data pool used in fs new)
- create a sub-directory of "/", for example "/a"
- mount the sub-directory "/a" with a client key wi
Hi Ceph Community (I'm new here :),
I'm learning Ceph in a virtual environment with Vagrant/VirtualBox (I understand
this is far from a real environment in several ways, mainly performance,
but I'm ok with that at this point :)
I've 3 nodes, and after a few *vagrant halt/up*, when I do *ceph -s*, I got
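A few generic starting points that usually narrow down a HEALTH_WARN after
nodes have been halted and brought back up (nothing specific to this Vagrant
setup):
   ceph health detail   # the concrete warnings behind HEALTH_WARN
   ceph osd tree        # which OSDs came back down/out after the restart
   ceph -s              # overall PG, mon and daemon status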
Jan,
I have something new on this topic. I had gone back to Debian 9
backports and Luminous (distro packages). I had all of my OSDs working
and I was about to deploy an MDS. But I noticed that the same Luminous
packages were in Debian 10 (not backports), so I upgraded my OS to
Debian 10.
Hi Joe,
Can you grab a wallclock profiler dump from the mgr process and share
it with us? This was useful for us to get to the root cause of the
issue in 14.2.5.
Quoting Mark's suggestion from "[ceph-users] High CPU usage by
ceph-mgr in 14.2.5" below.
If you can get a wallclock profiler on the m
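For context, the wallclock profiler referred to here is Mark's gdbpmp
(https://github.com/markhpc/gdbpmp); a rough invocation sketch, with flags
that may differ between versions:
   # sample the running ceph-mgr process and write a profile dump
   ./gdbpmp.py -p $(pidof ceph-mgr) -n 1000 -o ceph-mgr.gdbpmp
   # read the collected call tree back before sharing it
   ./gdbpmp.py -i ceph-mgr.gdbpmp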
Yes, but we are offering our RBD volumes in another cloud product, which
enables them to migrate their volumes to OpenStack when they want.
Sent from my iPhone
On 29 Jan 2020, at 18:38, Matthew H wrote:
You should have used separate pool name schemes for each OpenStack cluster.
From: tda...@hotmail.com
Sent: Wednesday, January 29, 2020 12:29 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Servicing multiple OpenStack clusters from the same
Ceph cluster
Hel
Hello,
We have recently deployed that and it's working fine. We have deployed
different keys for the different OpenStack clusters of course, and they are using
the same cinder/nova/glance pools.
The only risk is if a client from one OpenStack cluster creates a volume and
the id that will be gene
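A sketch of the per-cluster key approach described above (client and pool
names are only illustrative):
   # one cinder key per OpenStack cluster, both pointing at the same pools
   ceph auth get-or-create client.cinder-clusterA mon 'profile rbd' \
     osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
   ceph auth get-or-create client.cinder-clusterB mon 'profile rbd' \
     osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'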
Hi,
On 29/01/2020 16:40, Paul Browne wrote:
> Recently we deployed a brand new Stein cluster however, and I'm curious
> whether the idea of pointing the new OpenStack cluster at the same RBD
> pools for Cinder/Glance/Nova as the Luminous cluster would be considered
> bad practice, or even potenti
Hello,
We have a medium-sized Ceph Luminous cluster that, up til now, has been the
RBD image backend solely for an OpenStack Newton cluster that's marked for
upgrade to Stein later this year.
Recently we deployed a brand new Stein cluster however, and I'm curious
whether the idea of pointing the
Modules that are normally enabled:
ceph mgr module ls | jq -r '.enabled_modules'
[
  "dashboard",
  "prometheus",
  "restful"
]
We did test with all modules disabled, restarted the mgrs and saw no difference.
Joe
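For completeness, disabling and re-enabling modules for such a test is roughly
(module names taken from the list above):
   ceph mgr module disable dashboard
   ceph mgr module disable prometheus
   ceph mgr module disable restful
   systemctl restart ceph-mgr.target   # on the active mgr host
   ceph mgr module enable dashboard    # and so on, once the test is done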
Hi Dominic,
I should have mentioned that I've set noatime already.
I have not found any obvious other mount options that would contribute to
'write on read' behaviour..
Thx
Samy
> On 29 Jan 2020, at 15:43, dhils...@performair.com wrote:
>
> Sammy;
>
> I had a thought; since you say the FS h
Sammy;
I had a thought; since you say the FS has high read activity, but you're seeing
large write I/O... is it possible that this is related to atime (Linux last
access time)? If I remember my Linux FS basics, atime is stored in the file
entry for the file in the directory, and I believe dir
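For reference, a kernel-client CephFS mount with atime updates switched off
would look roughly like this (monitor address, client name and secret file are
placeholders):
   mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
     -o name=myclient,secretfile=/etc/ceph/myclient.secret,noatime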
After having upgraded my ceph cluster from Luminous to Nautilus 14.2.6,
from time to time "ceph health detail" complains about "Long heartbeat
ping times on front/back interface seen".
As far as I can understand (after having read
https://docs.ceph.com/docs/nautilus/rados/operations/monitoring/)
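If I read those docs correctly, the recorded ping times can be dumped through
the admin socket, something like (OSD id and threshold are placeholders,
untested here):
   # list heartbeat ping times for this OSD, showing entries above 1000 ms
   ceph daemon osd.0 dump_osd_network 1000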
Hello,
Sorry, this should be
ceph osd pool application set cephfs_data cephfs data cephfs
ceph osd pool application set cephfs_metadata cephfs metadata cephfs
so that the json output looks like
"cephfs_data"
{
"cephfs": {
"data": "cephfs"
}
}
"cephfs_metadata
On 29/01/2020 10:24, Samy Ascha wrote:
> I've been running CephFS for a while now and ever since setting it up, I've
> seen unexpectedly large write i/o on the CephFS metadata pool.
>
> The filesystem is otherwise stable and I'm seeing no usage issues.
>
> I'm in a read-intensive environment, from
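A quick way to watch the write rate on just that pool while reproducing the
read workload (generic commands, nothing specific to this cluster):
   ceph osd pool stats cephfs_metadata   # per-pool client I/O rates
   ceph df                               # overall per-pool usage for comparison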
Hi,
I had looked at the output of `ceph health detail` which told me to search
for 'incomplete' in the docs.
Since that said to file a bug (and I was sure that filing a bug did not
help) I continued to purge the disks that we had overwritten, and Ceph then
did some magic and told me that the PGs w
I would like to (in this order)
- set the data pool for the root "/" of a ceph-fs to a custom value, say "P"
(not the initial data pool used in fs new)
- create a sub-directory of "/", for example "/a"
- mount the sub-directory "/a" with a client key with access restricted to "/a"
The client wil
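A rough outline of those steps (filesystem name "cephfs" and a mount of "/" at
/mnt/cephfs are assumptions; pool "P", path "/a" and client id "a" are taken
from the description above, untested):
   # make the custom pool usable by the filesystem and set it as the layout for "/"
   ceph fs add_data_pool cephfs P
   setfattr -n ceph.dir.layout.pool -v P /mnt/cephfs
   # create the sub-directory and a key restricted to it
   mkdir /mnt/cephfs/a
   ceph fs authorize cephfs client.a /a rw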
Hi Stefan,
the proper Ceph way of sending log for developer analysis is
ceph-post-file, but I'm not good at retrieving them from there...
Ideally I'd prefer to start with log snippets covering 20K lines prior
to the crash. 3 or 4 of them. This wouldn't take so much space, and you can
send them by
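For the record, ceph-post-file usage is simply (the log path is a placeholder):
   # uploads the file and prints an id that can be passed on to developers
   ceph-post-file /var/log/ceph/ceph-osd.0.log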
On 2020-01-29 01:19, jbardg...@godaddy.com wrote:
> We feel this is related to the size of the cluster, similarly to the
> previous report.
>
> Anyone else experiencing this and/or can provide some direction on
> how to go about resolving this?
What Manager modules are enabled on that node? Have
The core RADOS api will order these on the osd as it receives the
operations from clients, and nothing will break if you submit 2 in parallel.
I’m less familiar with the S3 interface but I believe appends there will be
ordered by the rgw daemon and so will be much slower. Or maybe it works the
sam
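If I remember the rados CLI correctly, the append at the RADOS level is just
(pool, object and input file are placeholders):
   # append the contents of chunk.bin to the end of the object
   rados -p mypool append myobject ./chunk.bin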
There should be docs on how to mark an OSD lost, which I would expect to be
linked from the troubleshooting PGs page.
There is also a command to force create PGs but I don’t think that will
help in this case since you already have at least one copy.
On Tue, Jan 28, 2020 at 5:15 PM Hartwig Hauschi
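For the record, the commands being referred to look like this (OSD id and PG
id are placeholders; both are destructive, so only after reading the
troubleshooting docs):
   # tell the cluster the data on this OSD is permanently gone
   ceph osd lost 12 --yes-i-really-mean-it
   # recreate an empty PG (any data it held is lost)
   ceph osd force-create-pg 2.5 --yes-i-really-mean-it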
Hi!
I've been running CephFS for a while now and ever since setting it up, I've
seen unexpectedly large write i/o on the CephFS metadata pool.
The filesystem is otherwise stable and I'm seeing no usage issues.
I'm in a read-intensive environment, from the clients' perspective and
throughput fo
Hi,
Quoting Dan van der Ster (d...@vanderster.com):
> Maybe you're checking a standby MDS ?
Looks like it. Active does have performance metrics.
Thanks,
Stefan
--
| BIT BV  https://www.bit.nl/  Kamer van Koophandel 09090351
| GPG: 0xD14839C6 +31 318 648 688 / i...@bit
On Tue, Jan 28, 2020 at 08:03:35PM +0100, bauen1 wrote:
>Hi,
>
>I've run into the same issue while testing:
>
>ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9)
>nautilus (stable)
>
>debian bullseye
>
>Ceph was installed using ceph-ansible on a vm from the repo
>http://download.ceph.