Thanks for the added info, appreciate it.
- Vlad
On Tue, Sep 10, 2019 at 5:37 PM Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> We have a single ceph cluster used by 2 openstack installations.
>
> We use different ceph pools for the 2 openstack clusters.
> For nova, cinder and glance this is straightforward.
On Wed, Sep 4, 2019 at 6:39 AM Guilherme wrote:
>
> Dear CEPHers,
> Adding some comments to my colleague's post: we are running Mimic 13.2.6 and
> struggling with 2 issues (that might be related):
> 1) After a "lack of space" event we've tried to remove a 40TB file. The file
> is not there anymore
On Wed, Sep 11, 2019 at 6:51 AM Kenneth Waegeman
wrote:
>
> We sync the file system without preserving hard links. But we take
> snapshots after each sync, so I guess deleted files which are still in
> snapshots can also end up in the stray directories?
>
> [root@mds02 ~]# ceph daemon mds.mds02 perf
All;
I found the problem, it was an identity issue.
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com
-Original Message-
From: dhils...@performair.com [mailto:dhils...@performair.com]
Sent
Alexander;
What is your operating system?
Is it possible that the dashboard module isn't installed?
I've run into "Error ENOENT: all mgr daemons do not support module 'dashboard'"
on my CentOS 7 machines, where the module is a separate package (I had to use
"yum install ceph-mgr-dashboard" to
All;
We're trying to add a RADOSGW instance to our new production cluster, and it's
not showing in the dashboard, or in ceph -s.
The cluster is running 14.2.2, and the RADOSGW is on 14.2.3.
systemctl status ceph-radosgw@rgw.s700037 returns: active (running).
ss -ntlp does NOT show port 80.
Her
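For the record, this is roughly what I am checking so far (only a sketch; it assumes the instance really is client.rgw.s700037 and that the frontend should listen on port 80):
# any bind/startup errors from the gateway itself
journalctl -u ceph-radosgw@rgw.s700037 --since "1 hour ago"
# what frontend/port the daemon is configured for (if set via the mon config db)
ceph config get client.rgw.s700037 rgw_frontends
# whether the mgr has registered the rgw service at all (this is what feeds ceph -s)
ceph service dump | grep -A 5 '"rgw"'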
On Tue, Sep 3, 2019 at 3:39 PM Guilherme wrote:
>
> Dear CEPHers,
> Adding some comments to my colleague's post: we are running Mimic 13.2.6 and
> struggling with 2 issues (that might be related):
> 1) After a "lack of space" event we've tried to remove a 40TB file. The file
> is not there anymore
We sync the file system without preserving hard links. But we take
snapshots after each sync, so I guess deleted files which are still in
snapshots can also end up in the stray directories?
[root@mds02 ~]# ceph daemon mds.mds02 perf dump | grep -i 'stray\|purge'
"finisher-PurgeQueue": {
Hello,
Running 14.2.3, updated from 14.2.1.
Until recently I've had ceph-mgr collocated with OSDs. I've installed
ceph-mgr on separate servers and everything looks OK in Ceph status
but there are multiple issues:
1. Dashboard only runs on old mgr servers. Tried restarting the
daemons and disable/
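A few things worth checking (a rough sketch, assuming the new mgr hosts also have the ceph-mgr-dashboard package installed):
# which mgr is currently active, and which URLs it is serving
ceph mgr dump | grep -E '"active_name"|"available"'
ceph mgr services
# confirm the dashboard package exists on the *new* mgr hosts, then
# disable/enable the module so the active mgr re-registers it
yum list installed ceph-mgr-dashboard
ceph mgr module disable dashboard
ceph mgr module enable dashboard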
I am also getting this error message on one node when the other host is down.
ceph -s
Traceback (most recent call last):
  File "/usr/bin/ceph", line 130, in <module>
    import rados
ImportError: libceph-common.so.0: cannot map zero-fill pages
On Tue, Sep 10, 2019 at 4:39 PM Amudhan P wrote:
> It's a test cluster; each node has a single OSD and 4GB RAM.
It's a test cluster; each node has a single OSD and 4GB RAM.
On Tue, Sep 10, 2019 at 3:42 PM Ashley Merrick
wrote:
> What specs are the machines?
>
> Recovery work will use more memory than the general clean operation, and it
> looks like you're maxing out the available memory on the machines while Ceph
> is trying to recover.
Hi,
do you use hard links in your workload? The 'no space left on device'
message may also refer to too many stray files. Strays are either files
that are waiting to be deleted (e.g. via the purge queue), or files which
have been deleted but still have hard links pointing to the same content.
Since Ceph
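If it helps, this is roughly how I would check how far behind purging is and which throttles apply (a sketch, reusing the mds.mds02 daemon name from the earlier output; option names from memory, please double-check them for your release):
# strays currently known to the cache, and purges in flight
ceph daemon mds.mds02 perf dump | grep -i 'num_strays\|pq_executing'
# current purge throttles
ceph daemon mds.mds02 config show | grep mds_max_purge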
What specs are the machines?
Recovery work will use more memory than the general clean operation, and it
looks like you're maxing out the available memory on the machines while Ceph
is trying to recover.
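If those are really 4GB nodes, it might be worth capping OSD memory and slowing recovery down a bit; this is only a sketch, not a recommendation (osd_memory_target exists from Mimic 13.2.3 on, and the numbers below are just examples):
ceph config set osd osd_memory_target 1610612736   # ~1.5 GiB per OSD
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1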
On Tue, 10 Sep 2019 18:10:50 +0800 amudha...@gmail.com wrote
I have also found the below error in dmesg.
I have also found the below error in dmesg.
[332884.028810] systemd-journald[6240]: Failed to parse kernel command
line, ignoring: Cannot allocate memory
[332885.054147] systemd-journald[6240]: Out of memory.
[332894.844765] systemd[1]: systemd-journald.service: Main process exited,
code=exited, statu
This is almost in line with how I did it before, and I was using Red Hat
OpenStack as well.
From: Dave Holland
Sent: Tuesday, September 10, 2019 5:32 AM
To: vladimir franciz blando
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: 2 OpenStack environment, 1 Ceph
We have a single ceph cluster used by 2 openstack installations.
We use different ceph pools for the 2 openstack clusters.
For nova, cinder and glance this is straightforward.
It was a bit more complicated for radosgw. In this case the setup I used was:
- creating 2 realms (one for each cloud)
-
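For reference, the realm part looked roughly like this (a sketch from memory; the realm names here are made up, and the per-realm zonegroup/zone/user steps are omitted):
radosgw-admin realm create --rgw-realm=cloud1
radosgw-admin realm create --rgw-realm=cloud2
# each gateway instance is then bound to its realm via rgw_realm /
# rgw_zonegroup / rgw_zone in its own section of the config
radosgw-admin period update --commit --rgw-realm=cloud1
radosgw-admin period update --commit --rgw-realm=cloud2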
On Tue, Sep 10, 2019 at 05:14:34PM +0800, vladimir franciz blando wrote:
> I have 2 OpenStack environments that I want to integrate with an
> existing ceph cluster. I know technically it can be done but has
> anyone tried this?
Yes, it works fine. You need each OpenStack to have a different client
key.
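For example, something along these lines for the Cinder client of each cloud (just a sketch; the client and pool names are made up, adjust to your own pools and caps):
ceph auth get-or-create client.cinder-cloud1 mon 'profile rbd' \
    osd 'profile rbd pool=volumes-cloud1, profile rbd pool=vms-cloud1, profile rbd pool=images-cloud1'
ceph auth get-or-create client.cinder-cloud2 mon 'profile rbd' \
    osd 'profile rbd pool=volumes-cloud2, profile rbd pool=vms-cloud2, profile rbd pool=images-cloud2'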
Hi,
I am using Ceph version 13.2.6 (Mimic) on a test setup, trying out CephFS.
My current setup:
3 nodes: 1 node contains two bricks and the other 2 nodes contain a single
brick each.
The volume is 3-replica, and I am trying to simulate a node failure.
I powered down one host and started getting messages in the other sy
on 2019/9/10 17:14, vladimir franciz blando wrote:
I have 2 OpenStack environments that I want to integrate with an existing
ceph cluster. I know technically it can be done but has anyone tried this?
Sure you can. Ceph can be deployed as a separate storage service;
OpenStack is just its customer.
I have 2 OpenStack environments that I want to integrate with an existing ceph
cluster. I know technically it can be done, but has anyone tried this?
- Vlad
Greetings, Konstantin Shalygin!
In that message you wrote...
> > vfs objects = acl_xattr full_audit
[...]
> > vfs objects = ceph
> You have doubled the `vfs objects` option, but this option is stackable and
> should be `vfs objects = acl_xattr full_audit ceph`, I think...
Yes, the latter overrides the former.
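i.e. something like this in the share definition (only a sketch; the ceph:* values are illustrative and assume a cephx user called 'samba'):
[cephshare]
    path = /
    vfs objects = acl_xattr full_audit ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba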