Hi,
I have a fresh Nautilus Ceph cluster with radosgw as a front end. I've been
testing with a slightly modified version of
https://github.com/wasabi-tech/s3-benchmark/
I have 5 storage nodes with 4 osds each, for a total of 20 osds. I am
testing locally on a single rgw node. First, I uploade
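For reference, a typical invocation of the upstream tool looks roughly like
the following (endpoint, credentials, bucket, thread count and object size are
placeholders, and flag names may differ in a modified build):

    ./s3-benchmark -a <access-key> -s <secret-key> -u http://<rgw-host>:7480 \
        -b bench-bucket -t 32 -z 4M -d 60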
Hi Everyone,
So it recently came to my attention that on one of our clusters, running
the command "radosgw-admin usage show" returns a blank response. What is
going on behind the scenes with this command, and why might it not be
seeing any of the buckets properly? The data is still accessible over
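From what I understand, "usage show" only has anything to report if the RGW
usage log is enabled (rgw_enable_usage_log, which is off by default), e.g. in
ceph.conf for the rgw instance, followed by a radosgw restart:

    [client.rgw.<name>]
        rgw enable usage log = true

and then something like:

    radosgw-admin usage show --uid=<user> --show-log-entries=true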
I think this might be related to a problem I'm having with "ceph osd
pool autoscale-status". SIZE appears to be raw usage (data * 3 in our
case) while TARGET SIZE seems to be expecting user-facing size. For
example, I have an 87TiB dataset that I'm currently copying into a
CephFS. "du -sh" shows tha
You have 4 OSDs that are near_full, and the errors seem to point to
pg_create, possibly from a backfill. Ceph will stop backfills to
near_full osds. I'm guessing that is the cause of your blocked IO. Try
reweighting the full OSDs down to move PGs off them.
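Something along these lines (OSD id and weight are examples only):

    ceph osd reweight 12 0.9

or let Ceph pick the most over-utilized OSDs for you:

    ceph osd test-reweight-by-utilization
    ceph osd reweight-by-utilization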
Robert LeBlanc
PGP F
Hi,
ceph status reports:
root@ld3955:~# ceph -s
  cluster:
    id:     6b1b5117-6e08-4843-93d6-2da3cf8a6bae
    health: HEALTH_ERR
            1 filesystem is degraded
            1 filesystem has a failed mds daemon
            1 filesystem is offline
            insufficient standby MDS daemons available
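I assume the next step is something along the lines of the following, to see
which MDS failed and whether it can simply be restarted (hostname is a
placeholder):

    ceph fs status
    ceph health detail
    systemctl status ceph-mds@<hostname>
    systemctl restart ceph-mds@<hostname>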
Thanks for the replies,
So is NFS (via Ganesha) the way ceph(fs) is intended to be used for a public
cloud solution?
On 07/10/19 13:06 +0200, Jaan Vaks wrote:
Hi all,
I'm evaluating CephFS to serve our business as a file share that spans
across our 3 datacenters. One concern that I have is that, when using CephFS
with OpenStack Manila, all guest VMs need access to the public
storage net. This to me feels
On 10/7/19 6:06 PM, Jaan Vaks wrote:
I'm evaluating CephFS to serve our business as a file share that spans
across our 3 datacenters. One concern that I have is that, when using
CephFS with OpenStack Manila, all guest VMs need access to the
public storage net. This to me feels like a secur
Don't give untrusted users access to CephFS directly; they can ruin your day.
For example, quotas are enforced client-side and there isn't a good
way to limit IOPS.
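A quota, for example, is just an xattr that a cooperating client honours
(paths and values below are only examples):

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/somedir
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/somedir

A hostile (or merely old) client can simply ignore it.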
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247
Seems like you hit this: https://tracker.ceph.com/issues/41190
展荣臻(信泰) wrote on Tue, Oct 8, 2019 at 10:26 AM:
>
> >If the journal is no longer readable: the safe variant is to
> >completely re-create the OSDs after replacing the journal disk. (The
> >unsafe way to go is to just skip the --flush-journal part, not
>
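For reference, the flush-and-recreate sequence being discussed goes roughly
like this for a FileStore OSD, assuming the old journal is still readable
(the OSD id is a placeholder):

    systemctl stop ceph-osd@12
    ceph-osd -i 12 --flush-journal
    # replace the journal device and update the journal symlink, then:
    ceph-osd -i 12 --mkjournal
    systemctl start ceph-osd@12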