Hi, I had the default, so it was on (according to the Ceph KB). I turned it
off, but the issue persists. I noticed Bryan Stillwell (CC-ing him) had
the same issue (reported yesterday) and tried his tips about
compacting, but it doesn't do anything. However, I have to add this to his
last point:
I wonder if anyone could offer any insight on the issue below, regarding
the CentOS 7.6 kernel cephfs client connecting to a Luminous cluster. I
have since tried a much newer 4.19.13 kernel, which did not show the
same issue (but unfortunately for various reasons unrelated to ceph, we
can't go
After updating to CentOS 7.6, libvirt was updated from 3.9 to 4.5.
Executing "virsh vol-list ceph --details" now makes libvirtd use 300% CPU
for 2 minutes to show the volumes on RBD. A quick peek at tcpdump shows it
accessing rbd_data.* objects, which the previous version of libvirtd did not need.
Ceph version is 12.2.7.
Hello,
This is with RDO CentOS 7, Keystone, and swift_account_in_url. The Ceph cluster
runs Luminous.
curl 'https://object.example.org/swift/v1/AUTH_12345qhexvalue/test20_segments'
This lists the contents of the public bucket (the Read ACL is .r:* according to
"swift stat test20_segments") with 10.2.
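For comparison, the ACL can be checked and re-applied with python-swiftclient
(a sketch; it assumes the usual OS_* auth environment variables are set):

  # Show container metadata; a public container reports "Read ACL: .r:*".
  swift stat test20_segments

  # Re-apply the public-read ACL if it is missing.
  swift post -r '.r:*' test20_segments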
Thanks, Sage! That did the trick.
Wido, that seems like an interesting approach, but I wasn't brave enough to
attempt it!
Eric, I suppose this does the same thing that the crushtool reclassify
feature does?
Thank you both for your suggestions.
For posterity:
- I grabbed some 14.0.1 packages, extracted
Josef,
I've noticed that when dynamic resharding is on, it'll reshard some of our
bucket indices daily (sometimes more often). This causes a lot of wasted space
in the .rgw.buckets.index pool, which might be what you are seeing.
You can get a listing of all the bucket instances in your cluster with the
metadata commands (a sketch follows below).
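A minimal sketch of that listing, assuming radosgw-admin is pointed at the
right cluster:

  # Every bucket instance RGW knows about, as "<bucket>:<instance-id>" keys.
  radosgw-admin metadata list bucket.instance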
Hi All
Luminous 12.2.12
Single MDS
Replicated pools
A 'df' on a CephFS kernel client used to show me the usable space (i.e. the
raw space with the replication overhead applied). This was when I just had
a single cephfs data pool.
After adding a second pool to the mds and using file layouts to map
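For context, mapping a directory to a second data pool looks roughly like
this (a sketch; the pool name cephfs_data2 and the mount path are placeholders):

  # Register the extra pool with the filesystem, then pin a directory to it.
  ceph fs add_data_pool cephfs cephfs_data2
  setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/data2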
It is true for all distros. It doesn't happen the first time either. I
think it is a bit dangerous.
On 1/3/19 12:25 AM, Ashley Merrick wrote:
I have just run an apt update and noticed there are some Ceph
packages now available for update on my Mimic cluster / Ubuntu.
I have yet to install t
If you can wait a few weeks until the next release of Luminous, there
will be tooling to do this safely. Abhishek Lekshmanan of SUSE
contributed the PR. It adds some sub-commands to radosgw-admin:
radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm
If you do
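A sketch of how the pair might be used once that release lands (assuming the
sub-commands keep these exact names):

  # Review the stale bucket instances before deleting anything.
  radosgw-admin reshard stale-instances list > stale-instances.json
  # Once the listing looks sane, clean them up.
  radosgw-admin reshard stale-instances rm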
On Fri, Jan 4, 2019 at 1:53 AM David C wrote:
>
> Hi All
>
> Luminous 12.2.12
> Single MDS
> Replicated pools
>
> A 'df' on a CephFS kernel client used to show me the usable space (i.e. the
> raw space with the replication overhead applied). This was when I just had a
> single cephfs data pool.
>
Nautilus will make this easier.
https://github.com/ceph/ceph/pull/18096
On Thu, Jan 3, 2019 at 5:22 AM Bryan Stillwell wrote:
>
> Recently on one of our bigger clusters (~1,900 OSDs) running Luminous
> (12.2.8), we had a problem where OSDs would frequently get restarted while
> deep-scrubbing.
Hi,
Recently I tried adding a new node (OSD) to a Ceph cluster using the
ceph-deploy tool. I was experimenting with the tool and ended up deleting the
OSD nodes on the new server a couple of times.
Now that the Ceph OSDs are running on the new server, cluster PGs seem to be
inactive (10-15%) and they are not recovering.
If you added OSDs and then deleted them repeatedly without waiting for
replication to finish as the cluster attempted to re-balance across them,
it's highly likely that you are permanently missing PGs (especially if the
disks were zapped each time).
If those 3 down OSDs can be revived, there is
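For what it's worth, a first pass at reviving them might look like this
(a sketch; the OSD ID is a placeholder and systemd-managed OSDs are assumed):

  ceph osd tree down            # identify which OSDs are down
  systemctl start ceph-osd@12   # on that OSD's host, try restarting it
  ceph -s                       # watch whether the inactive PGs recover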
Hi,
I'm currently doing CephFS backups through a dedicated client mounting the
whole filesystem at root.
Other clients mount parts of the filesystem (kernel cephfs clients).
I have around 22 million inodes; before the backup, I have around 5M caps
loaded by clients:
#ceph daemonperf mds.x
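For reference, per-client cap counts can also be read from the MDS admin
socket (a sketch; mds.x is the daemon name used above):

  ceph daemon mds.x session ls   # each session reports its num_caps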
Hi Chris,
Indeed that's what happened. I didn't set the noout flag either, and I
zapped the disks on the new server every time. In my cluster, fre201 is the
only new server.
Current status after enabling 3 OSDs on the fre201 host:
[root@fre201 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
>
> Does anybody have a suggestion of what I could try to troubleshoot this?
Upgrading to Luminous also "solves the issue". I'll look into that :)
// Johan
Konstantin,
Thanks for the reply. I've managed to unravel it partially. Somehow (I did not
look into the SRPM), starting from this version libvirt started to calculate
the real allocation if the fast-diff feature is present on an image. Doing "rbd
object-map rebuild" on every image helped (I do not know why it was needed).