Hi Robert,
this is a bit less trivial than it might look right now. The ceph user is
usually created by installing the ceph-common package. By default it uses
UID 167. If the ceph user already exists, I would assume it uses the
existing user, allowing an operator to avoid UID collisions (
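For anyone hitting this fresh: a minimal sketch of pre-creating the user before
any packages go on, assuming UID/GID 167 are still free on the host (check
first; nothing here is specific to a particular distro):

    # Pre-create the ceph group and user with the packaging default UID/GID 167,
    # so a later ceph-common install reuses them instead of picking another ID.
    groupadd --system --gid 167 ceph
    useradd --system --uid 167 --gid 167 --home-dir /var/lib/ceph \
            --shell /usr/sbin/nologin --comment "Ceph storage service" ceph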
Hi
Recently, after a few weeks of testing Nautilus on our clusters, we decided
to upgrade our oldest one (installed in 2012 as a Bobtail release). After the
gateway upgrade we found that only for some buckets (40% of ~2000) the
same request is handled differently. With Mimic RGW - OK (200), with
nautilu
Hi all,
I have two Ceph 13.2.6 clusters in a multisite setup on HDD disks with ~466.0 M
objects and rather low usage: 63 MiB/s rd, 1.5 MiB/s wr, 978 op/s rd, 308 op/s
wr.
In each cluster there are two dedicated rgws for replication (set as zone
endpoints; other rgws have "rgw run sync threa
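As a rough sketch of that split (instance names below are made up, not from the
post), the client-facing gateways can run with the sync thread off while the two
zone-endpoint gateways keep the default:

    # ceph.conf, hypothetical client-facing RGW instance: serve S3 traffic only
    [client.rgw.client-facing-1]
        rgw_run_sync_thread = false
    # the replication gateways listed as zone endpoints keep the default
    # rgw_run_sync_thread = true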
Hi
on 2019/8/29 15:50, Płaza Tomasz wrote:
> Is there anything to speed up replication? Should I enable "rgw run sync
> thread" on all rgws, not just zone endpoints?
Did you check the network? A faster network connection should be helpful.
regards.
On 29.08.2019 16:05 +0800, Wesley Peng wrote:
> Hi
> on 2019/8/29 15:50, Płaza Tomasz wrote:
>> Is there anything to speed up replication? Should I enable "rgw run sync
>> thread" on all rgws, not just zone endpoints?
> Did you check the network? A faster network connection should be helpful.
Latency i
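Before pointing at the network it is worth checking how far sync actually lags;
a hedged example with the standard admin tool (the zone name is a placeholder):

    radosgw-admin sync status                     # metadata/data sync state for this zone
    radosgw-admin sync status --rgw-zone=<zone>   # same, run against a specific zone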
Hi,
I have created OSDs on HDDs without putting the DB on a faster drive.
To improve performance I now have a single 3.8 TB SSD drive.
I modified /etc/ceph/ceph.conf by adding this in [global]:
bluestore_block_db_size = 53687091200
This should create a RocksDB volume of 50 GiB.
Then I tried to mov
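A quick sanity check on that value, just plain arithmetic:

    53687091200 bytes = 50 * 1024 * 1024 * 1024 bytes = 50 GiB

Note that the setting only takes effect when a DB volume is created (e.g. by
ceph-volume or ceph-bluestore-tool); it does not change already-deployed OSDs.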
Hi,
Then I tried to move the DB to a new device (SSD) that is not formatted:
root@ld5505:~# ceph-bluestore-tool bluefs-bdev-new-db –-path
/var/lib/ceph/osd/ceph-76 --dev-target /dev/sdbk
too many positional options have been specified on the command line
I think you're trying the wrong option.
Sorry, I misread; your option is correct, of course, since there was no
external DB device.
This worked for me:
ceph-2:~ # CEPH_ARGS="--bluestore-block-db-size 1048576" \
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-1 \
    bluefs-bdev-new-db --dev-target /dev/sdb
inferring bluefs devices from
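A hedged follow-up, not part of the quoted output: once the new DB device is
attached it is worth checking the labels and, if the target device is larger
than the size given at creation, letting BlueFS grow into it (same placeholder
paths as above):

    ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-1          # confirm the new db label
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-1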
I've just finished a double upgrade on my ceph (PVE-based) from hammer
to jewel and from jewel to luminous.
All went well, apart from the fact that the OSD does not restart automatically
because of permission troubles on the journal:
Aug 28 14:41:55 capitanmarvel ceph-osd[6645]: starting osd.2 at - osd_data
/
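The usual remedy, sketched from memory rather than quoted from this thread (the
journal path below is a placeholder), is to hand the data dir and journal to the
ceph user that jewel and later releases run the OSD as:

    systemctl stop ceph-osd@2
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
    chown ceph:ceph /dev/disk/by-partuuid/<journal-partuuid>   # placeholder journal device
    systemctl start ceph-osd@2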
Hi all,
I'm investigating an issue with the (non-Ceph) caching layers in front of our
large EC cluster. They seem to be turning users' requests for whole objects into
lots of small byte-range requests reaching the OSDs, but I'm not sure how
inefficient this behaviour is in reality.
My limited understandi
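A hedged back-of-the-envelope with assumed numbers (nothing here is from the
post): if the caching layer splits a 4 MiB object read into 64 KiB ranges, that
is 4096 / 64 = 64 requests instead of 1, so even if each small read is cheap the
per-request overhead (network round trip, queueing, HDD seek) is multiplied 64x
before the erasure-coding layout is even considered.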
Hi Folks,
I have found similar reports of this problem in the past but can't seem to find
any solution to it.
We have a Ceph filesystem running Mimic 13.2.5.
The OSDs are running on AWS EC2 instances with CentOS 7; the OSD disk is an AWS
NVMe device.
The problem is, sometimes when rebooting an OSD in
Frank,
Thank you for the explanation. These are freshly installed machines and did
not have Ceph on them. I checked one of the other OSD nodes and there is no
ceph user in /etc/passwd, nor is UID 167 allocated to any user. I did
install ceph-common from the 18.04 repos before realizing that deploy
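For reference, the quick checks that show that state (standard tools, nothing
Ceph-specific):

    getent passwd ceph   # no output means no ceph user exists yet
    getent passwd 167    # no output means UID 167 is free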
See responses below.
> On Aug 28, 2019, at 11:13 PM, Konstantin Shalygin wrote:
>> Just a follow up 24h later, and the mgr's seem to be far more stable, and
>> have had no issues or weirdness after disabling the balancer module.
>>
>> Which isn't great, because the balancer plays an important r
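For reference, a hedged sketch of the commands toggling the balancer involves
(not quoted from this thread; applies to the mgr balancer of that era):

    ceph balancer status               # current mode and whether it is active
    ceph balancer off                  # stop automatic balancing
    ceph mgr module disable balancer   # unload the module from the mgr entirely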