Hi Alfredo,
The reason I want such LVs on the NVMe is to get the best performance. I have
read that 4 OSDs per NVMe is the sweet spot. Besides, since there is
only one NVMe, I think that sacrificing a small portion of it to accelerate the
block.db of the HDD OSDs is good too. About VG and LV naming, I th
On Fri, May 10, 2019 at 02:27:11PM +, Sage Weil wrote:
> If you are a Ceph developer who has contributed code to Ceph and object to
> this change of license, please let us know, either by replying to this
> message or by commenting on that pull request.
Am I correct in reading the diff that o
On Fri, May 10, 2019 at 3:21 PM Lazuardi Nasution
wrote:
>
> Hi Alfredo,
>
> Thank you for your answer; it is very helpful. Do you mean that
> --osds-per-device=3 is mistyped? It should be --osds-per-device=4 to create 4
> OSDs as expected, right? I'm trying not to create them by specifying manual
Hello,
thanks for your help.
Casey Bodley wrote:
: It looks like the default.rgw.buckets.non-ec pool is missing, which
: is where we track in-progress multipart uploads. So I'm guessing
: that your perl client is not doing a multipart upload, whereas s3cmd
: does by default.
:
: I'd recomm
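The recommendation itself is cut off above; independent of it, a quick hedged check for whether that pool actually exists on the cluster:

ceph osd pool ls | grep rgw     # should list default.rgw.buckets.non-ec
rados lspools | grep non-ec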
Hi Alfredo,
Thank you for your answer; it is very helpful. Do you mean that
--osds-per-device=3
is mistyped? It should be --osds-per-device=4 to create 4 OSDs as expected,
right? I'm trying not to create them from manually created LVs, so that I
keep the consistent Ceph way of VG and LV naming.
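For what it's worth, a minimal sketch of the batch approach being discussed, assuming a Nautilus-era ceph-volume and the device names from this thread (not a confirmed answer from the list):

ceph-volume lvm batch --bluestore --report --osds-per-device 4 /dev/nvme0n1   # preview only
ceph-volume lvm batch --bluestore --osds-per-device 4 /dev/nvme0n1            # split the NVMe into 4 data OSDs
# putting the HDDs' block.db on the *same* NVMe is the tricky part; it needs either
# pre-created VG/LVs or a ceph-volume release that accepts --db-devices:
ceph-volume lvm batch --bluestore /dev/sda /dev/sdb --db-devices /dev/nvme0n1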
By the
On Fri, May 10, 2019 at 2:43 PM Lazuardi Nasution
wrote:
>
> Hi,
>
> Let's say I have following devices on a host.
>
> /dev/sda
> /dev/sdb
> /dev/nvme0n1
>
> How can I do a ceph-volume batch which creates bluestore OSDs on the HDDs and the NVMe
> (divided into 4 OSDs) and puts the block.db of the HDDs on the NVMe to
Hi,
Let's say I have following devices on a host.
/dev/sda
/dev/sdb
/dev/nvme0n1
How can I do a ceph-volume batch which creates bluestore OSDs on the HDDs and the
NVMe (divided into 4 OSDs) and puts the block.db of the HDDs on the NVMe too?
The following is what I'm expecting for the created LVs.
/dev/sda: DATA0
/dev/sdb:
I'm setting up a new Ceph cluster with fast SSD drives, and there is
one problem I want to make sure to address straight away:
comically-low OSD queue depths.
On the past several clusters I built, there was one major performance
problem that I never had time to really solve, which is this:
regardl
Hi all,
What is the recommended way for Samba gateway integration: using
vfs_ceph or mounting CephFS via the kernel client? I tested the kernel
solution in a CTDB setup and it gave good performance; does it have any
limitations relative to vfs_ceph?
Cheers /Maged
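A minimal vfs_ceph share definition, as a sketch of the alternative being asked about (the share name, ceph user and settings are assumptions, not taken from this thread), would go into smb.conf roughly like this:

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no

The kernel-client alternative instead shares a directory under a normal CephFS mount (mount -t ceph ...), so Samba treats it like any local filesystem.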
On 5/10/19 10:20 AM, Jan Kasprzak wrote:
Hello Casey (and the ceph-users list),
I am returning to my older problem to which you replied:
Casey Bodley wrote:
: There is a rgw_max_put_size which defaults to 5G, which limits the
: size of a single PUT request. But in that case, the http
Hi everyone,
-- What --
The Ceph Leadership Team[1] is proposing a change of license from
*LGPL-2.1* to *LGPL-2.1 or LGPL-3.0* (dual license). The specific changes
are described by this pull request:
https://github.com/ceph/ceph/pull/22446
If you are a Ceph developer who has contribut
Ceph-ansible 3.2, rolling upgrade mimic -> nautilus. The ansible file sets the
flag "norebalance". When there is *no* I/O to the cluster, the upgrade works
fine. When upgrading with I/O running in the background, some PGs become
`active+undersized+remapped+backfilling`.
The norebalance flag prevents them from ba
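A hedged illustration of the flags involved (whether clearing them mid-upgrade is appropriate depends on the exact situation above):

ceph osd dump | grep flags     # see which cluster flags are currently set
ceph -s                        # shows the flags plus the PG states quoted above
ceph osd unset norebalance     # clear the flag once the upgrade step is done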
Hello Casey (and the ceph-users list),
I am returning to my older problem to which you replied:
Casey Bodley wrote:
: There is a rgw_max_put_size which defaults to 5G, which limits the
: size of a single PUT request. But in that case, the http response
: would be 400 EntityTooLarge. For m
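As a hedged sketch of how that limit can be inspected or raised (the option name is from the quote above; the daemon name "client.rgw.gw1" and the 10 GiB value are made-up examples):

ceph daemon client.rgw.gw1 config get rgw_max_put_size        # on the RGW host
ceph config set client.rgw.gw1 rgw_max_put_size 10737418240   # 10 GiB
# or in ceph.conf under [client.rgw.gw1]:  rgw max put size = 10737418240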
Thanks for your reply, Janne!
I was estimating the theoretical maximum recovery speed for an erasure-coded
pool. My calculation may not be accurate; I hope it is close to correct and
that the list can help here.
As for hardware/software RAID, a user can set the speed for rebuilding.
Some hardware
ceph.conf is preferred as a local override.
You can check which option a running daemon is actually using:
ceph daemon <type>.<id> config diff
This shows the settings it got from all sources and which one is actually
in use at the moment.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster?
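In concrete form (the daemon names below are examples, not from this thread):

ceph daemon osd.12 config diff        # run on the host where osd.12 lives
ceph daemon mon.$(hostname -s) config diff
ceph daemon osd.12 config show        # current values without the source info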
Hey Kenneth,
We encountered this when the number of strays (unlinked files yet to be
purged) reached 1 million, which is a result of many, many file removals
happening on the fs repeatedly. It can also happen when there are more than
100k files in a dir with default settings.
You can tune it via '
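The exact option is cut off above; as hedged pointers, the stray count can be read from the MDS perf counters, and the ~100k-files-per-directory behaviour is most likely the dirfrag size limit (the mds name below is an example):

ceph daemon mds.ceph01 perf dump | grep -i stray       # num_strays etc.
ceph config set mds mds_bal_fragment_size_max 200000   # default is 100000; raise with care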
About how many files are we talking here?
Implementation detail on file deletion to understand why this might happen:
deletion is async: deleting a file inserts it into the purge queue, and the
actual data is removed in the background.
Paul
--
Paul Emmerich
Looking for help with your Ceph
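A hedged way to watch that background purge in action (the mds name is an example):

ceph daemon mds.ceph01 perf dump purge_queue
# pq_executing / pq_executed show items currently being purged vs. already done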
On Fri, 10 May 2019 at 14:48, Poncea, Ovidiu <
ovidiu.pon...@windriver.com> wrote:
> Oh... joy :) Do you know if, after replay, ceph-mon data will decrease or
> do we need to do some manual cleanup? Hopefully we don't keep it in there
> forever.
>
You get the storage back as soon as the situation
Oh... joy :) Do you know if, after replay, ceph-mon data will decrease or do we
need to do some manual cleanup? Hopefully we don't keep it in there forever.
From: Paul Emmerich [paul.emmer...@croit.io]
Sent: Thursday, May 09, 2019 1:36 PM
To: Janne Johanss
On Thu, 9 May 2019 at 17:46, Feng Zhang wrote:
> Thanks, guys.
>
> I forgot the IOPS. So since I have 100 disks, the total
> IOPS = 100 x 100 = 10K. For the 4+2 erasure coding, when one disk fails, each
> reconstruction needs to read 5 and write 1 objects. Then the whole 100 disks
> can do 10K/6 ~ 2K rebuilding actions per secon
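The same back-of-the-envelope arithmetic, spelled out (assuming ~100 IOPS per HDD and 5 reads + 1 write per rebuilt object):

echo $(( 100 * 100 ))      # aggregate IOPS across 100 disks -> 10000
echo $(( 100 * 100 / 6 ))  # rebuild operations per second   -> 1666, i.e. ~2K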
Thanks,
I will try to "backport" this to Ubuntu 16.04.
Ansgar
On Thu, 9 May 2019 at 12:33, Paul Emmerich wrote:
>
> We maintain vfs_ceph for Samba at mirror.croit.io for Debian Stretch and
> Buster.
>
> We apply a9c5be394da4f20bcfea7f6d4f5919d5c0f90219 on Samba 4.9 for
> Buster to fix thi
Hi all,
I am seeing issues on cephfs running 13.2.5 when deleting files:
[root@osd006 ~]# rm /mnt/ceph/backups/osd006.gigalith.os-2b5a3740.1326700
rm: remove regular empty file
‘/mnt/ceph/backups/osd006.gigalith.os-2b5a3740.1326700’? y
rm: cannot remove
‘/mnt/ceph/backups/osd006.gigalith.os-2b
Yes, we recommend this as a precaution to get the best possible IO
performance for all workloads and usage scenarios. 512e doesn't bring any
advantage and in some cases can mean a performance disadvantage. By the
way, 4kN and 512e cost exactly the same at our dealers.
Whether this really makes a d
Hello,
I'm not yet sure if I'm allowed to share the files, but if you find one of
those, you can verify the md5sum.
27d2223d66027d8e989fc07efb2df514 hugo-6.8.0.i386.deb.zip
b7db78c3927ef3d53eb2113a4e369906 hugo-6.8.0.i386.rpm.zip
9a53ed8e201298de6da7ac6a7fd9dba0 hugo-6.8.0.i386.tar.gz.zip
2dea
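For example, a downloaded archive can be checked against the list above like this:

md5sum hugo-6.8.0.i386.deb.zip
# expected: 27d2223d66027d8e989fc07efb2df514  hugo-6.8.0.i386.deb.zip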
Hi Cephs
We migrated the ceph.conf into the cluster's configuration database.
Which source gets preference when a daemon starts up: ceph.conf or the
configuration database?
Is the cluster configuration database read live, or do we still need to
restart daemons?
Regards
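A hedged sketch of the central config database commands on a Mimic/Nautilus-era cluster (the option and daemon names below are examples only):

ceph config dump                          # everything stored in the mon config DB
ceph config get osd.0 osd_max_backfills   # the value osd.0 would use
ceph config set osd osd_max_backfills 2   # change it live for all OSDs
# many options take effect immediately, some still need a daemon restart,
# and a local ceph.conf entry overrides the database (see the earlier reply)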
Note that the issue I am talking about here is how a "Virtual" Ceph RBD
disk is presented to a virtual guest, and specifically for Windows guests
(Linux guests are not affected). I am not at all talking about how the
physical disks are presented to Ceph itself (although Martin was, he wasn't
clear
Hmmm, so if I have (WD) drives that list this in smartctl output, I
should try to reformat them to 4k, which will give me better
performance?
Sector Sizes: 512 bytes logical, 4096 bytes physical
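For reference, a hedged way to check which format a drive currently reports (the device name is a placeholder):

smartctl -i /dev/sdX | grep -i 'sector size'
# a 512e drive reports "512 bytes logical, 4096 bytes physical" as quoted above;
# a 4kN drive reports 4096 bytes for both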
Do you have a link to this download? Can only find some .cz site with
the rpms.
-Orig
Hello Trent,
many thanks for the insights. We always suggest using 4kN over 512e HDDs
to our users.
As we recently found out, WD Support offers a tool called HUGO to
reformat 512e drives to 4kN with "hugo format -m -n max
--fastformat -b 4096" in seconds.
Maybe that helps someone who h
On 10/05/2019 08:42, EDH - Manuel Rios Fernandez wrote:
> Hi
>
> Yesterday night we added 2 Intel Optane Nvme
>
> Generated 4 partitions to get the max performance (Q=32) out of those monsters,
> 8 partitions of 50GB in total.
>
> Moved the rgw.index pool; it got filled to nearly 3GB.
>
> And...
>
> Still the
I recently was investigating a performance problem for a reasonably sized
OpenStack deployment having around 220 OSDs (3.5" 7200 RPM SAS HDD) with
NVMe Journals. The primary workload is Windows guests backed by Cinder RBD
volumes.
This specific deployment is Ceph Jewel (FileStore + SimpleMessenger)