Hi David,
How many nodes in your cluster? k+m has to be smaller than your node count,
preferably by at least two.
How important is your data? I.e., do you have a remote mirror or backup? If
not, you may want m=3.
We use 8+2 on one cluster, and 6+2 on another.
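For reference, a profile and pool like that would be created roughly along
these lines (just a sketch; the profile/pool names, PG count and failure
domain below are placeholders, not taken from our clusters):
  # placeholder names and PG counts
  ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec-8-2
With crush-failure-domain=host, that's where the k+m vs. node count rule
comes from: every chunk has to land on a different host.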
Best,
Jake
On 7 July 2019 19:01
Hi David,
I'm running a cluster with bluestore on raw devices (no lvm) and all journals
collocated on the same disk with the data. Disks are spinning NL-SAS. Our goal
was to build storage at lowest cost, therefore all data on HDD only. I got a
few SSDs that I'm using for FS and RBD metadata. A
Morning all,
Since Luminous/Mimic, Ceph supports allow_ec_overwrites. However, this has a
performance impact that looks even worse than what I'd expect from a
Read-Modify-Write cycle.
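(For context, overwrites are enabled per pool, roughly like the following;
the pool and filesystem names here are placeholders, and the EC pool is used
as an extra CephFS data pool while metadata stays on a replicated pool:)
  # placeholder pool/fs names
  ceph osd pool set cephfs_ec_data allow_ec_overwrites true
  ceph fs add_data_pool cephfs cephfs_ec_data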
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/ also
mentions that the small writes would read
On Sun, Jul 7, 2019 at 10:30 PM Kai Stian Olstad
wrote:
> On 06.07.2019 16:43, Ashley Merrick wrote:
> > Looking at the possibility of upgrading my personal storage cluster from
> > Ubuntu 18.04 -> 19.04 to benefit from a newer version of the kernel, etc.
>
> For a newer kernel install HWE[1], at
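(Presumably something along these lines on 18.04; the package name is the
standard Ubuntu HWE meta-package, not something confirmed in this thread:)
  # standard HWE meta-package for Ubuntu 18.04
  sudo apt install --install-recommends linux-generic-hwe-18.04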
Hi Lars,
Is there a specific bench result you're concerned about?
I would think that small write perf could be kept reasonable thanks to
bluestore's deferred writes.
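(If you want to check what threshold an OSD is using for deferred writes, a
rough sketch via the admin socket on the OSD's host, assuming the default
option names and osd.0 as a placeholder:)
  # run on the host where osd.0 lives
  ceph daemon osd.0 config show | grep bluestore_prefer_deferred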
FWIW, our bench results (all flash cluster) didn't show a massive
performance difference between 3 replica and 4+2 EC.
I agree abou
On 2019-07-08T12:25:30, Dan van der Ster wrote:
> Is there a specific bench result you're concerned about?
We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that is
rather harsh, even for EC.
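(For reference, numbers like these come from a small-block random-write run,
something along the lines of the following fio job against a file on the
CephFS mount; the parameters and path here are illustrative, not our exact job:)
  # illustrative 4 KiB random-write job, placeholder path
  fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
      --bs=4k --iodepth=32 --size=4G --filename=/mnt/cephfs/bench.file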
> I would think tha
Hi,
ceph -s shows that usage is 385 GB after I deleted my pools. Do you know
why? Can anyone help me?
Thank you!
Hello - Does ceph rbd support multi-attach volumes (with the ceph luminous
version)?
Thanks
Swami
Hi Frank,
Thanks for sharing valuable experience.
Frank Schilder wrote on Mon, 8 Jul 2019 at 16:36:
> Hi David,
>
> I'm running a cluster with bluestore on raw devices (no lvm) and all
> journals collocated on the same disk with the data. Disks are spinning
> NL-SAS. Our goal was to build storage at lowest
On 08/07/2019 13:02, Lars Marowsky-Bree wrote:
On 2019-07-08T12:25:30, Dan van der Ster wrote:
Is there a specific bench result you're concerned about?
We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
pool that could do 3 GiB/s with 4M blocksize. So, yeah, well, that
On Mon, Jul 8, 2019 at 1:02 PM Lars Marowsky-Bree wrote:
>
> On 2019-07-08T12:25:30, Dan van der Ster wrote:
>
> > Is there a specific bench result you're concerned about?
>
> We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
> pool that could do 3 GiB/s with 4M blocksize. S
Hi folks,
I want to use the community repository http://download.ceph.com/debian-luminous
for my luminous cluster instead of the packages provided by ubuntu itself. But
apparently only the ceph-deploy package is available for bionic (Ubuntu 18.04).
All packages exist for trusty though. Is this
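(For reference, I'm adding the repository in the standard way; whether the
bionic packages actually exist there is exactly my question:)
  # standard repo setup for download.ceph.com, bionic as the target release
  wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
  echo deb https://download.ceph.com/debian-luminous/ bionic main | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt update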
On Mon, Jul 8, 2019 at 8:33 AM M Ranga Swami Reddy wrote:
>
> Hello - Does ceph rbd support multi-attach volumes (with the ceph luminous
> version)?
Yes, you just need to ensure the exclusive-lock and dependent features
are disabled on the image. When creating a new image, you can use the
"--image-sh
On 2019-07-08T14:36:31, Maged Mokhtar wrote:
Hi Maged,
> Maybe not related, but we find with rbd, random 4k write iops start very low
> at first for a new image and then increase over time as we write. If we
> thick provision the image it does not show this. This happens on random
> small b
Thanks Jason.
Btw, we use Ceph with OpenStack Cinder, and the Cinder release (Q and above)
supports multi-attach. Can we use OpenStack Cinder with the Q release and
Ceph rbd for multi-attach functionality?
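(On the Cinder side, my understanding is that multi-attach is requested via a
volume type property, something like the following; whether the rbd driver
honours it is exactly what I'm asking:)
  # placeholder volume type name
  cinder type-create multiattach
  cinder type-key multiattach set multiattach="<is> True"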
Thanks
Swami
On Mon, Jul 8, 2019 at 10:07 AM M Ranga Swami Reddy
wrote:
>
> Thanks Jason.
> Btw, we use Ceph with OpenStack Cinder, and the Cinder release (Q and above)
> supports multi-attach. Can we use OpenStack Cinder with the Q release and
> Ceph rbd for multi-attach functionality?
I can't speak to the Ope
Mike,
Do you know if the slides from the presentations at Ceph Day Netherlands
will be made available? (and if yes, where to find them?)
Kind regards,
Caspar Smit
On Wed, 29 May 2019 at 16:42, Mike Perez wrote:
> Hi everyone,
>
> This is the last week to submit for the Ceph Day Netherlands CFP
Hi guys,
Hope this link helps you. Until Ocata, the cinder driver does not support
multi-attach on ceph.
https://docs.openstack.org/cinder/latest/reference/support-matrix.html#operation_multi_attach
On Mon, 8 Jul 2019 at 09:13, Jason Dillaman (jdill...@redhat.com) wrote:
> On Mon,
This is very interesting, thank you. I'm curious, what is the reason
for avoiding k's with large prime factors? If I set k=5, what happens?
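(My rough understanding of the arithmetic, which may be what's behind the
advice: with the default 4 MiB objects, each object is split into k data
chunks, so k=4 gives clean 1 MiB chunks and k=8 gives 512 KiB chunks, while
k=5 gives 4194304 / 5 = 838860.8 bytes, which has to be rounded up to an
aligned chunk size - so chunk and stripe sizes stop being powers of two.)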
On Mon, Jul 8, 2019 at 8:56 AM Lei Liu wrote:
>
> Hi Frank,
>
> Thanks for sharing valuable experience.
>
> Frank Schilder 于2019年7月8日周一 下午4:36写道:
>>
>> Hi D
Just seeing if anybody has seen this? About 15 more OSDs have failed since
then. The cluster can't backfill fast enough, and I fear data loss may be
imminent. I did notice one of the latest ones to fail has lines
similar to this one right before the crash:
2019-07-08 15:18:56.170 7fc7324757
Good day,
We have a sizeable ceph deployment and use object-storage heavily. We
also integrate our object-storage with OpenStack but sometimes we are
required to create S3 keys for some of our users (aws-cli, java apps
that speak s3, etc). I was wondering if it is possible to see an audit
trail of
Hi Brett,
looks like BlueStore is unable to allocate additional space for BlueFS
at the main device. It's either lacking free space or it's too fragmented...
Would you share the OSD log, please?
Also please run "ceph-bluestore-tool --path <path-to-osd!!!> bluefs-bdev-sizes" and share the output.
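(For example, with the OSD stopped; the OSD id and path below are placeholders:)
  # stop the OSD before poking at its store
  systemctl stop ceph-osd@12
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-12 bluefs-bdev-sizes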
Tha
I should read the call stack more carefully... It's not about lacking free
space - this is rather the bug from this ticket:
http://tracker.ceph.com/issues/40080
You should upgrade to v14.2.2 (once it's available) or temporarily
switch to the stupid allocator as a workaround.
Thanks,
Igor
On 7/
I'll give that a try. Is it something like...
ceph tell 'osd.*' bluestore_allocator stupid
ceph tell 'osd.*' bluefs_allocator stupid
And should I expect any issues doing this?
On Mon, Jul 8, 2019 at 1:04 PM Igor Fedotov wrote:
> I should read call stack more carefully... It's not about lackin
That's very likely just metadata.
How many OSDs do you have? The minimum pre-allocated size for metadata is
around 1 GB per OSD. There could also be allocated but not-yet-used space
left over after deleting pools.
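(You can see how that raw usage breaks down per pool and per OSD with:)
  ceph df detail
  ceph osd df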
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
cro
On Mon, Jul 8, 2019 at 2:42 PM Maged Mokhtar wrote:
>
> On 08/07/2019 13:02, Lars Marowsky-Bree wrote:
> > On 2019-07-08T12:25:30, Dan van der Ster wrote:
> >
> >> Is there a specific bench result you're concerned about?
> > We're seeing ~5800 IOPS, ~23 MiB/s on 4 KiB IO (stripe_width 8192) on a
On 2019-07-08T19:37:13, Paul Emmerich wrote:
> object_map can be a bottleneck for the first write in fresh images
We're working with CephFS here.
--
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah, HRB 21284 (AG
Nürnberg)
"Architects should open possibilities and not determi
Thank you for your reply.
I have 84 OSDs: 7 SSDs as a cache tier for the cache pool, and 77 HDDs as the
storage pool.
Original message
From: Paul Emmerich
To: 刘亮
Cc: ceph-users
Sent: Tuesday, 9 July 2019, 01:35
Subject: Re: [ceph-users] Ceph -s shows that usage is 385GB after I delete my pools
That's very likely just metadata.
Ho
I'll give that a try. Is it something like...
ceph tell 'osd.*' bluestore_allocator stupid
ceph tell 'osd.*' bluefs_allocator stupid
And should I expect any issues doing this?
You should place this in ceph.conf and restart your OSDs.
Otherwise, this should fix new bitmap allocator issue via s
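(I.e., roughly this in ceph.conf, followed by an OSD restart:)
  [osd]
  # switch both BlueStore and BlueFS to the stupid allocator
  bluestore_allocator = stupid
  bluefs_allocator = stupid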