Hi,
I've been testing out Luminous and I've noticed that at some point the
number of OSDs per node was limited by aio-max-nr. By default it's set to
65536 in Ubuntu 16.04.
Has anyone else experienced this issue?
fs.aio-nr is currently sitting at 196608 with 48 OSDs.
I have 48 OSDs per node, so I've
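For anyone hitting the same limit, a minimal sketch of checking and raising
it (the value below is only an example, not a recommendation):

    # check current usage against the limit
    sysctl fs.aio-nr fs.aio-max-nr
    # raise it at runtime
    sysctl -w fs.aio-max-nr=1048576
    # and persist it across reboots, e.g. in /etc/sysctl.d/99-ceph-aio.conf:
    #   fs.aio-max-nr = 1048576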
> Cheers, Dan
>
>
> On Aug 30, 2017 9:53 AM, "Thomas Bennett" wrote:
> >
> > Hi,
> >
> > I've been testing out Luminous and I've noticed that at some point the
> > number of OSDs per node was limited by aio-max-nr. By default it's set to
>
e suddenly listed only one cephFS. Also the
> command "ceph fs status" doesn't return an error anymore but shows the
> correct output.
> I guess Ceph is indeed a self-healing storage solution! :-)
>
> Regards,
> Eugen
>
>
> Quoting Thomas Bennett:
ame, the
> problem goes away! I would have thought that the weights do not matter,
> since we have to choose 3 of these anyway. So I'm really confused about
> this.
>
> Today I also had to change
>
> item ldc1 weight 197.489
> item ldc2 weight 197.196
>
ht 5.458
> item osd.18 weight 5.458
> item osd.19 weight 5.458
> item osd.20 weight 5.458
> item osd.21 weight 5.458
> item osd.22 weight 5.458
> item osd.23 weight 5.458
> }
>
>
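For context, a rough sketch of the usual decompile/edit/recompile cycle used
to make that kind of weight change by hand (file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit the bucket weights, e.g. the "item ldc1 weight 197.489" line
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new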
> On 2018-01-26 at 08:45, Thomas Bennett wrote:
Hi Peter,
From your reply, I see that:
1. pg 3.12c is part of pool 3.
2. The OSDs in the "up" set for pg 3.12c are: 6, 0, 12.
To check on this 'activating' issue, I suggest the following:
1. What is the rule that pool 3 should follow: 'hybrid', 'nvme' or
'hdd'? (Use the ceph osd
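A hedged sketch of commands that surface this information (the rule name is
a placeholder):

    # which crush rule does pool 3 use?
    ceph osd pool ls detail | grep "^pool 3 "
    # what does that rule actually select?
    ceph osd crush rule dump <rule-name>
    # and query the stuck pg directly
    ceph pg 3.12c query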
Hi Peter,
Looking at your problem again, you might want to keep track of this issue:
http://tracker.ceph.com/issues/22440
Regards,
Tom
On Wed, Jan 31, 2018 at 11:37 AM, Thomas Bennett wrote:
> Hi Peter,
>
> From your reply, I see that:
>
>1. pg 3.12c is part of pool 3.
>
Hi,
In trying to understand RGW pool usage, I've noticed that the
default.rgw.meta pool has a large number of objects in it. Suspiciously, it
holds about twice as many objects as my default.rgw.buckets.index pool.
As I delete and add buckets, the number of objects in both pools decreases
and incre
Hi Orit,
Thanks for the reply, much appreciated.
> You cannot see the omap size using rados ls but need to use rados omap
> commands.
> You can use this script to calculate the bucket index size:
> https://github.com/mkogan1/ceph-utils/blob/master/scripts/get_omap_kv_size.sh
Great. I had not e
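As a hedged aside, one way to eyeball the omap size of a single bucket index
shard (the object name is a placeholder):

    # count the omap keys on one index object in the index pool
    rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-marker> | wc -l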
Hi,
We have a use case where we are reading 128MB objects off spinning disks.
We've benchmarked a number of different hard drives and have noticed that
for a particular hard drive, we're experiencing slow reads by comparison.
This occurs when we have multiple readers (even just 2) reading objects
ter at combining requests before they get to the
> drive?
>
> k8
>
> On Tue, Nov 29, 2016 at 9:52 AM Thomas Bennett wrote:
>
>> Hi,
>>
>> We have a use case where we are reading 128MB objects off spinning disks.
>>
>> We've benchmarked a number of dif
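For what it's worth, a hedged sketch of the per-disk knobs we would look at,
assuming the OSD sits on /dev/sdb:

    # current IO scheduler and readahead for the backing disk
    cat /sys/block/sdb/queue/scheduler
    cat /sys/block/sdb/queue/read_ahead_kb
    # e.g. raise readahead to better suit large sequential object reads
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb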
> Of Kate Ward
> Sent: Tuesday, November 29, 2016 2:02 PM
> To: Thomas Bennett
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Is there a setting on Ceph that we can use to
> fix the minimum read size?
>
>
>
> I have no experience with XFS, but
o, so you don’t even have to rely on Ceph to avoid
> downtime. I probably wouldn’t run it everywhere at once though for
> performance reasons. A single OSD at a time would be ideal, but that’s a
> matter of preference.
>
>
>
ey're just not included in the package
distribution. Is this the desired behaviour or a misconfiguration?
Cheers,
Tom
--
Thomas Bennett
SARAO
Science Data Processing
the version to a
> folder and you can create a repo file that reads from a local directory.
> That's how I would re-install my test lab after testing an upgrade
> procedure to try it over again.
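A minimal sketch of what such a repo file could look like on an rpm-based
host; the path is an assumption:

    # /etc/yum.repos.d/ceph-local.repo
    [ceph-local]
    name=Ceph local packages
    baseurl=file:///opt/ceph-packages/
    enabled=1
    gpgcheck=0

    # after copying the rpms into place, build the repo metadata:
    createrepo /opt/ceph-packages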
>
> On Tue, Aug 28, 2018, 1:01 AM Thomas Bennett wrote:
>
>> Hi,
>>
Hi,
I'm running Luminous 12.2.5 and I'm testing cephfs.
However, I seem to have too many active mds servers on my test cluster.
How do I set one of my mds servers to become standby?
I've run ceph fs set cephfs max_mds 2 which set the max_mds from 3 to 2 but
has no effect on my running configura
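Perhaps the missing step is an explicit deactivate; a hedged sketch based on
my reading of the Luminous multi-mds docs (treat the exact syntax as
unverified):

    ceph fs set cephfs max_mds 2
    # ranks are numbered from 0, so rank 2 is the third active mds
    ceph mds deactivate cephfs:2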
Hi Patrick,
Thanks! Much appreciated.
On Tue, 15 May 2018 at 14:52, Patrick Donnelly wrote:
> Hello Thomas,
>
> On Tue, May 15, 2018 at 2:35 PM, Thomas Bennett wrote:
> > Hi,
> >
> > I'm running Luminous 12.2.5 and I'm testing cephfs.
> >
> > Ho
Hi,
I'm testing out ceph_vms vs a cephfs mount with a cifs export.
I currently have 3 active ceph mds servers to maximise throughput, and
when I have configured a cephfs mount with a cifs export, I'm getting
reasonable benchmark results.
However, when I tried some benchmarking with the ceph_v
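For comparison, the cifs side could also export CephFS directly through
Samba's vfs_ceph module; a minimal sketch of such a share, where the share
name, cephx user and paths are assumptions:

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no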
rifice that I'm willing to take for the
convenience of having it preconfigured.
Cheers,
Tom
--
Thomas Bennett
SARAO
Storage Engineer - Science Data Processing
ket 30
times in 8 hours as we will write ~3 million objects in ~8 hours.
Hence the idea that we should preshard to avoid any undesirable workloads.
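As a sketch, the kind of presharding we have in mind, with an illustrative
shard count (roughly one shard per ~100k expected objects):

    # default shard count for newly created buckets, in ceph.conf on the rgw hosts
    rgw_override_bucket_index_max_shards = 32

    # or preshard one specific, still small, bucket explicitly
    radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=32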
Cheers,
Tom
On Wed, Jun 27, 2018 at 3:16 PM, Matthew Vernon wrote:
> Hi,
>
> On 27/06/18 11:18, Thomas Bennett wrote:
>
> > We h
ike to attend, please complete the
following form to register: https://goo.gl/forms/imuP47iCYssNMqHA2
Kind regards,
SARAO storage team
--
Thomas Bennett
SARAO
Science Data Processing
Hi,
Does anyone out there use bigger than default values for rgw_max_chunk_size
and rgw_obj_stripe_size?
I'm planning to set rgw_max_chunk_size and rgw_obj_stripe_size to 20MiB,
as it suits our use case and from our testing we can't see any obvious
reason not to.
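For concreteness, a sketch of what we plan to put in ceph.conf on the rgw
nodes (20 MiB expressed in bytes; the client section name is a placeholder):

    [client.rgw.<name>]
        rgw_max_chunk_size = 20971520
        rgw_obj_stripe_size = 20971520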
Is there some convincing experi
Does anyone know what these parameters are for? I'm not 100% sure I
understand what a window is in the context of rgw objects:
- rgw_get_obj_window_size
- rgw_put_obj_min_window_size
The code points to throttling I/O. But some more info would be useful.
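For reference, the defaults as I read them in the rgw option definitions,
both 16 MiB (I may be misreading them), which would make the window the
amount of object data allowed in flight per GET/PUT before throttling kicks
in:

    rgw_get_obj_window_size = 16777216       # in-flight read budget per GET
    rgw_put_obj_min_window_size = 16777216   # minimum in-flight budget per PUT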
Kind regards,
Tom
t at the cost of increased memory usage.
>
> On 7/30/19 10:36 AM, Thomas Bennett wrote:
> > Does anyone know what these parameters are for. I'm not 100% sure I
> > understand what a window is in context of rgw objects:
> >
> > * rgw_get_obj_window_size
situations? An OSD
> blocking queries in an RBD scenario is a big deal, as plenty of VMs will
> have disk timeouts, which can lead to the VM just panicking.
>
>
>
> Thanks!
>
> Xavier
>
>
s (20M / 5)
> - bluestore_min_alloc_size_hdd = bluefs_alloc_size = 1M
> - rados subobject can be written in 4 extents each containing 4 ec
> stripe units
>
>
>
> On 30.07.19 17:35, Thomas Bennett wrote:
> > Hi,
> >
> > Does anyone out there use bigger than default