Hi all,
I'm having trouble adding OSDs to a storage node; I've got about
28 OSDs running, but adding more fails.
Typical log excerpt:
2015-09-16 13:55:58.083797 7f3e7b821800 1 journal _open
/var/lib/ceph/osd/ceph-28/journal fd 20: 21474836480 bytes, block
size 4096 bytes, directio = 1, aio = 1
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 16.09.15 16:41, Peter Sabaini wrote:
> Hi all,
>
> I'm having trouble adding OSDs to a storage node; I've got
> about 28 OSDs running, but adding more fails.
So, it seems the requisite knob was sysctl fs.aio-max-nr
By defaul
our opinion?
Thanks,
peter.
> Shinobu
>
> - Original Message - From: "Peter Sabaini"
> To: ceph-users@lists.ceph.com Sent:
> Thursday, September 17, 2015 11:51:11 PM Subject: Re:
> [ceph-users] ceph osd won't boot, resource shortage?
>
> On 16.0
In the Jewel relnotes there's an item
rgw: indexless (pr#7786, Yehuda Sadeh)
which I find intriguing. However, the linked PR is WIP and I can't find
any docs. The only thing that turned up for me is
https://bugzilla.redhat.com/show_bug.cgi?id=1314584 , with Status NEW
This mentions that index
On 2016-04-11 18:15, hp cre wrote:
> -- Forwarded message --
> From: "hp cre" mailto:hpc...@gmail.com>>
> Date: 11 Apr 2016 15:50
> Subject: Re: [ceph-users] Ubuntu xenial and ceph jewel systemd
> To: "James Page" mailto:james.p...@ubuntu.com>>
> Cc:
>
> Here is exactly what has be
What kind of commit/apply latency increases have you seen when adding a
large numbers of OSDs? I'm nervous how sensitive workloads might react
here, esp. with spinners.
cheers,
peter.
On 24.07.19 20:58, Reed Dier wrote:
> Just chiming in to say that this too has been my preferred method for
> add
On 26.07.19 15:03, Stefan Kooman wrote:
> Quoting Peter Sabaini (pe...@sabaini.at):
>> What kind of commit/apply latency increases have you seen when adding a
>> large numbers of OSDs? I'm nervous how sensitive workloads might react
>> here, esp. with spinners.
&