I was wondering if this is provided somehow? All I see mentioned is RBD and
radosgw. If you have applications built with librados, surely OpenStack
must have a way to provide access to it?
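For what it's worth, raw RADOS access doesn't require anything OpenStack-specific: any client that can reach the monitors with a ceph.conf and a keyring can use librados, the same way the rados CLI does. A quick sanity check, with the pool and client names below just placeholders:

$ rados --id myapp -p mypool put hello-object ./hello.txt   # write an object via librados
$ rados --id myapp -p mypool get hello-object -             # read it back to stdout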
I kind of doubt this will provide much of an advantage. Recovery is about the
only time you might see some speedup, and I'm not sure network throughput is
always the bottleneck there. There was some discussion about this a while
back; client IO is still going to be impacted by recovery.
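If you want to see whether the network really is the limiting factor during recovery, comparing the recovery rate Ceph reports against NIC utilisation on the OSD nodes is a quick check, e.g.:

$ ceph -s               # overall recovery vs client I/O rates
$ ceph osd pool stats   # per-pool recovery and client I/O
$ sar -n DEV 1          # NIC throughput on an OSD node (sysstat)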
can you just do a kickstart and use ceph-ansible?
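If you do go with ceph-ansible, the inventory is pretty small; from memory it's roughly along these lines (host names are placeholders, group names as ceph-ansible expects them), plus group_vars for the release and networks, and site.yml does the rest:

[mons]
ceph-node1
ceph-node2
ceph-node3

[mgrs]
ceph-node1

[osds]
ceph-node[1:3]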
On Tue, Sep 17, 2019 at 9:59 AM Paul Emmerich wrote:
> The best tool to automate both OS and Ceph deployment is ours:
> https://croit.io/
>
> Check out our demo: https://croit.io/croit-virtual-demo
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
The dashboard's bind address and port can be changed with the following commands:
$ ceph config set mgr mgr/dashboard/$name/server_addr $IP
$ ceph config set mgr mgr/dashboard/$name/server_port $PORT
https://docs.ceph.com/docs/mimic/mgr/dashboard/
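Those take effect after restarting the dashboard module (or the active mgr), and ceph mgr services shows where it actually ended up listening; the output looks something like:

$ ceph mgr services
{
    "dashboard": "https://<active-mgr-host>:<port>/"
}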
On Tue, Sep 17, 2019 at 1:59 AM Lenz Grimmer wrote:
> On 9/17/19 9:21 AM, solarflow99 wrote:
> <https://docs.ceph.com/docs/mimic/mgr/dashboard/> (To
> get the dashboard up and running quickly, you can generate and install a
> self-signed certificate using the following built-in command).
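The built-in command being referred to there is the dashboard's self-signed certificate generator; on mimic that should be:

$ ceph dashboard create-self-signed-cert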
>
> Regards
> Thomas
>
> On 17.09.2019 at 09:12, Robert Sander wrote:
> >
I have mimic installed and for some reason the dashboard isn't showing up.
I can see which node is listed as the active "mgr" and the module is enabled,
but nothing is listening on port 8080:
# ceph mgr module ls
{
    "enabled_modules": [
        "dashboard",
        "iostat",
        "status"
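In that situation a few things are worth checking: whether the dashboard module thinks it is serving at all, what the active mgr is actually bound to, and whether a self-signed certificate was ever created (without one the mimic dashboard may not bind at all, and the default port there is 8443 with SSL rather than 8080, if I remember right). Roughly:

$ ceph mgr services                 # URL the dashboard believes it is serving on
$ ss -tlnp | grep ceph-mgr          # sockets the active mgr is actually listening on
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard  # restart the module after any config change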
Disks are expected to fail, and every once in a while I'll lose one, so
that was expected and didn't come as any surprise to me. Are you
suggesting failed drives almost always stay down and out?
On Thu, Sep 5, 2019 at 11:13 AM Ashley Merrick wrote:
> I would suggest checking the logs and seein
No, I mean Ceph sees it as a failure and marks it out for a while.
On Thu, Sep 5, 2019 at 11:00 AM Ashley Merrick wrote:
> Is your HD actually failing and vanishing from the OS and then coming back
> shortly?
>
> Or do you just mean your OSD is crashing and then restarting itself
> shortly later?
One of the things I've come to notice is that when HDDs fail, they often
recover after a short time and get added back into the cluster. This causes
the data to rebalance back and forth, and if I set the noout flag I get a
health warning. Is there a better way to avoid this?
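One option, instead of cluster-wide noout, is to set the flag on just the affected OSD, or to stretch the auto-out timer. If your release has the per-OSD flags (they appeared around Luminous, as far as I know) it looks roughly like this, with 12 standing in for the OSD id:

$ ceph osd add-noout 12                               # keep only this OSD from being marked out
$ ceph osd rm-noout 12                                # clear it once the drive is dealt with
$ ceph config set mon mon_osd_down_out_interval 1800  # or just lengthen the auto-out timer (seconds)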
How about also increasing osd_recovery_threads?
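For what it's worth, the knobs I'd normally pair with the backfill setting look roughly like this; the values are purely illustrative, and pushing them up will eat into client I/O:

$ ceph tell 'osd.*' injectargs '--osd-recovery-max-active 8'
$ ceph tell 'osd.*' injectargs '--osd-recovery-sleep 0'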
On Wed, Sep 4, 2019 at 10:47 AM Guilherme Geronimo <guilherme.geron...@gmail.com> wrote:
> Hey hey,
>
> First of all: 10GBps connection.
>
> Then, some magic commands:
>
> # ceph tell 'osd.*' injectargs '--osd-max-backfills 32'
> # ceph tell 'osd.
> You can also set the noout flag on a specific
> OSD, which is much safer.
>
> Best regards,
>
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: ceph-users on behalf of
> solarflow99
> Sent: 03 September 201