On 05/19/2018 01:13 AM, Webert de Souza Lima wrote:
> New question: will it make any difference in the balancing if instead of
> having the MAIL directory in the root of cephfs and the domains'
> subtrees inside it, I discard the parent dir and put all the subtrees right
> in cephfs root?
the ba
On Thu, May 17, 2018 at 6:06 PM, Uwe Sauter wrote:
> Brad,
>
> thanks for the bug report. This is exactly the problem I am having (log-wise).
You don't give any indication of what version you are running, but see
https://tracker.ceph.com/issues/23205
>>>
>>>
>>> the cluster is an Proxmo
So we have been testing this quite a bit. Having a failure leave the
filesystem only partially available is OK for us, but odd, since we don't
know in advance which part will be down. With a single MDS, by comparison,
we know everything will be blocked.
It would be nice to have an option to block all IO once the filesystem hits
a degraded state.
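In the meantime, which rank (and therefore which part of the namespace) is
affected can at least be checked from the monitor side; a rough sketch,
assuming a filesystem named "cephfs":

  ceph mds stat                 # compact view of ranks and their states
  ceph fs status cephfs         # per-rank view: which daemon holds which rank
  ceph health detail            # shows the MDS warnings when a rank is degraded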
Hi Patrick
On Fri, May 18, 2018 at 6:20 PM Patrick Donnelly
wrote:
> Each MDS may have multiple subtrees they are authoritative for. Each
> MDS may also replicate metadata from another MDS as a form of load
> balancing.
OK, it's good to know that it actually does some load balancing. Thanks.
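If the automatic balancer ever moves things around in an unwanted way,
subtrees can also be pinned to a specific rank via an extended attribute; a
minimal sketch (the paths and rank numbers below are only examples):

  # pin one domain's subtree to rank 1, another to rank 0
  setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/mail/domain-a
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/mail/domain-b
  # -v -1 reverts a directory to normal balancing
  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/mail/domain-a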
New
On Thu, May 17, 2018 at 9:05 AM Andras Pataki
wrote:
> I've been trying to wrap my head around crush rules, and I need some
> help/advice. I'm thinking of using erasure coding instead of
> replication, and trying to understand the possibilities for planning for
> failure cases.
>
> For a simplif
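For what it's worth, with erasure coding the failure domain is normally
expressed through the erasure-code profile, which in turn generates the
crush rule used by the pool; a minimal sketch (profile name, k/m values and
pg counts are only examples):

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get ec42
  ceph osd pool create ecpool 128 128 erasure ec42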
Is there any chance of sharing those slides when the meetup has finished?
It sounds interesting! :)
On Fri, May 18, 2018 at 6:53 AM Robert Sander
wrote:
> Hi,
>
> we are organizing a bi-monthly meetup in Berlin, Germany and invite any
> interested party to join us for the next one on May 28:
>
>
On Fri, May 18, 2018 at 11:56 AM Webert de Souza Lima
wrote:
> Hello,
>
>
> On Mon, Apr 30, 2018 at 7:16 AM Daniel Baumann
> wrote:
>
>> additionally: if rank 0 is lost, the whole FS stands still (no new
>> client can mount the fs; no existing client can change a directory, etc.).
>>
>> my guess
You're doing 4K direct IOs on a distributed storage system and then
comparing it to what the local device does with 1GB blocks? :)
Try feeding Ceph with some larger IOs and check how it does.
-Greg
On Fri, May 18, 2018 at 1:22 PM Rhugga Harper wrote:
>
> We're evaluating persistent block provid
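To illustrate Greg's point, a rough fio comparison might look like the
following (assuming fio is installed and /dev/rbd0 stands in for whatever
device the volume maps to; all job parameters are only examples, and writing
to the raw device destroys its data):

  # small direct random writes, similar to the original test
  fio --name=small --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
  # larger sequential writes for comparison
  fio --name=large --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=write --bs=4M --iodepth=32 --runtime=60 --time_based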
On 05/18/2018 11:19 PM, Patrick Donnelly wrote:
> So, you would want to have a standby-replay
> daemon for each rank or just have normal standbys. It will likely
> depend on the size of your MDS (cache size) and available hardware.
jftr, having 3 active mds and 3 standby-replay resulted
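On Luminous, standby-replay is configured per daemon in ceph.conf; a minimal
sketch for a standby following rank 0 (the daemon name mds.c is only a
placeholder):

  [mds.c]
      mds_standby_replay = true
      mds_standby_for_rank = 0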
Hello Webert,
On Fri, May 18, 2018 at 1:10 PM, Webert de Souza Lima
wrote:
> Hi,
>
> We're migrating from a Jewel / filestore based cephfs architecture to a
> Luminous / bluestore based one.
>
> One MUST HAVE is multiple Active MDS daemons. I'm still lacking knowledge of
> how it actually works.
> A
We're evaluating persistent block providers for Kubernetes and looking at
ceph at the moment.
We aren't seeing performance anywhere near what we expect.
I have a 50-node proof of concept cluster with 40 nodes available for
storage and configured with rook/ceph. Each has 10Gb NICs and 8 x 1TB
SSD'
Hi,
We're migrating from a Jewel / filestore based cephfs architecture to a
Luminous / bluestore based one.
One MUST HAVE is multiple Active MDS daemons. I'm still lacking knowledge
of how it actually works.
After reading the docs and ML we learned that they work by sort of dividing
the responsibili
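For reference, enabling multiple active MDS daemons on Luminous boils down
to raising max_mds; a minimal sketch, assuming the filesystem is called
"cephfs" (older releases may additionally require the allow_multimds flag):

  ceph fs set cephfs allow_multimds true   # only needed on some releases
  ceph fs set cephfs max_mds 2
  ceph fs status cephfs                    # verify that rank 1 became active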
Hello,
On Mon, Apr 30, 2018 at 7:16 AM Daniel Baumann
wrote:
> additionally: if rank 0 is lost, the whole FS stands still (no new
> client can mount the fs; no existing client can change a directory, etc.).
>
> my guess is that the root of a cephfs (/; which is always served by rank
> 0) is nee
On Fri, May 18, 2018 at 9:55 AM, Marc Roos wrote:
>
> Shouldn't the output of ceph osd status go to stdout?
Oops, that's a bug.
http://tracker.ceph.com/issues/24175
https://github.com/ceph/ceph/pull/22089
John
> So I can do something like this
>
> [@ ~]# ceph osd status |grep c01
>
> And don't need to do this
Shouldn't the output of ceph osd status go to stdout?
So I can do something like this
[@ ~]# ceph osd status |grep c01
And don't need to do this
[@ ~]# ceph osd status 2>&1 |grep c01
+1
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Kai
Wagner
Sent: Thursday, May 17, 2018 4:20 PM
To: David Turner
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Increasing number of PGs by not a factor of two?
Great summary David. Wouldn't this be worth a bl
Hi,
we are organizing a bi-monthly meetup in Berlin, Germany and invite any
interested party to join us for the next one on May 28:
https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/
The presented topic is "Highly available (active/active) NFS and CIFS
exports upon CephFS".
Kindest Regards
-
On Fri, May 18, 2018 at 3:25 PM, Donald "Mac" McCarthy
wrote:
> Ilya,
> Your recommendation worked beautifully. Thank you!
>
> Is this expected behavior, or something that should be filed as a bug?
>
> I ask because I have just enough experience with ceph at this poi
Ilya,
Your recommendation worked beautifully. Thank you!
Is this expected behavior, or something that should be filed as a bug?
I ask because I have just enough experience with ceph at this point to be very
dangerous and not enough history to know if this was expecte
Hi Antonio - you need to set !requiretty in your sudoers file. This is
documented here:
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/ but it
appears that section may not have been copied into the current docs.
You can test this by running 'ssh sds@node1 sudo whoami' from your adm
That error is a sudo error, not an SSH error. Making root login possible
without password doesn't affect this at all. ceph-deploy is successfully
logging in as sds to node01, but it is failing to execute sudo
commands without a password. To fix that you need to use `visudo` to give
the s
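A minimal sudoers sketch combining both suggestions above (the user name sds
comes from the thread; the file path is only an example):

  # /etc/sudoers.d/sds  -- edit with: visudo -f /etc/sudoers.d/sds
  Defaults:sds !requiretty
  sds ALL = (root) NOPASSWD:ALL

  # then verify from the admin node:
  ssh sds@node01 sudo whoami    # should print: root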
I tried to create a new Ceph cluster, but on my first command I received the
error shown in blue.
I searched Google for this error, but I believe it is an SSH error, not a
Ceph one.
I tried:
alias ssh="ssh -t" on the admin node
I modified the file:
Host node01
Hostname node01.domain.loc
unsubscribe ceph-users