Hi Sebastien,
Thank you for following up on this. I have resolved the issue by
zapping all the disks and switching to the LVM scenario. I will open
an issue on GitHub if I ever run into the same problem again later.
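For the archives, this is roughly what I did per data disk before re-deploying (device names below are just examples, not my real layout):

# wipe the old OSD data, partitions and any leftover LVM metadata
ceph-volume lvm zap /dev/sdb --destroy
ceph-volume lvm zap /dev/sdc --destroy

After that I set osd_scenario: lvm in group_vars/osds.yml, pointed it at the same disks, and re-ran the playbook.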
Thanks and Cheers,
Cody
On Mon, Jan 21, 2019 at 4:23 AM Sebastien Han wrote:
[1] https://pasted.tech/pastes/af4e0b3b76c08e2f5790c89123a9fcb7ac7f726e
[2] https://pasted.tech/pastes/48551abd7d07cd647c7d6c585bb496af80669290
Any suggestions would be much appreciated.
Thank you very much!
Regards,
Cody
ssd/hdd, separating public and cluster traffic, etc.) into my next round of PoC.
Thank you very much!
Best regards,
Cody
On Tue, Nov 27, 2018 at 6:31 AM Vitaliy Filippov wrote:
>
> > CPU: 2 x E5-2603 @1.8GHz
> > RAM: 16GB
> > Network: 1G port shared for Ceph public and cluster
message:
# rbd map image01 --pool testbench --name client.admin
rbd: failed to add secret 'client.admin' to kernel
Any suggestions on the above error and/or debugging would be greatly
appreciated!
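For completeness, these are the basic checks I am planning to run next (rough notes only; paths assume the default /etc/ceph layout):

# confirm the admin keyring is present and readable on this client
ls -l /etc/ceph/ceph.client.admin.keyring
ceph auth get-key client.admin

# retry the map as root, since adding the secret to the kernel keyring needs privileges
sudo rbd map image01 --pool testbench --name client.admin

# check for libceph/rbd errors reported by the kernel
dmesg | tail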
Thank you very much to all.
Cody
[1] https://access.redhat.com/documenta
t place and the result of running "ansible --version"
from the /root/ceph-ansible directory initially showed the first two
paths only.
Thank you very much.
Best regards,
Cody
On Tue, Oct 23, 2018 at 9:51 AM Mark Johnston wrote:
>
> On Mon, 2018-10-22 at 20:05 -0400, Cody wrote:
vm_volumes
^ here
Affected environment:
ceph-ansible stable 3.1 + ansible 2.4.2
ceph-ansible stable 3.2 + ansible 2.6.6
What could be wrong in my case?
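For what it is worth, these are the sanity checks I plan to run next (a rough sketch; the inventory file name is just an example, and it assumes site.yml has been copied from site.yml.sample):

# syntax-check the playbook with the same inventory that fails
ansible-playbook -i hosts site.yml --syntax-check

# confirm the group_vars file itself parses as YAML
python -c "import yaml; yaml.safe_load(open('group_vars/all.yml'))"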
Thank you to all.
Regards,
Cody
That was clearly explained. Thank you so much!
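For anyone finding this thread in the archives, a quick way to see it in practice (a rough sketch; the rule id and replica count below are only examples):

# dump the compiled CRUSH map and dry-run a 6-way placement against rule 1;
# anything reported by --show-bad-mappings means CRUSH could not find enough
# distinct failure-domain buckets for that placement
ceph osd getcrushmap -o cm.bin
crushtool -i cm.bin --test --rule 1 --num-rep 6 --show-bad-mappings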
Best regards,
Cody
On Sat, Oct 20, 2018 at 1:02 PM Maged Mokhtar wrote:
>
>
>
> On 20/10/18 05:28, Cody wrote:
> > Hi folks,
> >
> > I have a rookie question. Does the number of the buckets chosen as the
> > failure domain need to be at least the number of replicas (or k+m
> > chunks) available in the CRUSH hierarchy? Or would it continue to
> > iterate down the tree and eventually work as long as there are 6 or
> > more OSDs?
Thank you very much.
Best regards,
Cody
the above three rules correct regarding the usage of 'step
choose|chooseleaf'?
Thank you!
Regards,
Cody
Here are related references from the mailing list:
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-June/010370.html
[2] http://lists.ceph.com/pipermail/ceph-users-ceph.co
So, is it okay to say that, compared to the 'firstn' mode, the 'indep'
mode may have less impact on a cluster in the event of an OSD failure?
Could I use 'indep' for a replicated pool as well?
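In case it is useful context, this is how I have been trying to compare the two modes offline (a rough sketch, assuming I am reading the crushtool options correctly; the rule id, OSD id and file names are placeholders):

# current mappings for a 6-way placement
ceph osd getcrushmap -o cm.bin
crushtool -i cm.bin --test --rule 1 --num-rep 6 --show-mappings > before.txt

# same test with osd.3 weighted to zero to simulate a failure
crushtool -i cm.bin --test --rule 1 --num-rep 6 --weight 3 0 --show-mappings > after.txt

# compare which positions in each mapping actually changed
diff before.txt after.txt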
Thank you!
Regards,
Cody
On Wed, Aug 22, 2018 at 7:12 PM Gregory Farnum wrote:
' when it comes to defining a failure domain?
Thank you very much.
Regards,
Cody
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-June/010370.html
Hi Konstantin,
I could only dream of reading this answer! Thank you so much!!!
Regards,
Cody
On Tue, Aug 21, 2018 at 8:50 AM Konstantin Shalygin wrote:
>
> On 08/20/2018 08:15 PM, Cody wrote:
>
> Hi Konstantin,
>
> Thank you for looking into my question.
>
> I was try
        rack
        step emit
}
rule hdd {
        id 2
        type replicated
        min_size 2
        max_size 11
        step take a class hdd
        step chooseleaf firstn 0 type rack
        step emit
}
Are the two rules correct?
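In case it helps to reproduce, this is how I was planning to check them before injecting anything into the cluster (a rough sketch; file names are examples):

# recompile the edited text map; crushtool rejects rule syntax errors here
crushtool -c crushmap.txt -o crushmap.new

# only push the new map once the compile (and a crushtool --test run) look sane
ceph osd setcrushmap -i crushmap.new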
Regards,
Cody
On Sun, Aug 19, 2018 at 11:55 PM Konstantin Shalygin wrote:
Hi Konstantin,
Thank you for the reply.
Would the settings under 'ceph_conf_overrides' in all.yml get applied
to the partitioning process during deployment?
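Concretely, this is what I have put there so far (the sizes are placeholders I have not validated; roughly 30 GB of DB and 2 GB of WAL per OSD):

# grep -A 3 ceph_conf_overrides group_vars/all.yml
ceph_conf_overrides:
  osd:
    bluestore_block_db_size: 32212254720
    bluestore_block_wal_size: 2147483648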
Regards,
Cody
On Sun, Aug 19, 2018 at 9:58 PM Konstantin Shalygin wrote:
>
> Hi everyone,
>
> If I choose to u
[1] http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html#non-collocated
[2] https://github.com/ceph/ceph-ansible/blob/stable-3.0/group_vars/all.yml.sample
[3] http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
Regards,
Cody
On Sun, Aug 19, 2018 at 7:59 PM Benjamin
Hi everyone,
If I choose to use the "non-collocated" scenario and Bluestore in
Ceph-Ansible, how could I define the size of the partitions on a
dedicated device used for the DB and WAL by multiple OSDs?
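For context, the layout I have in mind in group_vars/osds.yml looks roughly like this (device names are placeholders; the same NVMe entry is repeated once per OSD that should share it):

# grep -B 1 -A 6 osd_scenario group_vars/osds.yml
osd_objectstore: bluestore
osd_scenario: non-collocated
devices:
  - /dev/sdb
  - /dev/sdc
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1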
Thank you very much.
Regards,
Cody
        item a3-1 weight 3.0
        item a3-2 weight 3.0
}
Sorry about that!
> On Sat, Aug 18, 2018 at 9:43 PM Cody wrote:
>
> Hi everyone,
>
> I am new to Ceph and trying to test out my understanding on the CRUSH
> map. Attached is a hypothetical cluster diagram with 3 racks. On each
>
tation several times, but still cannot get it.
Any help would be greatly appreciated!
Regards,
Cody
Thanks Satish and Alfredo for your replies. That really helped!
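In case it helps anyone else searching the archives, the quickest way I found to see which DB/WAL device each OSD actually ends up on (assuming ceph-volume based OSDs; the NVMe device name is just an example):

# lists every OSD with its data device and its block.db / block.wal devices
ceph-volume lvm list

# quick look at how the shared SSD/NVMe is carved up
lsblk /dev/nvme0n1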
On Fri, Aug 17, 2018 at 8:04 AM Alfredo Deza wrote:
>
> On Thu, Aug 16, 2018 at 4:44 PM, Cody wrote:
> > Hi everyone,
> >
> > As a newbie, I have some questions about using SSD as the Bluestore
> >
out the max number of OSDs a single SSD
can serve for journaling? Or any rule of thumb?
3. What is the procedure to replace an SSD journal device used for
DB+WAL in a hot cluster?
Thank you all very much!
Cody