practice for a Ceph cluster with
3 monitor nodes and 3 OSD nodes, each with one 800GB NVMe drive and twelve 6TB drives.
My goal is reliable and reasonably fast performance.
Any help would be greatly appreciated!
Tim Gipson
Systems Engineer
because their NVMe drives were on different nodes. That is the case for
our gear as well.
Also, my gear is already in house, so I’ve got what I’ve got to work with at
this point, for good or ill.
Tim Gipson
On 6/16/16, 7:47 PM, "Christian Balzer" wrote:
Hello,
On Thu, 16 Jun 2016
regular replicated pool.
Does anyone have any experience setting up a pool this way and can you give me
some help or direction, or point me toward some documentation that goes over
the math behind this sort of pool setup?
Any help would be greatly appreciated!
Thanks,
Tim Gipson
Systems Engineer
an entire host without
losing the cluster. At this point I’m not sure that’s possible without
bringing in more hosts.
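The arithmetic behind that concern can be sketched as follows (my own illustration, not output from the cluster; the host and OSD counts are taken from the hardware described earlier in the thread):

```python
# Why a k=6, m=2 erasure-coded pool with crush-failure-domain=host
# cannot be placed across only 3 hosts.
k, m = 6, 2      # data chunks, coding chunks (the profile in this thread)
hosts = 3        # OSD hosts available

chunks = k + m   # every object is split into k + m = 8 chunks
# With failure domain "host", CRUSH must put each chunk on a distinct
# host, so at least k + m hosts are needed before any host can fail.
print("chunks per object:", chunks)
print("placeable on", hosts, "hosts:", hosts >= chunks)
```

With m=2 the pool could in principle absorb the loss of one whole host (one chunk per object), but only if placement succeeded in the first place, which needs at least 8 hosts here.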
Thanks for the help!
Tim Gipson
On 11/12/17, 5:14 PM, "Christian Wuerdig" wrote:
I might be wrong, but from memory I think you can use
http://ceph.com/pgcal
step emit
}
# end crush map
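For reference, the PG sizing rule that the pgcalc tool applies can be approximated by hand. This is my own back-of-the-envelope sketch; the OSD count of 36 assumes the 3 hosts with 12 HDD OSDs each described earlier in the thread, and 100 is the commonly used target PGs per OSD:

```python
# Rough PG-count estimate: (num_osds * target_per_osd) / pool_size,
# rounded up to the next power of two.
def pg_count(num_osds, pool_size, target_per_osd=100):
    raw = num_osds * target_per_osd / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

# For an erasure-coded pool, pool_size is k + m.
print(pg_count(36, 6 + 2))   # 36 * 100 / 8 = 450, rounds up to 512
```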
Thanks again for all the help!
Tim Gipson
Systems Engineer
On 11/12/17, 10:57 PM, "Christian Wuerdig" wrote:
Well, as stated in the other email I think in the EC scenario you can
set size=k+m for the pgcalc tool. If you want 10+2 then in
"type": "osd"
},
{
"op": "emit"
}
]
}
Here is my EC profile:
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=6
m=2
plugin=jerasure
technique=reed_sol_van
w=8
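For anyone following along, a profile with these values can be created with something like the following. This is a sketch: the profile name "ec62", the pool name "ecpool", and the PG count are my own placeholders, not names from this thread:

```shell
# Create an erasure-code profile matching the values above, then a pool
# that uses it. Names and PG count are illustrative.
ceph osd erasure-code-profile set ec62 \
    k=6 m=2 \
    plugin=jerasure technique=reed_sol_van \
    crush-failure-domain=host
ceph osd pool create ecpool 512 512 erasure ec62
```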
Any direction or help would be greatly appreciated.
Thanks,
Tim Gipson
Systems Engineer
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Is anyone else experiencing issues when they try to run a “ceph-deploy install”
command when it gets to the rpm import of
https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc ?
I also tried to curl the URL with no luck. I get a 504 Gateway Time-out error
in ceph-deploy.
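One thing worth checking when fetching that key by hand (a sketch on my part, and not a fix for the 504 itself): the gitweb URL contains semicolons, which the shell treats as command separators unless the URL is quoted:

```shell
# Quote the URL so ";a=blob_plain" and ";f=keys/release.asc" are not
# interpreted as separate shell commands.
curl -sfL -o release.asc \
    'https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc'
```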
Tim G.
Systems Engineer