Hi Oliver,
can you post here on the mailing list the steps you took?
From the IRC logs you said: "if I use 'choose osd', it works -- but
'chooseleaf ... host' doesn't work".
So, to have data balanced between 2 rooms, is the rule "step chooseleaf
firstn 0 type room" correct?
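For reference, I mean a complete rule roughly along these lines (just a
sketch -- the rule name, ruleset id and the «default» root are placeholders;
only the chooseleaf step is the part in question):

    # placeholder names/ids; only the chooseleaf step is the part in question
    rule replicated_room {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type room
            step emit
    }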
Thanks
--
Marco
Hi all,
since this list has many members working with what we now call "Big
Data" infrastructures and there is also a healthy number of scientific
members, I figured the following CfP might be of interest. I am on the
PC and the program chair is my former PhD advisor, so it's not
completely unrelat
Hi,
yes, the «step chooseleaf firstn 0 type room» rule is correct. My problem was
that «crushtool --test» was reporting «bad mapping» PGs, and some tests
on an *empty* pool left me with PGs stuck in «active+remapped».
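For what it's worth, the check I was running looks roughly like this (a
sketch -- the compiled map file name, rule id and replica count are just
examples, and flag availability may depend on your crushtool version):

    # compiled map, rule id 0, 2 replicas; list any PGs that fail to map
    crushtool -i crushmap.compiled --test --rule 0 --num-rep 2 --show-bad-mappings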
After some tests and reading, I saw that :
« For large clusters, some small percentages o
Hello,
We're planning 3 hosts with 12 HDDs in each host. Which is better: setting
up one OSD per HDD, or creating a hardware RAID-6 across all 12 HDDs so that
a single OSD uses the whole disk space in each host?
Thank you,
Mike
read
https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault
Dima
I'm using v0.56.
If I don't want to specify the osd mkfs, osd mount and devs settings in the
configuration file, will there be default values?
Waed,
These are optional settings. If you specify them, Ceph will create the
file system for you. If you are just performing a local install for
testing purposes, you can omit the values. You'd need to have a 'devs'
setting for each OSD in order for mkcephfs to build a file system for
you. The def
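For illustration, an old-style ceph.conf fragment using those options might
look roughly like this (a sketch -- the fs type, mkfs/mount options, hostname
and device path are assumptions, not defaults):

    [osd]
            # how new OSD file systems get created and mounted (values are examples)
            osd mkfs type = xfs
            osd mkfs options xfs = -f
            osd mount options xfs = rw,noatime

    [osd.0]
            host = node1
            # device mkcephfs should build the file system on for this OSD
            devs = /dev/sdb1

With a 'devs' entry like that, mkcephfs knows which device to build the file
system on for osd.0.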
Friend,
My name is Anthony Elias; I invite you to partner with me in a rewarding
business venture. Details to follow on your indication of interest. My email is:
telias...@yahoo.com .
Regards,
Anthony
Hi,
We need to store ~500M small files (<1MB each) and we have been looking at the
RadosGW solution. We expect about 20 ops/sec (read+write). I'm trying to
understand how the monitor nodes store CRUSH maps and what the
limitations are.
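For reference, I know I can pull the current CRUSH map from the monitors and
decompile it for inspection, e.g. (file names are just examples):

    # fetch the binary map from the monitors, then decompile it to text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt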
For instance, is there any recommended max number of objects per
Mo
Mark,
I would just like to add that we, too, are seeing the same behavior with
QEMU/KVM/RBD. Maybe it is a common symptom of high I/O with this setup.
Regards,
Andrew
On 3/8/2013 12:46 AM, Mark Nelson wrote:
On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote:
On 03/06/2013 02:31 PM, M