Re: [ceph-users] journal on ssd

2013-08-07 Thread Joao Pedras
Disregard the udev comment above. Copy/paste mistake. :) On Wed, Aug 7, 2013 at 4:43 PM, Joao Pedras wrote: > The journal device entries beyond the 2nd (i.e. /dev/sdg2) are not created > under /dev. Basically doing the following addresses the issue: > > --- /usr/sbin/ceph-disk 2013-07-25 00:55:

Re: [ceph-users] journal on ssd

2013-08-07 Thread Joao Pedras
The journal device entries beyond the 2nd (i.e. /dev/sdg2) are not created under /dev. Basically doing the following addresses the issue: --- /usr/sbin/ceph-disk 2013-07-25 00:55:41.0 -0700 +++ /root/ceph-disk 2013-08-07 15:54:17.538542684 -0700 @@ -857,6 +857,14 @@ 's
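
The patch body is cut off in the archive, but the symptom (device nodes for the third and later journal partitions never showing up) can usually be worked around from the shell by forcing a partition-table reread before the next OSD is prepared. A rough sketch, assuming /dev/sdg is the journal SSD; the actual ceph-disk change is not reproduced here:

  # Ask the kernel/udev to re-read sdg's partition table so that
  # /dev/sdg3, /dev/sdg4, ... appear under /dev
  partprobe /dev/sdg      # or: partx -a /dev/sdg
  udevadm settle          # wait for udev to create the device nodes
  ls /dev/sdg*            # verify before re-running ceph-disk/ceph-deploy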

Re: [ceph-users] Why is my mon store.db is 220GB?

2013-08-07 Thread Jeppesen, Nelson
Joao, Have you had a chance to look at my monitor issues? I ran 'ceph-mon -i FOO --compact' last week but it did not improve disk usage. Let me know if there's anything else I can dig up. The monitor is still at 0.67-rc2 with the OSDs at 0.61.7. On 08/02/2013 12:15 AM, Jeppesen, Nelson wrote: > Th
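
For reference, compaction can be run against a stopped monitor or triggered on a live one; a minimal sketch, assuming the monitor id really is FOO (on 0.67-rc2 the store will keep growing again if the underlying issue is not fixed):

  # offline: compact the store of a stopped monitor
  ceph-mon -i FOO --compact
  # online: ask a running monitor to compact its store
  ceph tell mon.FOO compact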

Re: [ceph-users] journal on ssd

2013-08-07 Thread Joao Pedras
Some more info about this... The subject should have been journal on another device. The issue also occurs when using another disk to hold the journal. If doing something like 'ceph-deploy node:sda:sdk', a subsequent run like 'ceph-deploy node:sdb:sdk' will show the error regarding sdb's osd. If doing 'ce
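
For context, the sequence being described maps to ceph-deploy invocations roughly like the following (hypothetical host name 'node', data disks sda/sdb, shared journal disk sdk):

  # first OSD: data on sda, first journal partition carved out of sdk
  ceph-deploy osd create node:sda:sdk
  # second OSD on the same host: needs /dev/sdk2, which is the device
  # node that never gets created and so triggers the error on sdb's osd
  ceph-deploy osd create node:sdb:sdk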

Re: [ceph-users] journal on ssd

2013-08-07 Thread Joao Pedras
Hello Tren, It is indeed: $> sestatus SELinux status: disabled Thanks, On Wed, Aug 7, 2013 at 9:33 AM, Tren Blackburn wrote: > On Tue, Aug 6, 2013 at 11:14 AM, Joao Pedras wrote: > >> Greetings all. >> >> I am installing a test cluster using one ssd (/dev/sdg) to hold the >>

Re: [ceph-users] minimum object size in ceph

2013-08-07 Thread Nulik Nol
Thanks Dan, I meant something like a PRIMARY KEY in an RDBMS, or the key in a NoSQL (key-value) database, used to perform put()/get() operations. Well, if it is a string then it's fine; I can print binary keys in hex or uuencode them or something like that. Is there a limit on the maximum string length for an object name? Regards N
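
For illustration, RADOS object names are plain strings used directly as the key for put/get; a quick sketch with the rados CLI, assuming a pool named 'testpool' (the object and file names here are made up):

  # store a local file under an object name that acts as the key
  rados -p testpool put my-key-0xdeadbeef ./payload.bin
  # fetch it back by the same name
  rados -p testpool get my-key-0xdeadbeef ./payload.out
  # list the object names in the pool
  rados -p testpool ls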

Re: [ceph-users] journal on ssd

2013-08-07 Thread Tren Blackburn
On Tue, Aug 6, 2013 at 11:14 AM, Joao Pedras wrote: > Greetings all. > > I am installing a test cluster using one ssd (/dev/sdg) to hold the > journals. Ceph's version is 0.61.7 and I am using ceph-deploy obtained from > ceph's git yesterday. This is on RHEL6.4, fresh install. > > When preparing

Re: [ceph-users] fuse or kernel fs?

2013-08-07 Thread Sage Weil
On Wed, 7 Aug 2013, James Harper wrote: > Are the fuse and kernel filesystem drivers about the same or is one > definitely better than the other? Both are actively maintained. I would say the kernel one is faster and a bit more robust, but it is also necessary to run a recent kernel to get all
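
For comparison, the two clients are mounted like this; a sketch, assuming a monitor at 192.168.0.1:6789 and the admin key stored in /etc/ceph/admin.secret (adjust for your cluster):

  # kernel client: needs a reasonably recent kernel
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
  # FUSE client: ceph-fuse package, works on older kernels
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs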

[ceph-users] ceph-deploy with partition, lvm or dm-crypt

2013-08-07 Thread Pierre BLONDEAU
Hello, I read in the documentation that using ceph-deploy is recommended over hand-written configuration files. But with it I cannot: use a partition as an OSD (rather than a whole drive); use a logical volume (LVM) as the journal (SSD, hardware RAID 1); use dm-crypt. My version of ceph-deploy is 1.0-1 on http://
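
One way around those ceph-deploy limitations is the manual OSD setup described in the docs, since it accepts arbitrary block device paths (partitions, LVs, or opened dm-crypt mappings). A rough sketch with hypothetical names (osd id 12, data partition /dev/sdb3, journal LV /dev/vg_ssd/journal-12); for dm-crypt the same paths would simply point at the /dev/mapper devices:

  # ceph.conf entry pointing the journal at a logical volume
  [osd.12]
      host = node1
      osd journal = /dev/vg_ssd/journal-12

  # on the OSD host: filesystem on the partition, then register the OSD
  mkfs.xfs /dev/sdb3
  mkdir -p /var/lib/ceph/osd/ceph-12
  mount /dev/sdb3 /var/lib/ceph/osd/ceph-12
  ceph osd create                       # allocates the next osd id
  ceph-osd -i 12 --mkfs --mkkey         # builds the data dir and journal
  ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-12/keyring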

Re: [ceph-users] How to start/stop ceph daemons separately?

2013-08-07 Thread Wido den Hollander
On 7 Aug 2013, at 10:20, "Da Chun" wrote: > On Ubuntu, we can start/stop ceph daemons separately as below: > start ceph-mon id=ceph0 > stop ceph-mon id=ceph0 > > How to do this on CentOS or RHEL? Thanks! I think this should work: $ service ceph stop mon.ceph0 $ servi
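
On sysvinit-based distributions the init script takes the daemon name from ceph.conf as an argument; a short sketch, assuming the monitor is defined there as mon.ceph0:

  # stop/start a single monitor by its ceph.conf name
  service ceph stop mon.ceph0
  service ceph start mon.ceph0
  # the same pattern works for other daemon types, e.g.
  service ceph restart osd.0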

[ceph-users] Ceph pgs stuck unclean

2013-08-07 Thread Howarth, Chris
Hi, One of our OSD disks failed on a cluster and I replaced it, but the cluster did not completely recover and I have a number of PGs which are stuck unclean: # ceph health detail HEALTH_WARN 7 pgs stuck unclean pg 3.5a is stuck unclean for 335339.172516, current state active, last act
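
The usual first steps for digging into this are to dump the stuck list and query one of the affected PGs; a sketch using pg 3.5a from the health output above:

  # list every PG currently flagged as stuck unclean
  ceph pg dump_stuck unclean
  # ask one stuck PG for its full state, acting set and recovery info
  ceph pg 3.5a query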

[ceph-users] How to start/stop ceph daemons separately?

2013-08-07 Thread Da Chun
On Ubuntu, we can start/stop ceph daemons separately as below: start ceph-mon id=ceph0 stop ceph-mon id=ceph0 How to do this on CentOS or RHEL? Thanks!