Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-10 Thread Craig Lewis
On 5/7/14 15:33, Dimitri Maziuk wrote: On 05/07/2014 04:11 PM, Craig Lewis wrote: On 5/7/14 13:40, Sergey Malinin wrote: Check dmesg and SMART data on both nodes. This behaviour is similar to a failing hdd. It does sound like a failing disk... but there's nothing in dmesg, and smartmontools
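For anyone following along, a minimal sketch of the kind of per-disk check being discussed; the device names and the attributes grepped for are my assumptions, not something taken from the thread:

    #!/bin/bash
    # Quick health check for the disks backing the down OSDs.
    # /dev/sdb and /dev/sdc are placeholders; substitute the real OSD data disks.
    for dev in /dev/sdb /dev/sdc; do
        echo "== ${dev} =="
        smartctl -H "${dev}"     # overall PASSED/FAILED verdict
        smartctl -A "${dev}" | egrep -i 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
    done
    # Kernel-side I/O errors that SMART can miss (cabling, controller resets):
    dmesg | egrep -i 'ata[0-9]+|I/O error|medium error' | tail -n 20

A disk can pass the overall SMART self-assessment and still be dying, which is why the individual reallocated/pending sector counters are worth watching as well.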

[ceph-users] qemu-img break cloudstack snapshot

2014-05-10 Thread Andrija Panic
Hi, just to share my issue with the qemu-img provided with Ceph (the problem was introduced by Red Hat, not Ceph): the newest qemu-img - /qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64.rpm - was built from RHEL 6.5 source code, where Red Hat removed the "-s" parameter, so snapshotting in CloudStack up to 4.2.1 does not work, I gu
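A quick way to see whether the installed build still exposes the flag before CloudStack trips over it; the grep pattern is an assumption about how the usage line is printed, so treat this as a sketch:

    #!/bin/bash
    # Which qemu-img RPM is installed, and does its 'convert' usage line still
    # mention the -s (snapshot) option that CloudStack <= 4.2.1 relies on?
    rpm -q qemu-img
    if qemu-img -h 2>&1 | grep 'convert' | grep -q -- '-s snapshot'; then
        echo "convert still advertises -s; CloudStack snapshot copy should work"
    else
        echo "-s looks absent in this build; expect CloudStack volume snapshots to fail"
    fi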

Re: [ceph-users] Bulk storage use case

2014-05-10 Thread Craig Lewis
On 5/10/14 12:43, Cédric Lemarchand wrote: Hi Craig, Thanks, I really appreciate the well-detailed response. I carefully note your advice, specifically about the CPU starvation scenario, which, as you said, sounds scary. About IO, the data will be very resilient; in case of a crash, losing not

Re: [ceph-users] Bulk storage use case

2014-05-10 Thread Cédric Lemarchand
Hi Craig, Thanks, I really appreciate the well-detailed response. I carefully note your advice, specifically about the CPU starvation scenario, which, as you said, sounds scary. About IO, the data will be very resilient; in case of a crash, losing not fully written objects will not be a problem (th

Re: [ceph-users] Replace journals disk

2014-05-10 Thread Indra Pramana
Hi Gandalf, I tried to dump the journal partition scheme from the old SSD: sgdisk --backup=/tmp/journal_table /dev/sdg and then restore the journal partition scheme to the new SSD after it was replaced: sgdisk --restore-backup=/tmp/journal_table /dev/sdg and it doesn't work. :( parted -l doesn't sho
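For comparison, the sequence I would expect for a journal SSD swap is roughly the one below; the OSD IDs and device name are placeholders, and note that on the sgdisk builds I have used the restore option is spelled --load-backup (-l), so --restore-backup may simply be rejected:

    #!/bin/bash
    # Sketch of a journal-SSD replacement; OSD IDs and /dev/sdg are placeholders.
    # 1. Stop the OSDs that journal on this SSD and flush their journals.
    for id in 0 1 2 3; do
        service ceph stop osd.${id}
        ceph-osd -i ${id} --flush-journal
    done
    # 2. Save the partition layout, swap the hardware, restore the layout.
    sgdisk --backup=/tmp/journal_table /dev/sdg
    # ... physically replace the SSD ...
    sgdisk --load-backup=/tmp/journal_table /dev/sdg
    partprobe /dev/sdg        # make the kernel re-read the new partition table
    # 3. Recreate the journals and bring the OSDs back up.
    for id in 0 1 2 3; do
        ceph-osd -i ${id} --mkjournal
        service ceph start osd.${id}
    done

It is also worth verifying afterwards that the OSDs' journal symlinks (e.g. under /dev/disk/by-partuuid) still resolve to the new partitions.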

Re: [ceph-users] Migrate whole clusters

2014-05-10 Thread Andrey Korolyov
Anyway, replacing the set of monitors means downtime for every client, so I'm in doubt whether the phrase 'no outage' is still applicable there. On Fri, May 9, 2014 at 9:46 PM, Kyle Bader wrote: >> Let's assume a test cluster up and running with real data on it. >> Which is the best way to migrate everything to
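For the monitor piece specifically, the usual way to avoid losing quorum is to add the new monitors one at a time and then retire the old ones, rather than replacing the set in one go. A rough sketch, with invented hostnames (new-mon1, old-mon1); clients still need their mon addresses updated in ceph.conf, which is where the disruption described above comes in:

    #!/bin/bash
    # Run for one new monitor host at a time, keeping quorum intact throughout.
    ceph auth get mon. -o /tmp/mon.keyring        # current mon keyring
    ceph mon getmap -o /tmp/monmap                # current monmap
    ceph-mon -i new-mon1 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    service ceph start mon.new-mon1
    ceph quorum_status                            # wait until new-mon1 has joined
    # Only then retire one of the old monitors.
    ceph mon remove old-mon1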

Re: [ceph-users] NFS over CEPH - best practice

2014-05-10 Thread Leen Besselink
On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote: > Ideally I would like to have a setup with 2+ iSCSI servers, so that I can > perform maintenance if necessary without shutting down the VMs running on the > servers. I guess multipathing is what I need. > > Also I will need t
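From the initiator side, multipathing across two iSCSI gateways exporting the same RBD-backed LUN would look roughly like this; the portal addresses and target IQN are placeholders, and whether two gateways can safely export the same RBD image at once is exactly the open question in this thread:

    #!/bin/bash
    # Log in to the same target through both gateway portals, then let
    # dm-multipath collapse the sessions into one resilient block device.
    TARGET="iqn.2014-05.com.example:rbd-lun0"     # placeholder IQN
    for portal in 192.168.1.11 192.168.1.12; do   # placeholder gateway IPs
        iscsiadm -m discovery -t sendtargets -p "${portal}"
        iscsiadm -m node -T "${TARGET}" -p "${portal}" --login
    done
    multipath -ll                                 # one path per gateway should appear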