Re: [ceph-users] pages stuck unclean (but remapped)

2014-02-26 Thread Gautam Saxena
…tly weighted in CRUSH by their size? If not, you want to apply that there and return all of the monitor override weights to 1. -Greg (Software Engineer #42 @ http://inktank.com | http://ceph.com) On Tue, Feb 25, 2014 at 9:19 AM, Gautam Saxena wrote: …
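For reference, the two weights Greg distinguishes here are set with different commands; a minimal sketch (OSD id 12 and the 2.0 weight are placeholders, not values from this thread):

    # Return the temporary override weight (set earlier by
    # reweight-by-utilization) to 1 for one OSD:
    ceph osd reweight 12 1.0

    # Set the permanent CRUSH weight to match disk capacity,
    # conventionally in TB (e.g. 2.0 for a 2 TB disk):
    ceph osd crush reweight osd.12 2.0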

Re: [ceph-users] pages stuck unclean (but remapped)

2014-02-25 Thread Gautam Saxena
…[remap]ped+wait_backfill; 5 active+remapped+wait_backfill+backfill_toofull; 153 active+remapped; 10 active+remapped+backfilling; client io 4369 kB/s rd, 64377 B/s wr, 26 op/s. On Sun, Feb 23, 2014 at 8:09 PM, Gautam Saxena wrote: I h…

[ceph-users] pages stuck unclean (but remapped)

2014-02-23 Thread Gautam Saxena
I have 19 pages that are stuck unclean (see below result of ceph -s). This occurred after I executed a "ceph osd reweight-by-utilization 108" to resolve problems with "backfill_toofull" messages, which I believe occurred because my OSD sizes vary significantly (from a low of 600GB to a hi…
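The commands at issue, sketched with the threshold from this post (the pg query is the standard way to list the stuck PGs mentioned above):

    # List placement groups stuck in the unclean state:
    ceph pg dump_stuck unclean

    # Reweight OSDs whose utilization exceeds 108% of the
    # cluster average, as the poster ran:
    ceph osd reweight-by-utilization 108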

[ceph-users] can one slow hardisk slow whole cluster down?

2014-01-28 Thread Gautam Saxena
If one node, which happens to have a single RAID 0 hard disk, is "slow", would that impact the whole Ceph cluster? That is, when VMs interact with the rbd pool to read and write data, would the KVM client "wait" for that slow hard disk/node to return the requested data, thus making that slow hard disk…
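One standard way to look for such an outlier on recent releases (interpreting the numbers is left to the reader):

    # Per-OSD commit and apply latency; a consistently high
    # outlier here often identifies the slow disk:
    ceph osd perf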

[ceph-users] maximizing VM performance (on CEPH)

2014-01-18 Thread Gautam Saxena
I'm trying to maximize ephemeral Windows 7 32-bit performance with CEPH's RBD as the back-end storage engine. (I'm not worried about data loss, as these VMs are all ephemeral, but I am worried about performance and responsiveness of the VMs.) My questions are: 1) Are there any recommendations or bes…
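One commonly suggested starting point for this kind of workload is client-side RBD caching; a sketch of the ceph.conf settings (a starting point, not a tuning guide):

    [client]
    rbd cache = true
    # Safe default: behave as writethrough until the guest
    # issues its first flush:
    rbd cache writethrough until flush = true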

[ceph-users] openstack -- does it use "copy-on-write"?

2014-01-08 Thread Gautam Saxena
When booting an image from OpenStack in which CEPH is the back-end for both volumes and images, I'm noticing that it takes about ~10 minutes during the "spawning" phase -- I believe OpenStack is making a full copy of the 30 GB Windows image. Shouldn't it be a "copy-on-write" image and therefore ta…
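For copy-on-write cloning to kick in, Glance must expose image locations and the image itself must be in raw format (QCOW2 images get fully copied). A sketch of the relevant glance-api.conf line:

    # glance-api.conf: let Cinder/Nova see the RBD location of
    # images so they can clone them copy-on-write:
    show_image_direct_url = True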

[ceph-users] disabling cephx authentication & openstack

2014-01-08 Thread Gautam Saxena
If I've installed Ceph (and OpenStack) with cephx authentication enabled, but I *now* want to disable cephx authentication using the techniques described in the Ceph documentation (http://ceph.com/docs/master/rados/operations/authentication/#disable-cephx), do I need to reconfigure anything on Open…
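The documented disable procedure boils down to the following ceph.conf settings (sketched here; every daemon and client needs the change and a restart):

    [global]
    auth cluster required = none
    auth service required = none
    auth client required = none

On the OpenStack side, any rbd_user and rbd_secret_uuid settings in cinder.conf (and the matching libvirt secret) that reference a cephx key would presumably need updating as well.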

Re: [ceph-users] ceph website for rpm packages is down?

2013-12-03 Thread Gautam Saxena
…[D]ownloading Packages: ceph-0.72.1-0.el6.x86_64: failure: ceph-0.72.1-0.el6.x86_64.rpm from ceph: [Errno 256] No more mirrors to try. On Tue, Dec 3, 2013 at 4:07 PM, Gautam Saxena wrote: In trying to download the RPM packages for CEPH, the yum commands timed out. I then tried jus…

[ceph-users] ceph website for rpm packages is down?

2013-12-03 Thread Gautam Saxena
In trying to download the RPM packages for CEPH, the yum commands timed out. I then tried just downloading them via the Chrome browser (http://ceph.com/rpm-emperor/el6/x86_64/ceph-0.72.1-0.el6.x86_64.rpm) and it only downloaded 64KB. (The website www.ceph.com is slow too.) …

[ceph-users] does ceph-deploy adding of osds automatically update ceph.conf? It seems no...

2013-11-29 Thread Gautam Saxena
I've got Ceph up and running on a 3-node CentOS 6.4 cluster. However, after I (a) set the cluster to noout as follows: ceph osd set noout, (b) rebooted 1 node, and (c) logged into that node, I tried to do: service ceph start osd.12, but it returned with the error message: /etc/init.d/ceph: osd.12 not found (…
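One common fix cited for this error: ceph-deploy does not write per-daemon sections, and the sysvinit script locates local daemons through ceph.conf. A sketch of the section that makes "service ceph start osd.12" resolvable (the hostname is a placeholder):

    [osd.12]
        host = node3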

[ceph-users] installing OS on software RAID

2013-11-25 Thread Gautam Saxena
We need to install the OS on the 3TB hard disks that come with our Dell servers. (After many attempts, I've discovered that Dell servers won't allow attaching an external hard disk via the PCIe slot; I've tried everything.) But must I therefore sacrifice two hard disks (RAID-1) for the OS? I do…
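If sacrificing whole disks is the concern, one common compromise is to mirror only small OS partitions and leave the bulk of each disk for OSD use; a sketch with mdadm (device names are placeholders):

    # RAID-1 the small OS partitions sda1/sdb1; the remaining
    # partitions on each disk stay available as OSD data disks:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda1 /dev/sdb1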

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-25 Thread Gautam Saxena
…Unless you're giving blood.” (signature: Sébastien Han, sebastien@enovance.com, www.enovance.com, Twitter: @enovance) On 14 Nov 2013, at 17:08, Gautam Saxena wro…

Re: [ceph-users] ceph-deploy problems on CentOS-6.4

2013-11-22 Thread Gautam Saxena
I'm also getting similar problems, although in my installation, even though there are errors, it seems to finish. (I'm using CentOS 6.4 and the Emperor release, and I added the "defaults http and https" entries to the sudoers file for the ia1 node, though I didn't do so for the ia2 and ia3 nodes.) So is eve…
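One plausible reading of the "defaults http and https" sudoers change mentioned above is preserving proxy variables through sudo, which in sudoers syntax would look like this (an assumption, not confirmed by the thread):

    # Keep proxy settings when ceph-deploy invokes sudo:
    Defaults env_keep += "http_proxy https_proxy"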

Re: [ceph-users] misc performance tuning queries (related to OpenStack in particular)

2013-11-19 Thread Gautam Saxena
…or allocation, and ethernet bonding would be. Sent from my iPad. On Nov 19, 2013, at 8:12 PM, Gautam Saxena wrote: 1a) The Ceph documentation on OpenStack integration makes a big (and valuable) point that cloning images should be instantaneous/quick du…

[ceph-users] misc performance tuning queries (related to OpenStack in particular)

2013-11-19 Thread Gautam Saxena
…t comes with ceph-deploy, and that each server typically has 6 to 8 disks.) So a 1 TB VM, for example, would be split 24/68 on server 1; 16/68 on server 2; 12/68 on server 3; 4/68 on server 4; and 4/68 on servers 5 and 6? …
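The proportional split being described follows directly from CRUSH weights, which can be inspected per host and per OSD:

    # Show the CRUSH hierarchy and the weight of every host/OSD,
    # which determines each server's share of the data:
    ceph osd tree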

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-19 Thread Gautam Saxena
…HA into NFSCEPH yet; it should be doable by DRBD-ing the NFS data directory, or any other techniques that people use for redundant NFS servers. - WP. On Fri, Nov 15, 2013 at 10:26 PM, Gautam Saxena wrote: Yip, I went to the link. Where can th…

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-15 Thread Gautam Saxena
…13 at 1:57 AM, YIP Wai Peng wrote: On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena wrote: 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/). We are now running this - basically an intermediate/gateway node that …
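For readers landing here, a minimal sketch of the intermediate/gateway pattern being described (pool, image, and export path are illustrative, not from the thread):

    # On the gateway node: map an RBD image, put a filesystem
    # on it, and export it over NFS.
    rbd map rbd/nfsdata
    mkfs.xfs /dev/rbd/rbd/nfsdata
    mkdir -p /srv/nfs
    mount /dev/rbd/rbd/nfsdata /srv/nfs
    echo "/srv/nfs *(rw,no_root_squash)" >> /etc/exports
    exportfs -ra

As the follow-up in this thread notes, HA then becomes the gateway node's problem (e.g. DRBD or other standard redundant-NFS techniques).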

[ceph-users] alternative approaches to CEPH-FS

2013-11-14 Thread Gautam Saxena
I've recently accepted the fact that CEPH-FS is not stable enough for production, based on 1) recent discussions this week with Inktank engineers, 2) the discovery that the documentation now explicitly states this all over the place (http://eu.ceph.com/docs/wip-3060/cephfs/), and 3) a reading of the recent bug…

[ceph-users] deployment architecture practices / new ideas?

2013-11-06 Thread Gautam Saxena
…able. The command ‘ceph mds set allow_snaps’ will enable them." So, should I assume that we can't do incremental file-system snapshots in a stable fashion until further notice? -Sidharta …
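The flag name appears to have varied across releases; in Emperor-era clusters the enabling command was reportedly the following (an assumption, not verified against this thread, and the flag's deliberately scary confirmation switch underlines the stability caveat being quoted):

    # Snapshots are disabled by default because they are not
    # considered stable; enable at your own risk:
    ceph mds set allow_new_snaps true --yes-i-really-mean-it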