tly weighted in CRUSH by their size? If not, you
> want to apply that there and return all of the monitor override
> weights to 1.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
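A minimal sketch of what that advice looks like on the CLI (the OSD id and
size here are illustrative, not taken from this thread):

   ceph osd crush reweight osd.3 0.55   # CRUSH weight ~= capacity in TiB (e.g. a 600GB disk)
   ceph osd reweight 3 1                # clear the override weight set by reweight-by-utilization

CRUSH weights are conventionally the OSD's capacity in TiB, while "ceph osd
reweight" is the 0-1 override that reweight-by-utilization adjusts.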
>
>
> On Tue, Feb 25, 2014 at 9:19 AM, Gautam Saxena
> wrote:
ped+wait_backfill
5 active+remapped+wait_backfill+backfill_toofull
153 active+remapped
10 active+remapped+backfilling
client io 4369 kB/s rd, 64377 B/s wr, 26 op/s
==
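Two generic commands that help narrow down which PGs are stuck and why
(standard ceph CLI, nothing specific to this cluster):

   ceph health detail
   ceph pg dump_stuck unclean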
On Sun, Feb 23, 2014 at 8:09 PM, Gautam Saxena wrote:
> I h
I have 19 pgs that are stuck unclean (see the result of ceph -s below). This
occurred after I executed "ceph osd reweight-by-utilization 108" to
resolve problems with "backfill_toofull" messages, which I believe
occurred because my OSD sizes vary significantly (from a low of
600GB to a hi
If one node, which happens to have a single RAID 0 hard disk, is "slow", would
that impact the whole ceph cluster? That is, when VMs interact with the rbd
pool to read and write data, would the kvm client "wait" for that slow
hard disk/node to return the requested data, thus making that slow
hard disk
I'm trying to maximize ephemeral Windows 7 32-bit performance with CEPH's
RBD as the back-end storage engine. (I'm not worried about data loss, as these
VMs are all ephemeral, but I am worried about performance and
responsiveness of the VMs.) My questions are:
1) Are there any recommendations or bes
When booting an image from Openstack in which CEPH is the back-end for both
volumes and images, I'm noticing that it takes ~10 minutes during the
"spawning" phase -- I believe Openstack is making a full copy of the 30 GB
Windows image. Shouldn't it be a "copy-on-write" image and therefore ta
If I've installed ceph (and Openstack) with ceph authentication enabled,
but I *now* want to disable cephx authentication using the techniques
described in the ceph documentation (
http://ceph.com/docs/master/rados/operations/authentication/#disable-cephx),
do I need to reconfigure anything on Open
Downloading Packages:
ceph-0.72.1-0.el6.x86_64: failure: ceph-0.72.1-0.el6.x86_64.rpm from
ceph: [Errno 256] No more mirrors to try.
On Tue, Dec 3, 2013 at 4:07 PM, Gautam Saxena wrote:
> In trying to download the RPM packages for CEPH, the yum commands timed
> out. I then tried jus
In trying to download the RPM packages for CEPH, the yum commands timed
out. I then tried just downloading them via Chrome browser (
http://ceph.com/rpm-emperor/el6/x86_64/ceph-0.72.1-0.el6.x86_64.rpm) and it
only downloaded 64KB. (The website www.ceph.com is slow too)
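A sketch of one workaround when the mirror is slow rather than down: resume
the partial download with retries instead of restarting it, e.g.

   wget -c -t 0 http://ceph.com/rpm-emperor/el6/x86_64/ceph-0.72.1-0.el6.x86_64.rpm

(-c continues a partial file, -t 0 retries indefinitely; the resulting rpm can
then be handed to "yum localinstall", though dependencies still come from the
configured repos.)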
I've got ceph up and running on a 3-node centos 6.4 cluster. However, after
I
a) set the cluster to noout as follows: ceph osd set noout
b) rebooted 1 node
c) logged into that 1 node, I tried to do: service ceph start osd.12
but it returned with error message:
/etc/init.d/ceph: osd.12 not found (
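One thing worth checking here (a sketch, assuming the default sysvinit layout
where each OSD's data is mounted under /var/lib/ceph/osd/ceph-<id>): the init
script only knows about OSDs listed in ceph.conf or present under
/var/lib/ceph on that host, and a common cause of "not found" is the OSD's
data partition not being remounted after the reboot:

   ls /var/lib/ceph/osd/        # is there a ceph-12 directory on this host?
   mount | grep -w ceph-12      # is its data partition actually mounted?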
We need to install the OS on the 3TB hard disks that come with our Dell
servers. (After many attempts, I've discovered that Dell servers won't
allow attaching an external hard disk via the PCIe slot. (I've tried
everything).)
But, must I therefore sacrifice two hard disks (RAID-1) for the OS? I
do
I'm also getting similar problems, although in my installation it seems to
finish even though there are errors. (I'm using centos 6.4 and the emperor
release, and I added the "defaults http and https" to the sudoers file for
the ia1 node, though I didn't do so for the ia2 and ia3 nodes.) So is
eve
or
> allocation, and ethernet bonding would be.
>
> On Nov 19, 2013, at 8:12 PM, Gautam Saxena wrote:
>
> 1a) The Ceph documentation on Openstack integration makes a big (and
> valuable) point that cloning images should be instantaneous/quick du
t comes with ceph-deploy, and that
each server typically has 6 to 8 disks.) So a 1 TB vm, for example, would
be split 24/68 on server 1; 16/68 on server 2; 12/68 on server 3; 4/68 on
server 4; and 4/68 on servers 5 and 6?
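For what it's worth, the per-host weights that CRUSH actually uses for that
split can be read straight from the cluster (a generic command, nothing
specific to this setup):

   ceph osd tree    # lists every host and OSD with its CRUSH weight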
HA into NFSCEPH yet, it
> should be doable by drbd-ing the NFS data directory, or any other
> techniques that people use for redundant NFS servers.
>
> - WP
>
>
> On Fri, Nov 15, 2013 at 10:26 PM, Gautam Saxena wrote:
>
>> Yip,
>>
>> I went to the link. Where can th
13 at 1:57 AM, YIP Wai Peng wrote:
> On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena wrote:
>
>>
>> 1) nfs over rbd (
>> http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
>>
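(A generic sketch of what such an NFS-over-RBD gateway boils down to, not
taken verbatim from the post; image name, size and export path are
illustrative:

   rbd create nfsdata --size 102400    # 100 GB image in the default "rbd" pool
   rbd map nfsdata                     # appears as /dev/rbd0 (and /dev/rbd/rbd/nfsdata)
   mkfs.xfs /dev/rbd0
   mount /dev/rbd0 /export
   echo '/export *(rw,sync,no_root_squash)' >> /etc/exports
   exportfs -ra
)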
>
> We are now running this - basically an intermediate/gateway node that
>
I've recently accepted the fact that CEPH-FS is not stable enough for
production, based on 1) a recent discussion this week with Inktank engineers,
2) the discovery that the documentation now explicitly states this all over
the place (http://eu.ceph.com/docs/wip-3060/cephfs/), and 3) a reading of the
recent bug
able. The command ‘ceph mds set allow_snaps’ will enable
them." So, should I assume that we can't do incremental file-system
snapshots in a stable fashion until further notice?
-Sidharta