[ceph-users] ceph reports 10x actual available space

2015-02-02 Thread pixelfairy
tried ceph on 3 kvm instances, each with a root 40G drive, and 6 virtio disks of 4G each. when i look at available space, instead of some number less than 72G, i get 689G, and 154G used. the journal is in a folder on the root drive. the images were made with virt-builder using ubuntu-14.04 and virs
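
The totals reported by ceph come from summing the filesystem statistics behind every OSD data directory, which would explain inflated numbers if an OSD directory is not actually sitting on its 4G virtio disk. A quick way to check (a sketch; paths assume the default /var/lib/ceph layout):

    # what ceph believes vs. what actually backs each OSD data dir
    ceph df
    df -h /var/lib/ceph/osd/ceph-*
    # each OSD directory should be a mount point for its own 4G virtio disk
    mount | grep /var/lib/ceph/osd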

Re: [ceph-users] ceph reports 10x actual available space

2015-02-02 Thread pixelfairy
ceph 0.87 On Mon, Feb 2, 2015 at 7:53 PM, pixelfairy wrote: > tried ceph on 3 kvm instances, each with a root 40G drive, and 6 > virtio disks of 4G each. when i look at available space, instead of > some number less than 72G, i get 689G, and 154G used. the journal is > in a folder

Re: [ceph-users] ceph reports 10x actual available space

2015-02-03 Thread pixelfairy
persistent might imply --live, but thats not clarified in the help, so putting both will more likely last through little version changes) virsh attach-disk $instance $disk vd$d --subdriver qcow2 --live --persistent On Mon, Feb 2, 2015 at 8:05 PM, pixelfairy wrote: > ceph 0.87 > > On Mon, Feb
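
A sketch of the attach loop implied by that command; the qcow2 paths and the target names vdb..vdg are assumptions:

    # attach six qcow2 disks; --live applies immediately, --persistent
    # keeps the attachment across domain restarts
    for d in b c d e f g; do
        virsh attach-disk "$instance" \
            "/var/lib/libvirt/images/${instance}-vd${d}.qcow2" "vd${d}" \
            --subdriver qcow2 --live --persistent
    done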

Re: [ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-06 Thread pixelfairy
congrats! page 17, xen is spelled with an X, not Z. On Fri, Feb 6, 2015 at 1:17 AM, Karan Singh wrote: > Hello Community Members > > I am happy to introduce the first book on Ceph with the title “Learning > Ceph”. > > Me and many folks from the publishing house together with technical > reviewer

[ceph-users] parsing ceph -s and how much free space, really?

2015-02-06 Thread pixelfairy
heres output of 'ceph -s' from a kvm instance running as a ceph node. all 3 nodes are monitors, each with 6 4gig osds. mon_osd_full_ratio: .611 mon_osd_nearfull_ratio: .60 whats 23689MB used? is that a buffer because of mon_osd_full_ratio? is there a way to query a pool for how much usable space
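
One way to answer both questions (a sketch, assuming shell access to a monitor host named ceph1 and the default admin socket): usable space for a replicated pool is roughly raw free space divided by the pool's size, minus the headroom the full ratio reserves.

    # raw vs. per-pool usage
    ceph df detail
    # the thresholds the monitors are actually enforcing
    ceph daemon mon.ceph1 config get mon_osd_full_ratio
    ceph daemon mon.ceph1 config get mon_osd_nearfull_ratio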

[ceph-users] journal placement for small office?

2015-02-06 Thread pixelfairy
3 nodes, each with 2x1TB in a raid (for /) and 6x4TB for storage. all of this will be used for block devices for kvm instances. typical office stuff. databases, file servers, internal web servers, a couple dozen thin clients. not using the object store or cephfs. i was thinking about putting the j
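
For FileStore of that era the usual choice was a journal partition on the same data disk versus a journal on a faster device; ceph-disk handled both. A sketch with assumed device names:

    # journal carved out of the same 4TB data disk
    ceph-disk prepare /dev/sdc
    # or journal on a separate (ideally faster) partition
    ceph-disk prepare /dev/sdc /dev/sdb1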

[ceph-users] replica or erasure coding for small office?

2015-02-06 Thread pixelfairy
is there any reliability trade off with erasure coding vs a replica size of 3? how would you get the most out of 6x4TB osds in 3 nodes?
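
Roughly: with only three hosts an erasure-code profile can use at most k+m=3 chunks if each chunk must land on a different host, so k=2,m=1 survives one host failure where size=3 survives two; and at that time RBD could not sit directly on an erasure-coded pool without a replicated cache tier in front. A sketch of the two pool types (pg counts and pool names are placeholders):

    # replicated, 3 copies
    ceph osd pool create rbd-repl 128 128 replicated
    ceph osd pool set rbd-repl size 3
    # erasure coded, k=2 m=1, one chunk per host
    # (newer releases spell the option crush-failure-domain)
    ceph osd erasure-code-profile set ec21 k=2 m=1 ruleset-failure-domain=host
    ceph osd pool create rbd-ec 128 128 erasure ec21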

Re: [ceph-users] journal placement for small office?

2015-02-09 Thread pixelfairy
iling for smalls > ceph clusters. > > Cheers. > Eneko > > > On 06/02/15 16:48, pixelfairy wrote: >> >> 3 nodes, each with 2x1TB in a raid (for /) and 6x4TB for storage. all >> of this will be used for block devices for kvm instances. typical >> office

[ceph-users] stuck with dell perc 710p / (aka mega raid 2208?)

2015-02-10 Thread pixelfairy
Im stuck with these servers with dell perc 710p raid cards. 8 bays, looking at a pair of 256gig ssds in raid 1 for / and journals, the rest as 4tb sas we already have. since that card refuses jbod, we made them all single disk raid0, then pulled one as a test. putting it back, its state is "foreig
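
A sketch of dealing with the foreign state using the LSI tools (the H710P is a rebadged LSI 2208; the binary may be MegaCli, MegaCli64 or perccli on Dell builds, so treat the exact flags as a starting point rather than gospel):

    # is the re-inserted drive carrying a foreign config?
    MegaCli -CfgForeign -Scan -aALL
    # import it, or use -Clear to throw it away and recreate the RAID0 VD
    MegaCli -CfgForeign -Import -a0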

Re: [ceph-users] stuck with dell perc 710p / (aka mega raid 2208?)

2015-02-10 Thread pixelfairy
ence; and H810), > but I haven't started investigating failure scenarios yet...

Re: [ceph-users] combined ceph roles

2015-02-11 Thread pixelfairy
i believe combining mon+osd, up to whatever magic number of monitors you want, is common in small(ish) clusters. i also have a 3 node ceph cluster at home and doing mon+osd, but not client. only rbd served to the vm hosts. no problem even with my abuses (yanking disks out, shutting down nodes etc)

[ceph-users] ceph df full allocation

2015-02-27 Thread pixelfairy
is there a way to see how much data is allocated as opposed to just what was used? for example, this 20gig image is only taking up 8gigs. id like to see a df with the full allocation of images. root@ceph1:~# rbd --image vm-101-disk-1 info rbd image 'vm-101-disk-1': size 20480 MB in 5120 object
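
A commonly used approximation from that era: sum the extents that rbd diff reports for the image (purpose-built tools for this came in later releases).

    # provisioned size
    rbd info vm-101-disk-1
    # bytes actually allocated
    rbd diff vm-101-disk-1 | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'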

[ceph-users] small cluster reboot fail

2015-07-29 Thread pixelfairy
have a small test cluster (vmware fusion, 3 mon+osd nodes) all run ubuntu trusty. tried rebooting all 3 nodes and this happened. root@ubuntu:~# ceph --version ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3) root@ubuntu:~# ceph health 2015-07-29 02:08:31.360516 7f5bd711a700 -1 asok(0

Re: [ceph-users] small cluster reboot fail

2015-07-29 Thread pixelfairy
disregard. i did this on a cluster of test vms and didnt bother setting different hostnames, thus confusing ceph. On Wed, Jul 29, 2015 at 2:24 AM pixelfairy wrote: > have a small test cluster (vmware fusion, 3 mon+osd nodes) all run ubuntu > trusty. tried rebooting all 3 nodes and this h
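
A quick sanity check for the same mistake (a sketch):

    # the monitor map should list three distinct names, one per node
    ceph mon dump
    # and each node's short hostname should match its mon entry
    hostname -s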

[ceph-users] rbd-fuse Transport endpoint is not connected

2015-07-29 Thread pixelfairy
client debian wheezy, server ubuntu trusty. both running ceph 0.94.2 rbd-fuse seems to work, but cant access, saying "Transport endpoint is not connected" when i try to ls the mount point. on the ceph server, (a virtual machine, as its a test cluster) root@c3:/etc/ceph# ceph -s cluster 35ef5

Re: [ceph-users] rbd-fuse Transport endpoint is not connected

2015-07-29 Thread pixelfairy
=0xf7fb max_readahead=0x0002 Error connecting to cluster: No such file or directory On Wed, Jul 29, 2015 at 5:33 AM Ilya Dryomov wrote: > On Wed, Jul 29, 2015 at 2:52 PM, pixelfairy wrote: > > client debian wheezy, server ubuntu trusty. both running ceph 0.94.2 > > > > r
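
"Error connecting to cluster: No such file or directory" from rbd-fuse usually just means it could not find a ceph.conf or keyring on the client. A sketch, assuming the cluster config and an admin keyring have been copied to the wheezy box:

    # point rbd-fuse at the config and pool explicitly
    rbd-fuse -c /etc/ceph/ceph.conf -p rbd /mnt/rbd
    # unmount when done
    fusermount -u /mnt/rbd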

Re: [ceph-users] Elastic-sized RBD planned?

2015-07-31 Thread pixelfairy
rbd is already thin provisioned. when you set its size, your setting the maximum size. its explained here, http://ceph.com/docs/master/rbd/rados-rbd-cmds/ On Thu, Jul 30, 2015 at 12:04 PM Robert LeBlanc wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > I'll take a stab at this. > >
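
A sketch of what that looks like in practice (pool and image names are placeholders):

    # the size given at creation is only an upper bound; objects are
    # allocated as data is written
    rbd create rbd/test-img --size 20480
    # and the bound can be raised later without recreating the image
    rbd resize rbd/test-img --size 40960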

Re: [ceph-users] Elastic-sized RBD planned?

2015-07-31 Thread pixelfairy
also, you probably want to reclaim unused space when you delete files. http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim On Fri, Jul 31, 2015 at 3:54 AM pixelfairy wrote: > rbd is already thin provisioned. when you set its size, your setting the > maximum size. its explaine
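
A sketch of the discard path for a libvirt guest; note that with QEMU of that era discard generally required virtio-scsi or IDE rather than virtio-blk, and the domain name here is a placeholder:

    # give the disk's <driver> element discard='unmap', e.g.
    #   <driver name='qemu' type='raw' discard='unmap'/>
    virsh edit vm-101
    # then, inside the guest, hand freed blocks back to the rbd image
    fstrim -v /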

[ceph-users] update docs? just mounted a format2 rbd image with client 0.80.8 server 0.87.2

2015-07-31 Thread pixelfairy
according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering, you have two choices, format 1: you can mount with rbd kernel module format 2: you can clone just mapped and mounted this image, rbd image 'vm-101-disk-2': size 5120 MB in 1280 objects order 22 (4096 kB objects) block_name_pre
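
The docs lagged here: reasonably recent kernels can map format 2 images as long as only features they understand (layering) are enabled. A sketch:

    # cloneable format-2 image
    rbd create rbd/test2 --size 5120 --image-format 2
    # maps fine with a new enough krbd; older kernels refuse it
    rbd map rbd/test2
    rbd showmapped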

Re: [ceph-users] Check networking first?

2015-08-01 Thread pixelfairy
thanks to this im adding regular bandwidth tests. is there, or should there be a best practices doc on ceph.com? On Sat, Aug 1, 2015 at 2:16 PM Josef Johansson wrote: > Hi, > > I did a "big-ping" test to verify the network after last major network > problem. If anyone wants to take a peek I coul
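
A minimal recurring bandwidth and MTU check between nodes (node names are placeholders):

    iperf3 -s                 # on one node
    iperf3 -c ceph1 -t 30     # from each of the others
    # if jumbo frames are expected end to end, verify they actually pass
    ping -M do -s 8972 ceph1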

[ceph-users] readonly snapshots of live mounted rbd?

2015-08-01 Thread pixelfairy
Id like to look at a read-only copy of running virtual machines for compliance and potentially malware checks that the VMs are unaware of. the first note on http://ceph.com/docs/master/rbd/rbd-snapshot/ warns that the filesystem has to be in a consistent state. does that just mean you might get a
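
Consistent here means the guest should be quiesced before the snapshot; otherwise the copy looks like a crashed filesystem (usually recoverable by journal replay, but not guaranteed clean). A sketch using the qemu guest agent; the domain name is a placeholder and qemu-guest-agent must be running inside the VM:

    snap="audit-$(date +%F)"
    virsh domfsfreeze vm-101
    rbd snap create rbd/vm-101-disk-1@"$snap"
    virsh domfsthaw vm-101
    # inspect the snapshot read-only from another host
    rbd map --read-only rbd/vm-101-disk-1@"$snap"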

[ceph-users] monitor clock skew warning when date/time is the same

2016-06-07 Thread pixelfairy
test cluster running on vmware fusion. all 3 nodes are both monitor and osd, and are running openntpd $ ansible ceph1 -a "ceph -s" ceph1 | SUCCESS | rc=0 >> cluster d7d2a02c-915f-4725-8d8d-8d42fcd87242 health HEALTH_WARN clock skew detected on mon.ceph2, mon.ceph3 M
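
The warning fires when monitors disagree by more than mon_clock_drift_allowed (0.05s by default), and openntpd slews slowly, so skew can persist long after the wall-clock time looks identical. A sketch for measuring the real offsets (ntpdate is assumed installed; time-sync-status exists on Jewel and later):

    # per-node offset against a reference server
    ansible all -a "ntpdate -q pool.ntp.org"
    # or ask the monitors what they measure between themselves
    ceph time-sync-status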

[ceph-users] striping for a small cluster

2016-06-14 Thread pixelfairy
We have a small cluster, 3 mons, each of which also has 6 4tb osds, and a 20gig link to the cluster (2x10gig lacp to a stacked pair of switches). well have at least a replica pool (size=3) and one erasure coded pool. current plan is to have journals coexist with osds as that seems to be the safest and m
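
For reference, a sketch of non-default striping on a single image (the numbers are illustrative, not a recommendation):

    # default is 4M objects with stripe_count=1; this stripes sequential
    # writes across 4 objects in 1M units
    rbd create rbd/striped-img --size 102400 \
        --order 22 --stripe-unit 1048576 --stripe-count 4
    rbd info rbd/striped-img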

Re: [ceph-users] striping for a small cluster

2016-06-14 Thread pixelfairy
looks like well rebuild the cluster when bluestore is released anyway. thanks! On Tue, Jun 14, 2016 at 7:02 PM Christian Balzer wrote: > > Hello, > > On Wed, 15 Jun 2016 00:22:51 +0000 pixelfairy wrote: > > > We have a small cluster, 3mons, each which also have 6 4tb osds,

[ceph-users] pulled a disk out, ceph still thinks its in

2018-06-24 Thread pixelfairy
installed mimic on an empty cluster. yanked out an osd about 1/2hr ago and its still showing as in with ceph -s, ceph osd stat, and ceph osd tree. is the timeout long? hosts run ubuntu 16.04. ceph installed using ceph-ansible branch stable-3.1 the playbook didnt make the default rbd pool.
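
The OSD daemon keeps answering heartbeats until it actually hits an I/O error on the missing disk, and even a down OSD is only marked out after mon_osd_down_out_interval (600s by default). A sketch for forcing the issue (osd.3 is a placeholder id):

    # generate I/O so the OSD trips over its missing disk
    ceph tell osd.3 bench
    # or just take it out of data placement by hand
    ceph osd out osd.3
    ceph osd tree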

Re: [ceph-users] pulled a disk out, ceph still thinks its in

2018-06-24 Thread pixelfairy
e than 75% of the OSDs are in by default. > > Paul > > > On 24.06.2018 at 23:04, pixelfairy wrote: > > > > installed mimic on an empty cluster. yanked out an osd about 1/2hr ago > and its still showing as in with ceph -s, ceph osd stat, and ceph osd tree. >

Re: [ceph-users] pulled a disk out, ceph still thinks its in

2018-06-27 Thread pixelfairy
even pulling a few more out didnt show up in osd tree. had to actually try to use them. ceph tell osd.N bench works. On Sun, Jun 24, 2018 at 2:23 PM pixelfairy wrote: > 15, 5 in each node. > > is there another way to know if theres a problem with one? or to make the