Tried Ceph on 3 KVM instances, each with a 40G root drive and 6
virtio disks of 4G each. When I look at available space, instead of
some number less than 72G, I get 689G total and 154G used. The journal is
in a folder on the root drive. The images were made with virt-builder
using ubuntu-14.04 and virs
ceph 0.87
On Mon, Feb 2, 2015 at 7:53 PM, pixelfairy wrote:
> tried ceph on 3 kvm instances, each with a root 40G drive, and 6
> virtio disks of 4G each. when i look at available space, instead of
> some number less than 72G, i get 689G, and 154G used. the journal is
> in a folder
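(A quick sanity check on where the inflated total comes from, in case it helps; a
sketch, assuming the OSD data dirs ended up as plain directories under
/var/lib/ceph/osd on the root filesystem rather than on the 4G virtio disks, in
which case every OSD reports the whole 40G root drive and 18 x 40G lands in the
neighborhood of the 689G above:)

ceph df                        # totals and per-pool usage as ceph sees them
ceph osd tree                  # confirm all 18 OSDs are up and in
df -h /var/lib/ceph/osd/*      # which filesystem each OSD data dir actually sits on;
                               # if these all show the root fs, that's the discrepancy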
persistent
might imply --live, but that's not clarified in the help, so putting
both makes it more likely to keep working across minor version changes)
virsh attach-disk $instance $disk vd$d --subdriver qcow2 --live --persistent
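For example, attaching six qcow2 data disks in one go might look like this (a
sketch; the instance name and image paths are made up, and the loop just walks
the device letter from vdb onward):

instance=ceph1
d=b
for disk in /var/lib/libvirt/images/${instance}-osd-{1..6}.qcow2; do
  virsh attach-disk $instance $disk vd$d --subdriver qcow2 --live --persistent
  d=$(echo $d | tr 'a-y' 'b-z')   # advance to the next device letter
done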
On Mon, Feb 2, 2015 at 8:05 PM, pixelfairy wrote:
> ceph 0.87
>
> On Mon, Feb
Congrats!
Page 17: Xen is spelled with an X, not a Z.
On Fri, Feb 6, 2015 at 1:17 AM, Karan Singh wrote:
> Hello Community Members
>
> I am happy to introduce the first book on Ceph with the title “Learning
> Ceph”.
>
> Me and many folks from the publishing house together with technical
> reviewer
Here's the output of 'ceph -s' from a KVM instance running as a Ceph node.
All 3 nodes are monitors, each with 6 4-gig OSDs.
mon_osd_full_ratio: .611
mon_osd_nearfull_ratio: .60
What's the 23689MB used? Is that a buffer because of mon_osd_full_ratio?
Is there a way to query a pool for how much usable space
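(A few ways to poke at those numbers, for reference; a sketch, the monitor name
is a placeholder and the exact output varies between releases:)

ceph df detail                    # raw totals plus per-pool usage; newer releases also show MAX AVAIL
rados df                          # per-pool objects and space used
ceph daemon mon.ceph1 config show | grep mon_osd_.*full   # the full/nearfull ratios actually in effect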
3 nodes, each with 2x1TB in RAID (for /) and 6x4TB for storage. All
of this will be used for block devices for KVM instances. Typical
office stuff: databases, file servers, internal web servers, a couple
dozen thin clients. Not using the object store or CephFS.
I was thinking about putting the j
Is there any reliability trade-off with erasure coding vs a replica size of 3?
How would you get the most out of 6x4TB OSDs in 3 nodes?
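(For what it's worth: with only 3 hosts as failure domains, about the only
erasure-code layout that fits is k=2, m=1, which survives one host failure
versus two for a size=3 replicated pool; and on releases of that era RBD needs
a replicated cache tier in front of an EC pool. A sketch of creating both kinds
of pool; the names and PG counts are made up:)

ceph osd erasure-code-profile set ec21 k=2 m=1 ruleset-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec21      # 2 data + 1 coding chunk per object
ceph osd pool create rpool 128 128 replicated
ceph osd pool set rpool size 3                        # 3 copies, one per host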
> iling for smalls
> ceph clusters.
>
> Cheers.
> Eneko
>
>
> On 06/02/15 16:48, pixelfairy wrote:
>>
>> 3 nodes, each with 2x1TB in a raid (for /) and 6x4TB for storage. all
>> of this will be used for block devices for kvm instances. typical
>> office
I'm stuck with these servers with Dell PERC 710P RAID cards. 8 bays;
looking at a pair of 256-gig SSDs in RAID 1 for / and journals, the
rest as 4TB SAS drives we already have.
Since that card refuses JBOD, we made them all single-disk RAID0, then
pulled one as a test. Putting it back, its state is "foreig
> ence; and H810),
> but I haven't started investigating failure scenarios yet...
> Don Doerner
> Technical Director, Advanced Projects
> Quantum Corporation
I believe combining mon+osd, up to whatever magic number of monitors you
want, is common in small(ish) clusters. I also have a 3-node Ceph cluster
at home doing mon+osd, but not client; only RBD is served to the VM hosts.
No problems even with my abuses (yanking disks out, shutting down nodes, etc.).
Is there a way to see how much data is allocated as opposed to just
what is used? For example, this 20-gig image is only taking up 8 gigs.
I'd like to see a df with the full allocation of images.
root@ceph1:~# rbd --image vm-101-disk-1 info
rbd image 'vm-101-disk-1':
size 20480 MB in 5120 object
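(Two commands that together give both numbers; a sketch, assuming the image
above lives in a pool called 'rbd': rbd ls -l lists the provisioned size of
every image, and summing the extents reported by rbd diff gives what has
actually been written.)

rbd ls -l rbd                                   # provisioned SIZE per image in pool 'rbd'
rbd diff rbd/vm-101-disk-1 \
  | awk '{ sum += $2 } END { print sum/1024/1024 " MB actually used" }'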
Have a small test cluster (VMware Fusion, 3 mon+osd nodes), all running Ubuntu
Trusty. Tried rebooting all 3 nodes and this happened.
root@ubuntu:~# ceph --version
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
root@ubuntu:~# ceph health
2015-07-29 02:08:31.360516 7f5bd711a700 -1 asok(0
Disregard. I did this on a cluster of test VMs and didn't bother setting
different hostnames, thus confusing Ceph.
On Wed, Jul 29, 2015 at 2:24 AM pixelfairy wrote:
> have a small test cluster (vmware fusion, 3 mon+osd nodes) all run ubuntu
> trusty. tried rebooting all 3 nodes and this h
Client: Debian Wheezy, server: Ubuntu Trusty, both running Ceph 0.94.2.
rbd-fuse seems to mount, but I can't access it: ls on the mount point says
"Transport endpoint is not connected".
On the Ceph server (a virtual machine, as it's a test cluster):
root@c3:/etc/ceph# ceph -s
cluster 35ef5
=0xf7fb max_readahead=0x0002 Error connecting to
cluster: No such file or directory
On Wed, Jul 29, 2015 at 5:33 AM Ilya Dryomov wrote:
> On Wed, Jul 29, 2015 at 2:52 PM, pixelfairy wrote:
> > client debian wheezy, server ubuntu trusty. both running ceph 0.94.2
> >
> > r
RBD is already thin provisioned. When you set its size, you're setting the
maximum size. It's explained here:
http://ceph.com/docs/master/rbd/rados-rbd-cmds/
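A quick way to see it in action (a sketch; the pool and image name are made up):

rbd create rbd/thin-test --size 20480    # 20G maximum size; nothing is allocated yet
rbd info rbd/thin-test                   # reports the full 20480 MB
rbd diff rbd/thin-test                   # lists no extents until data is actually written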
On Thu, Jul 30, 2015 at 12:04 PM Robert LeBlanc
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> I'll take a stab at this.
>
>
Also, you probably want to reclaim unused space when you delete files:
http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
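For libvirt-managed guests, that boils down to exposing the RBD disk on a bus
that passes discard through and then trimming inside the guest; a sketch,
assuming virtio-scsi and a filesystem that supports fstrim:

# in the domain XML, on the rbd-backed disk (virtio-scsi bus):
#   <driver name='qemu' type='raw' discard='unmap'/>
# then, inside the guest, periodically:
fstrim -v /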
On Fri, Jul 31, 2015 at 3:54 AM pixelfairy wrote:
> rbd is already thin provisioned. when you set its size, your setting the
> maximum size. its explaine
According to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
you have two choices:
format 1: you can mount it with the rbd kernel module
format 2: you can clone it
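For reference, the format 2 route from that page looks like this (a sketch;
pool, image, and snapshot names are made up):

rbd snap create rbd/base-image@gold        # snapshot the parent
rbd snap protect rbd/base-image@gold       # clones require a protected snapshot
rbd clone rbd/base-image@gold rbd/vm-102-disk-1
rbd flatten rbd/vm-102-disk-1              # optional: detach the clone from its parent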
Just mapped and mounted this image:
rbd image 'vm-101-disk-2':
    size 5120 MB in 1280 objects
    order 22 (4096 kB objects)
    block_name_pre
Thanks to this I'm adding regular bandwidth tests. Is there, or should there
be, a best-practices doc on ceph.com?
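Something along these lines would cover both the raw network and the cluster
itself (a sketch; host and pool names are placeholders):

iperf -s                                   # on one node
iperf -c ceph2 -t 30                       # from another node: raw link throughput
rados bench -p rbd 30 write --no-cleanup   # cluster-level write bandwidth
rados bench -p rbd 30 seq                  # read back the objects the write pass left
rados -p rbd cleanup                       # remove the benchmark objects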
On Sat, Aug 1, 2015 at 2:16 PM Josef Johansson wrote:
> Hi,
>
> I did a "big-ping" test to verify the network after last major network
> problem. If anyone wants to take a peek I coul
I'd like to look at a read-only copy of running virtual machines for
compliance and potentially malware checks that the VMs are unaware of.
The first note on http://ceph.com/docs/master/rbd/rbd-snapshot/ warns that
the filesystem has to be in a consistent state. Does that just mean you
might get a
Test cluster running on VMware Fusion. All 3 nodes are both monitor and
OSD, and are running openntpd.
$ ansible ceph1 -a "ceph -s"
ceph1 | SUCCESS | rc=0 >>
cluster d7d2a02c-915f-4725-8d8d-8d42fcd87242
health HEALTH_WARN
clock skew detected on mon.ceph2, mon.ceph3
M
We have a small cluster: 3 mons, each of which also has 6 4TB OSDs, and a
20-gig link to the cluster (2x10-gig LACP to a stacked pair of switches).
We'll have at least one replicated pool (size=3) and one erasure-coded pool.
The current plan is to have the journals coexist with the OSDs, as that seems
to be the safest and m
Looks like we'll rebuild the cluster when BlueStore is released anyway.
Thanks!
On Tue, Jun 14, 2016 at 7:02 PM Christian Balzer wrote:
>
> Hello,
>
> On Wed, 15 Jun 2016 00:22:51 +0000 pixelfairy wrote:
>
> > We have a small cluster, 3mons, each which also have 6 4tb osds,
Installed Mimic on an empty cluster. Yanked out an OSD about half an hour ago
and it's still showing as in with ceph -s, ceph osd stat, and ceph osd tree.
Is the timeout that long?
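(For reference, a couple of ways to check and to force it by hand; a sketch,
the OSD id is made up, and this assumes the usual default where a down OSD is
only auto-marked out after mon_osd_down_out_interval:)

ceph osd tree                                   # the yanked OSD should at least show as down
ceph config get mon mon_osd_down_out_interval   # default 600 seconds before down -> out
ceph osd out 12                                 # or mark the yanked OSD out by hand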
Hosts run Ubuntu 16.04. Ceph was installed using ceph-ansible branch stable-3.1.
The playbook didn't make the default rbd pool.
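(Creating it by hand is quick if anything still expects a pool named rbd; a
sketch, the PG count is a guess for a small cluster:)

ceph osd pool create rbd 128
rbd pool init rbd        # newer releases want pools tagged with their application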
> e than 75% of the OSDs are in by default.
>
> Paul
>
> > On 24.06.2018 at 23:04, pixelfairy wrote:
> >
> > installed mimic on an empty cluster. yanked out an osd about 1/2hr ago
> and its still showing as in with ceph -s, ceph osd stat, and ceph osd tree.
Even pulling a few more out didn't show up in osd tree. Had to actually try
to use them; ceph tell osd.N bench works.
On Sun, Jun 24, 2018 at 2:23 PM pixelfairy wrote:
> 15, 5 in each node. 14 currently in.
>
> is there another way to know if theres a problem with one? or to make the