Re: [ceph-users] Ceph vs Hardware RAID: No battery backed cache

2015-02-10 Thread Mark Kirkwood
On 10/02/15 20:40, Thomas Güttler wrote: Hi, does the lack of a battery backed cache in Ceph introduce any disadvantages? We use PostgreSQL and our servers have UPS. But I want to survive a power outage, although it is unlikely. But "hope is not an option ..." You can certainly make use of

[ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Having a problem with my fresh, non-healthy cluster; my cluster status summary shows this: ceph@ceph-node1:~$ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_num 128 > pgp_num 64 m

[ceph-users] stuck with dell perc 710p / (aka mega raid 2208?)

2015-02-10 Thread pixelfairy
I'm stuck with these servers with Dell PERC 710P RAID cards. 8 bays; looking at a pair of 256 GB SSDs in RAID 1 for / and journals, the rest as the 4 TB SAS drives we already have. Since that card refuses JBOD, we made them all single-disk RAID0, then pulled one as a test. Putting it back, its state is "foreig

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
> On Feb 10, 2015, at 12:37 PM, B L wrote: > > Hi Vickie, > > Thanks for your reply! > > You can find the dump in this link: > > https://gist.github.com/anonymous/706d4a1ec81c93fd1eca > > > Thanks! > B. > > >> On Feb 10, 2015, at 12

Re: [ceph-users] ISCSI LIO hang after 2-3 days of working

2015-02-10 Thread Nick Fisk
Hi Mike, I can also reproduce this behaviour. If I shut down a Ceph node, the delay while Ceph works out that the OSDs are down seems to trigger similar error messages. It seems fairly reliable that if an OSD is down for more than 10 seconds, LIO will have this problem. Below is an exc

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Here is the updated direct copy/paste dump ceph@ceph-node1:~$ ceph osd dump epoch 25 fsid 17bea68b-1634-4cd1-8b2a-00a60ef4761d created 2015-02-08 16:59:07.050875 modified 2015-02-09 22:35:33.191218 flags pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi Beanos: So you have 3 OSD servers and each of them has 2 disks. I have a question: what is the result of "ceph osd tree"? It looks like the OSD status is "down". Best wishes, Vickie 2015-02-10 19:00 GMT+08:00 B L : > Here is the updated direct copy/paste dump > > ceph@ceph-node1:~$ ceph osd dump > epoc

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi Beanos: BTW, if your cluster is just for testing, you may try to reduce the replica size and min_size. "ceph osd pool set rbd size 2;ceph osd pool set data size 2;ceph osd pool set metadata size 2 " "ceph osd pool set rbd min_size 1;ceph osd pool set data min_size 1;ceph osd pool set metadata min_size 1"
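
A compact equivalent of those commands (a sketch only, assuming the default data/metadata/rbd pools named above):

    for pool in rbd data metadata; do
        ceph osd pool set $pool size 2
        ceph osd pool set $pool min_size 1
    done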

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Hi Vickie, My OSD tree looks like this: ceph@ceph-node3:/home/ubuntu$ ceph osd tree # id weight type name up/down reweight -1 0 root default -2 0 host ceph-node1 0 0 osd.0 up 1 1 0 osd.1 up

[ceph-users] Too few pgs per osd - Health_warn for EC pool

2015-02-10 Thread Mohamed Pakkeer
Hi, We have created an EC pool (k=10 and m=3) with 540 OSDs. We followed the following rule to calculate the PG count for the EC pool: Total PGs = (OSDs * 100) / pool size, where *pool size* is either the number of replicas for replicated poo
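
Plugging in the numbers from this thread (540 OSDs, and pool size = k + m = 13 for the EC pool), a back-of-the-envelope check of that formula, rounding up to a power of two as the docs suggest (a sketch, not output from the thread):

    osds=540; k=10; m=3
    pool_size=$((k + m))                        # for an EC pool, size = k + m = 13
    raw=$(( osds * 100 / pool_size ))           # 540 * 100 / 13 = 4153
    pg=1; while [ $pg -lt $raw ]; do pg=$((pg * 2)); done
    echo "suggested pg_num: $pg"                # next power of two up: 8192

Whether to round up to 8192 or down to 4096 is a judgment call that depends on how many pools share the OSDs.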

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
I will try to change the replication size now as you suggested .. but how is that related to the non-healthy cluster? > On Feb 10, 2015, at 1:22 PM, B L wrote: > > Hi Vickie, > > My OSD tree looks like this: > > ceph@ceph-node3:/home/ubuntu$ ceph osd tree > # id weight type name up/d

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
I changed the size and min_size as you suggested, while running ceph -w in a different window, and I got this: ceph@ceph-node1:~$ ceph -w cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_n

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Hello Vickie, After changing the size and min_size on all the existing pools, the cluster seems to be working and I can store objects to the cluster, but the cluster still shows non-healthy: cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs degraded; 256 pgs stuck

Re: [ceph-users] stuck with dell perc 710p / (aka mega raid 2208?)

2015-02-10 Thread Daniel Swarbrick
On 10/02/15 11:38, pixelfairy wrote: > since that card refuses jbod, we made them all single disk raid0, then > pulled one as a test. putting it back, its state is "foreign" and > there doesnt seem to be anything that can change this from omconfig, > the om web ui, or idrac web ui. is there any way

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Udo Lembke
Hi, you will get further trouble, because your weight is not correct. You need a weight >= 0.01 for each OSD. This means your OSD must be 10GB or greater! Udo On 10.02.2015 12:22, B L wrote: > Hi Vickie, > > My OSD tree looks like this: > > ceph@ceph-node3:/home/ubuntu$ ceph osd tree > # i

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Owen Synge
Hi, To add to Udo's point, do remember that by default journals take ~6GB. For this reason I suggest making virtual disks larger than 20GB for testing, although that's slightly bigger than absolutely necessary. Best regards Owen On 02/10/2015 01:26 PM, Udo Lembke wrote: > Hi, > you will get fu

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Hello Udo, Thanks for your answer .. 2 questions here: 1- Does what you say mean that I have to remove my drive devices (8GB each) and add new ones with at least 10GB? 2- Shall I manually re-weight after disk creation and preparation using this command (ceph osd reweight osd.2 1.0), or things w

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao
Hello, Your OSDs do not have weights; please assign some weight to your ceph cluster OSDs as Udo said in his last comment. osd crush reweight <name> <weight> : change <name>'s weight to <weight> in crush map. sudo ceph osd crush reweight 0.0095 osd.0 to osd.5. Regards, Vikhya

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn't represent a float osd crush reweight <name> <weight> : change <name>'s weight to <weight> in crush map Error EINVAL: invalid command What do you think? > On Feb 10, 2015, at 3:18 PM, Vikhy

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Micha Kersloot
Hi, maybe the other way around: <name> <weight> = osd.0 0.0095. Kind regards, Micha Kersloot. Stay up to date and receive the latest tips about Zimbra/KovoKs. Contact: http://twitter.com/kovoks KovoKs B.V. is registered under KvK number: 1104 > From: "B L" > To: "Vikhyat Umrao" , "Udo Lem

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Udo Lembke
Hi, use: ceph osd crush set 0 0.01 pool=default host=ceph-node1 ceph osd crush set 1 0.01 pool=default host=ceph-node1 ceph osd crush set 2 0.01 pool=default host=ceph-node3 ceph osd crush set 3 0.01 pool=default host=ceph-node3 ceph osd crush set 4 0.01 pool=default host=ceph-node2 ceph osd crush

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vikhyat Umrao
Oh, I misplaced the OSD name and weight; it should be: ceph osd crush reweight osd.0 0.0095 and so on .. Regards, Vikhyat On 02/10/2015 07:31 PM, B L wrote: Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Thanks everyone!! After applying the re-weighting command (ceph osd crush reweight osd.0 0.0095), my cluster is getting healthy now :)) But I have one question: what if I have hundreds of OSDs, shall I do the re-weighting on each device, or is there some way to make this happen automatical
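
One way to avoid running the command once per device (a sketch only; the 0.0095 weight and the osd.0-osd.5 range are taken from this thread, adjust both for a real cluster):

    for i in $(seq 0 5); do
        ceph osd crush reweight osd.$i 0.0095
    done

In practice the weight is usually derived from each disk's capacity (see Vickie's note below), and recent ceph-disk/ceph-deploy releases normally set a size-based initial weight when the OSD is created.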

Re: [ceph-users] stuck with dell perc 710p / (aka mega raid 2208?)

2015-02-10 Thread Alexandre DERUMIER
Hi, you need to import the foreign config from the OpenManage web UI, somewhere under the storage controller. BTW, I'm currently testing a new Dell R630 with a PERC H330 (LSI 3008). With this controller, it's possible to do hardware RAID for some disks and passthrough for the other disks. So, perfect for Ceph :)

[ceph-users] combined ceph roles

2015-02-10 Thread David Graham
Hello, I'm giving thought to a minimal-footprint scenario with full redundancy. I realize it isn't ideal -- and may impact overall performance -- but I'm wondering if the example below would work, be supported, or be known to cause issues. Example, 3x hosts each running: -- OSDs -- Mon -- Client I thought

[ceph-users] cannot obtain keys from the nodes : [ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph-vm01']

2015-02-10 Thread Konstantin Khatskevich
Hello! I am a novice with Ceph and I am getting desperate. My problem is that I cannot obtain keys from the nodes. I found a similar problem in the mailing list (http://www.spinics.net/lists/ceph-users/msg03843.html), but I did not succeed in solving it. There Francesc Alted writes about the fact

Re: [ceph-users] combined ceph roles

2015-02-10 Thread Lindsay Mathieson
A similar setup works well for me - 2 VM hosts, 1 mon-only node, 6 OSDs, 3 per VM host, using RBD and CephFS. The more memory on your VM hosts, the better. Lindsay Mathieson -Original Message- From: "David Graham" Sent: 11/02/2015 3:07 AM To: "ceph-us...@ceph.com" Subject: [ceph-use

Re: [ceph-users] 答复: Re: can not add osd

2015-02-10 Thread Alan Johnson
Just wondering if this was ever resolved - I am seeing the exact same issue since I moved from CentOS 6.5 firefly to CentOS 7 on the giant release: using “ceph-deploy osd prepare . . . ”, the script fails to umount and then posts a device-is-busy message. Details are below in yang bin18’s posting belo

Re: [ceph-users] stuck with dell perc 710p / (aka mega raid 2208?)

2015-02-10 Thread pixelfairy
It turns out you can do some stuff with omconfig as long as you enable "auto import" in the card's BIOS utility. You still need the web UI to turn the new disk into a usable block device. Have you been able to automate the whole recovery process? I'd like to just put the new disk in and have the system not

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi, The weight reflects the space or capacity of the disks. For example, the weight of a 100G OSD disk is 0.100 (100G/1T). Best wishes, Vickie 2015-02-10 22:25 GMT+08:00 B L : > Thanks everyone!! > > After applying the re-weighting command (*ceph osd crush reweight osd.0 > 0.0095*), my cluster is ge
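
Expressed as a one-liner (a sketch; osd.0 and the 100 GB size are placeholders, and only the ratio between OSD weights really matters):

    size_gb=100
    weight=$(awk -v g="$size_gb" 'BEGIN { printf "%.3f", g / 1000 }')   # 100G / 1T = 0.100
    ceph osd crush reweight osd.0 "$weight"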

Re: [ceph-users] ceph Performance vs PG counts

2015-02-10 Thread Sumit Gaur
Hi, I am not sure why PG numbers have not been given that much importance in the Ceph documents; I am seeing huge variation in performance numbers when changing PG counts. Just an example *without SSD*: 36 OSD HDD => PG count 2048 gives me random write (1024K bz) performance of 550 MBps *with SSD:*

[ceph-users] Update 0.80.5 to 0.80.8 --the VM's read request become too slow

2015-02-10 Thread 杨万元
Hello! We use Ceph+OpenStack in our private cloud. Recently we upgraded our CentOS 6.5 based cluster from Ceph Emperor to Ceph Firefly. At first, we used the Red Hat EPEL yum repo to upgrade; that Ceph version is 0.80.5. First upgrade the monitors, then the OSDs, and last the clients. When we completed this upgrade, we

[ceph-users] wider rados namespace support?

2015-02-10 Thread Blair Bethwaite
Just came across this in the docs: "Currently (i.e., firefly), namespaces are only useful for applications written on top of librados. Ceph clients such as block device, object storage and file system do not currently support this feature." Then found: https://wiki.ceph.com/Planning/Sideboard/rbd%

Re: [ceph-users] ceph Performance vs PG counts

2015-02-10 Thread Vikhyat Umrao
Hi, just a heads-up: I hope you are aware of this tool: http://ceph.com/pgcalc/ Regards, Vikhyat On 02/11/2015 09:11 AM, Sumit Gaur wrote: Hi , I am not sure why PG numbers have not given that much importance in the ceph documents, I am seeing huge variation in performance number by changin
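
The rule of thumb behind that calculator is roughly target PGs per OSD × number of OSDs ÷ replica count, rounded to a power of two. A rough sketch using the 36-OSD example from this thread (the calculator itself applies finer rounding and per-pool data percentages, so treat this as an approximation):

    target_per_osd=100                           # a commonly used target of ~100 PGs per OSD
    osds=36
    replicas=3
    raw=$(( target_per_osd * osds / replicas ))  # 1200
    pg=1; while [ $pg -lt $raw ]; do pg=$((pg * 2)); done
    echo "ballpark pg_num: $pg"                  # next power of two: 2048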

Re: [ceph-users] Too few pgs per osd - Health_warn for EC pool

2015-02-10 Thread Mohamed Pakkeer
Hi Greg, Do you have any idea about the health warning? Regards K.Mohamed Pakkeer On Tue, Feb 10, 2015 at 4:49 PM, Mohamed Pakkeer wrote: > Hi > > We have created EC pool ( k =10 and m =3) with 540 osds. We followed the > following rule to calculate the pgs count for the EC pool. > >