[ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread Andrei Mikhailovsky
Hello guys, Was wondering if anyone has tried using the Crucial MX100 SSDs either for OSD journals or a cache pool? It seems like a good cost-effective alternative to the more expensive drives, and read/write performance is very good as well. Thanks -- Andrei Mikhailovsky Director Arhont In

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread David
Performance seems quite low on those. I'd really step it up to Intel S3700s. Check the performance benchmarks here and compare between them: http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3 http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/3 If you're going
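
Before committing to a drive for journals, it is worth measuring its synchronous small-write performance, since the OSD journal writes with O_DSYNC and consumer SSDs often collapse under that load. A minimal sketch with fio (the device path /dev/sdX is a placeholder; this writes directly to the drive and destroys its data):

    # WARNING: raw device write -- use a scratch disk only.
    fio --name=journal-test --filename=/dev/sdX \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives with power-loss protection, such as the S3700, typically sustain far higher sync-write IOPS than consumer models like the MX100.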

Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool

2014-08-01 Thread Kenneth Waegeman
- Message from Sage Weil - Date: Thu, 31 Jul 2014 08:51:34 -0700 (PDT) From: Sage Weil Subject: Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool To: Kenneth Waegeman Cc: ceph-users Hi Kenneth, On Thu, 31 Jul 2014, Ke

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread Christian Balzer
On Fri, 1 Aug 2014 09:38:34 +0100 (BST) Andrei Mikhailovsky wrote: > Hello guys, > > Was wondering if anyone has tried using the Crucial MX100 ssds either > for osd journals or cache pool? It seems like a good cost effective > alternative to the more expensive drives and read/write performance i

Re: [ceph-users] Persistent Error on osd activation

2014-08-01 Thread debian Only
I have met the same issue when I want to use prepare. When I use --zap-disk, it is OK, but if I use prepare to define the journal device, it fails: ceph-disk-prepare --zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/sdc 2014-07-01 1:00 GMT+07:00 Iban Cabrillo : > Hi Alfredo, > During t
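
For reference, a hedged sketch of the zap-then-prepare sequence that tends to avoid this failure (device names are examples; zapping destroys all data on the disks):

    # Clear stale partition tables on both disks first.
    ceph-disk zap /dev/sdb
    ceph-disk zap /dev/sdc
    # Prepare sdb as the data disk with its journal on sdc.
    ceph-disk prepare --fs-type btrfs --cluster ceph -- /dev/sdb /dev/sdc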

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread Andrei Mikhailovsky
Thanks for your comments. Andrei -- Andrei Mikhailovsky Director Arhont Information Security Web: http://www.arhont.com http://www.wi-foo.com Tel: +44 (0)870 4431337 Fax: +44 (0)208 429 3111 PGP: Key ID - 0x2B3438DE PGP: Server - keyserver.pgp.com DISCLAIMER The information contai

Re: [ceph-users] Using Ramdisk with Ceph

2014-08-01 Thread debian Only
I am looking for a method to use a ramdisk with Ceph, just for a test environment; I do not have enough SSDs for each OSD, and I do not know how to move the OSD journal to a tmpfs or ramdisk. I hope someone can give some guidance. 2014-07-31 8:58 GMT+07:00 Christian Balzer : > > On Wed, 30 Jul 2014 18:17:16 +
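
A hedged sketch of moving an OSD journal onto a tmpfs mount for testing (OSD id 0, the mount point, and the sizes are assumptions; a journal in RAM is lost on reboot, which can corrupt the OSD, so this is strictly for throwaway test clusters):

    # Stop the OSD and flush its journal into the object store.
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal
    # Back the new journal with tmpfs.
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
    # In ceph.conf, under [osd.0]:
    #   osd journal = /mnt/ramdisk/journal-osd.0
    #   osd journal size = 1024
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0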

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 12:29 AM, German Anders wrote: > Hi Ilya, > I think you need to upgrade the kernel version of that ubuntu server, > I had a similar problem and after upgrading the kernel to 3.13 the problem was > resolved successfully. Ilya doesn't need to upgrade anything ;) Larry, if

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread German Anders
Ilya, how are you? That's cool, tell me if changing the tunable and disabling the hashpspool works for you. I've done those things but they didn't work either, so that's why I went for the kernel upgrade. Best regards Sent from my Personal Samsung GT-i8190L Original message From: Ilya

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Gregory Farnum
We appear to have solved this and then immediately re-broken it by ensuring that the userspace daemons will set a new required feature bit if there are any EC rules in the OSDMap. I was going to say there's a ticket open for it, but I can't find one... -Greg On Fri, Aug 1, 2014 at 7:22 AM, Ilya Dr

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum wrote: > We appear to have solved this and then immediately re-broken it by > ensuring that the userspace daemons will set a new required feature > bit if there are any EC rules in the OSDMap. I was going to say > there's a ticket open for it, but I c

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Christopher O'Connell
I'm having the exact same problem. I'll try solving it without upgrading the kernel. On Aug 1, 2014 4:22 AM, "Ilya Dryomov" wrote: > On Fri, Aug 1, 2014 at 12:29 AM, German Anders > wrote: > > Hi Ilya, > > I think you need to upgrade the kernel version of that ubuntu > server, > > I've a s

[ceph-users] [ANN] ceph-deploy 1.5.10 released

2014-08-01 Thread Alfredo Deza
Hi All, There is a new release of ceph-deploy, the easy deployment tool for Ceph. This release comes with a few improvements towards better usage of ceph-disk on remote nodes, with more verbosity so things are a bit more clear when they execute. The full list of fixes for this release can be fou

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Larry Liu
Ilya, Sorry for my delayed reply. It happens on a new cluster I just created. I'm just testing right out of the default rbd pool. On Aug 1, 2014, at 5:22 AM, Ilya Dryomov wrote: > On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum wrote: >> We appear to have solved this and then immediately re-b

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Larry Liu
Looking forward to your solution. > On Aug 1, 2014, at 5:28 AM, "Christopher O'Connell" > wrote: > > I'm having the exact same problem. I'll try solving it without upgrading the > kernel. > >> On Aug 1, 2014 4:22 AM, "Ilya Dryomov" wrote: >> On Fri, Aug 1, 2014 at 12:29 AM, German Anders wro

[ceph-users] Placement groups forever in "creating" state and dont map to OSD

2014-08-01 Thread Yogesh_Devi
Dell - Internal Use - Confidential Hello Ceph experts :), I am using Ceph (version 0.56.6) on SUSE Linux. I created a simple cluster with one monitor server and two OSDs; the conf file is attached. When I start my cluster and run "ceph -s", I see the following message: $ ceph -s health HEALT
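
A hedged sketch of the usual first checks for PGs stuck in creating (commands as available in the 0.56 era):

    ceph osd tree                    # both OSDs up/in with nonzero CRUSH weight?
    ceph pg dump_stuck inactive      # which PGs are stuck, and do they map to any OSD?
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt   # inspect the rules by hand

With only two OSDs, also verify that the pool replication size and the rule's chooseleaf type (host vs. osd) can actually be satisfied by the hardware you have.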

Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool

2014-08-01 Thread Sage Weil
On Fri, 1 Aug 2014, Kenneth Waegeman wrote: > > On Thu, 31 Jul 2014, Kenneth Waegeman wrote: > > > Hi all, > > > > > > We have a erasure coded pool 'ecdata' and a replicated pool 'cache' acting > > > as > > > writeback cache upon it. > > > When running 'rados -p ecdata bench 1000 write', it starts

[ceph-users] Instrumenting RADOS with Zipkin + LTTng

2014-08-01 Thread Marios-Evaggelos Kogias
Hello all, my name is Marios Kogias and I am a student at the National Technical University of Athens. As part of my diploma thesis and my participation in Google Summer of Code 2014 (in the LTTng organization) I am working on a low-overhead tracing infrastructure for distributed systems. I am als

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Larry Liu
root@u12ceph02:~# rbd map foo --pool rbd --name client.admin -m u12ceph01 -k /etc/ceph/ceph.client.admin.keyring rbd: add failed: (5) Input/output error dmesg shows these right away after the IO error: [ 461.010895] libceph: mon0 10.190.10.13:6789 feature set mismatch, my 4a042aca < server's 2104

[ceph-users] Some questions of radosgw

2014-08-01 Thread Osier Yang
Hi, list, I managed to set up radosgw in a testing environment over these past several days to see if it's stable/mature enough for production use. In the meanwhile, I tried to read the source code of radosgw to understand how it actually manages the underlying storage. The testing result shows that the w
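
One way to see how the gateway lays data out, without reading the source, is to inspect the RADOS pools it writes into. A hedged sketch (the bucket name mybucket is a placeholder; pool names assume the default pre-Hammer layout):

    rados lspools                                  # gateway pools (.rgw, .rgw.buckets, ...)
    radosgw-admin bucket list                      # buckets known to the gateway
    radosgw-admin bucket stats --bucket=mybucket   # object counts and placement
    rados ls -p .rgw.buckets | head                # raw RADOS objects backing bucket data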

Re: [ceph-users] Some questions of radosgw

2014-08-01 Thread Osier Yang
[ correct the URL ] On 2014-08-02 00:42, Osier Yang wrote: Hi, list, I managed to set up radosgw in a testing environment to see if it's stable/mature enough for production use these past several days. In the meanwhile, I tried to read the source code of radosgw to understand how it actually manage

[ceph-users] Ceph writes stall for long perioids with no disk/network activity

2014-08-01 Thread Mariusz Gronczewski
Hi, when I am running rados bench -p benchmark 300 write --run-name bench --no-cleanup I get weird stalling during writes: sometimes I get the same write speed for a few minutes, and after some time it starts stalling at 0 MB/s for minutes. My configuration: ceph 0.80.5 pool 0 'data' replicate
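
When a bench stalls like this, the first thing to look for is blocked requests and which OSDs they are sitting on. A hedged sketch (osd.12 and the admin-socket path are placeholders):

    ceph health detail        # lists slow/blocked requests and the OSDs involved
    ceph -w                   # watch cluster events live while the bench runs
    # On the suspect OSD's host, dump its in-flight ops via the admin socket:
    ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok dump_ops_in_flight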

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 8:34 PM, Larry Liu wrote: > oot@u12ceph02:~# rbd map foo --pool rbd --name client.admin -m u12ceph01 -k > /etc/ceph/ceph.client.admin.keyring > rbd: add failed: (5) Input/output error > dmesg shows these right away after the IO error: > [ 461.010895] libceph: mon0 10.190.1

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 4:22 PM, Ilya Dryomov wrote: > On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum wrote: >> We appear to have solved this and then immediately re-broken it by >> ensuring that the userspace daemons will set a new required feature >> bit if there are any EC rules in the OSDMap.

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 10:06 PM, Ilya Dryomov wrote: > On Fri, Aug 1, 2014 at 4:22 PM, Ilya Dryomov wrote: >> On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum wrote: >>> We appear to have solved this and then immediately re-broken it by >>> ensuring that the userspace daemons will set a new requir

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Sage Weil
On Fri, 1 Aug 2014, Ilya Dryomov wrote: > On Fri, Aug 1, 2014 at 10:06 PM, Ilya Dryomov > wrote: > > On Fri, Aug 1, 2014 at 4:22 PM, Ilya Dryomov > > wrote: > >> On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum wrote: > >>> We appear to have solved this and then immediately re-broken it by > >>>

[ceph-users] Ceph runs great then falters

2014-08-01 Thread Chris Kitzmiller
I have 3 nodes each running a MON and 30 OSDs. When I test my cluster with either rados bench or with fio via a 10GbE client using RBD I get great initial speeds >900MBps and I max out my 10GbE links for a while. Then something goes wrong: the performance falters and the cluster stops responding

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Larry Liu
crushmap file is attached. I'm running kernel 3.13.0-29-generic, as another person suggested, but the kernel upgrade didn't fix anything for me. Thanks! crush Description: Binary data On Aug 1, 2014, at 10:38 AM, Ilya Dryomov wrote: > On Fri, Aug 1, 2014 at 8:34 PM, Larry Liu wrote: >> root

[ceph-users] Free LinuxCon/CloudOpen Pass

2014-08-01 Thread Patrick McGarry
Hey cephers, Now that OSCON is in our rearview mirror we have started looking to LinuxCon/CloudOpen, which is looming just over two weeks away. If you haven't arranged tickets yet, and would like to go, let us know! We have an extra ticket (maybe two) and we'd love to have you attend and hang ou

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 10:32 PM, Larry Liu wrote: > cruhmap file is attached. I'm running kernel 3.13.0-29-generic after another > person suggested. But the kernel upgrade didn't fix anything for me. Thanks! So there are two problems. First, you either have erasure pools or had them in the past
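
The usual cycle for cleaning leftover erasure rules out of the CRUSH map, sketched under the assumption that no pool still uses them; the tunables step is only needed for older kernel clients:

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    # Edit /tmp/crushmap.txt to delete the leftover erasure rules, then:
    crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
    ceph osd setcrushmap -i /tmp/crushmap.new
    # If the client kernel is old, legacy tunables may also be required:
    ceph osd crush tunables legacy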

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Larry Liu
Hi Ilya, thank you so much! I didn't know my crush map was all messed up. Now all is working! I guess it would have worked even without upgrading the kernel from 3.2 to 3.13. On Aug 1, 2014, at 12:48 PM, Ilya Dryomov wrote: > On Fri, Aug 1, 2014 at 10:32 PM, Larry Liu wrote: >> cruhmap

[ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Bruce McFarland
Hello, I've run out of ideas and assume I've overlooked something very basic. I've created 2 ceph clusters in the last 2 weeks with different OSD HW and private network fabrics - 1GE and 10GE. I have never been able to get the OSDs to come up to the 'active+clean' state. I have followed your on
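
A common reason a fresh Firefly cluster never reaches active+clean is that the default pool size of 3 cannot be satisfied by the hosts available. A hedged sketch of the checks and the small-cluster workaround (the pool name rbd is an example):

    ceph osd tree                            # how many hosts/OSDs CRUSH can choose from
    ceph osd dump | grep 'replicated size'   # per-pool replication settings
    # For a small test cluster, shrink replication so PGs can go clean:
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1

If all OSDs share one host, the default rule's chooseleaf host step also needs changing to osd.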

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Christopher O'Connell
So I've been having a seemingly similar problem and while trying to follow the steps in this thread, things have gone very south for me. Kernel on OSDs and MONs: 2.6.32-431.20.3.0.1.el6.centos.plus.x86_64 #1 SMP Wed Jul 16 21:27:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Kernel on RBD host: 3.2.0

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Brian Rak
Why do you have an MDS active? I'd suggest getting rid of that at least until you have everything else working. I see you've set nodown on the OSDs; did you have problems with the OSDs flapping? Do the OSDs have broken connectivity between themselves? Do you have some kind of firewall interf

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD & CephFS

2014-08-01 Thread Christopher O'Connell
One additional note, I've got a fair amount of data on the rbd volume, which I need to recover in one way or another. On Fri, Aug 1, 2014 at 2:41 PM, Christopher O'Connell wrote: > So I've been having a seemingly similar problem and while trying to follow > the steps in this thread, things have

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Bruce McFarland
MDS: I assumed that I'd need to bring up a ceph-mds for my cluster at initial bringup. We also intended to modify the CRUSH map so that its pool is resident on SSD(s). It is one of the areas where the online docs don't seem to have a lot of info, and I haven't spent a lot of time researc

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Brian Rak
What happens if you remove nodown? I'd be interested to see what OSDs it thinks are down. My next thought would be tcpdump on the private interface. See if the OSDs are actually managing to connect to each other. For comparison, when I bring up a cluster of 3 OSDs it goes to HEALTH_OK nearly
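
A hedged sketch of that check (the interface name eth1 is an assumption; OSDs listen on ports from 6800 up):

    # On an OSD host, watch the cluster-network interface for OSD traffic:
    tcpdump -i eth1 -nn 'tcp portrange 6800-7100'
    # And clear the flag so genuinely dead OSDs get marked down again:
    ceph osd unset nodown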

Re: [ceph-users] Ceph runs great then falters

2014-08-01 Thread Christian Balzer
Hello, On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote: > I have 3 nodes each running a MON and 30 OSDs. Given the HW you list below, that might be a tall order, particularly CPU-wise in certain situations. What is your OS running off, HDDs or SSDs? The leveldbs, for the MONs in parti