Re: [ceph-users] low power single disk nodes

2015-04-09 Thread p...@philw.com
Rather expensive option: Applied Micro X-Gene, overkill for a single disk, and only really available in a development kit format right now. Better option: Ambedded CY7 - 7 nodes in 1U half depth, 6 positions for

[ceph-users] ceph-osd failure following 0.92 -> 0.94 upgrade

2015-04-09 Thread Dirk Grunwald
Ceph cluster, U14.10 base system, OSDs using BTRFS, journal on the same disk as a partition (done using ceph-deploy). I had been running 0.92 without (significant) issue. I upgraded to Hammer (0.94) by modifying /etc/apt/sources.list, apt-get update, apt-get upgrade. Upgraded and restarted ceph-mon and
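
A minimal sketch of the upgrade path described above (the repository line and Upstart service names are assumptions based on the Ubuntu packaging of the time, not taken from the post):

    # point APT at the Hammer repository and upgrade the packages
    echo "deb http://ceph.com/debian-hammer/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get upgrade
    # restart monitors before OSDs, per the usual upgrade order
    sudo restart ceph-mon-all
    sudo restart ceph-osd-all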

Re: [ceph-users] low power single disk nodes

2015-04-09 Thread Mark Nelson
Notice that this is under their emerging technologies section. I don't think you can buy them yet. Hopefully we'll know more as time goes on. :) Mark On 04/09/2015 10:52 AM, Stillwell, Bryan wrote: These are really interesting to me, but how can you buy them? What's the performance like in

Re: [ceph-users] low power single disk nodes

2015-04-09 Thread Mark Nelson
How about drives that run Linux with an ARM processor, RAM, and an ethernet port right on the drive? Notice the Ceph logo. :) https://www.hgst.com/science-of-storage/emerging-technologies/open-ethernet-drive-architecture Mark On 04/09/2015 10:37 AM, Scott Laird wrote: Minnowboard Max? 2 ato

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Gregory Farnum
Did you enable the straw2 stuff? CRUSHV4 shouldn't be required by the cluster unless you made changes to the layout requiring it. If you did, the clients have to be upgraded to understand it. You could disable all the v4 features; that should let them connect again. -Greg On Thu, Apr 9, 2015 at 7
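
A hedged way to check whether straw2 buckets (the usual source of a CRUSH_V4 requirement) are actually in play; the decompile/edit/recompile cycle for reverting them is sketched under a later message in this thread:

    # count bucket algorithms in use -- any "straw2" entries require CRUSH_V4-capable clients
    ceph osd crush dump | grep '"alg"' | sort | uniq -c
    # show which tunables profile the map is advertising
    ceph osd crush show-tunables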

Re: [ceph-users] long blocking with writes on rbds

2015-04-09 Thread Ilya Dryomov
On Wed, Apr 8, 2015 at 7:36 PM, Lionel Bouton wrote: > On 04/08/15 18:24, Jeff Epstein wrote: >> Hi, I'm having sporadic very poor performance running ceph. Right now >> mkfs, even with nodiscard, takes 30 minutes or more. These kinds of >> delays happen often but irregularly. There seems to be no c
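
For reference, "nodiscard" here means skipping the TRIM pass at format time; a sketch of how it is typically passed (device path illustrative, not taken from the thread):

    # ext4: skip the discard pass when formatting an RBD-backed device
    mkfs.ext4 -E nodiscard /dev/rbd0
    # xfs equivalent
    mkfs.xfs -K /dev/rbd0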

[ceph-users] Motherboard recommendation?

2015-04-09 Thread Markus Goldberg
Hi, I have a backup storage cluster running ceph 0.93. Like every backup system, it is only written to and hopefully never read. The hardware is 3 Supermicro SC847 cases with 30 SATA HDDs each (2- and 4-TB WD disks) = 250TB. I have realized that the motherboards and CPUs are totally undersized, so I want to

Re: [ceph-users] Motherboard recommendation?

2015-04-09 Thread Markus Goldberg
Hi Mohamed, thank you for your reply. I thought there is a SAS expander on the backplanes of the SC847, so all drives can be run. Am I wrong? Thanks, Markus On 09.04.2015 at 10:24, Mohamed Pakkeer wrote: Hi Markus, X10DRH-CT can support only 16 drives by default. If you want to connect mor

Re: [ceph-users] SSD Hardware recommendation

2015-04-09 Thread f...@univ-lr.fr
Hi all, just an update - but an important one - of the previous benchmark, with 2 new "10 DWPD class" contenders: - Seagate 1200 - ST200FM0053 - SAS 12Gb/s - Intel DC S3700 - SATA 6Gb/s The graph: http://www.4shared.com/download/yaeJgJiFce/Perf-SSDs-Toshiba-Seagate-Inte.png?lgfp=3000
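
For readers who want to run a comparable test: the usual way to gauge SSD journal suitability is a single-job 4k synchronous write with fio. This is a sketch of that common methodology, not necessarily the exact command behind the linked graph (device path illustrative; the run destroys data on the target device):

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=ssd-journal-test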

[ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Kyle Hutson
I upgraded from giant to hammer yesterday and now 'ceph -w' is constantly repeating this message: 2015-04-09 08:50:26.318042 7f95dbf86700 0 -- 10.5.38.1:0/2037478 >> 10.5.38.1:6789/0 pipe(0x7f95e00256e0 sd=3 :39489 s=1 pgs=0 cs=0 l=1 c=0x7f95e0023670).connect protocol feature mismatch, my 3ff

Re: [ceph-users] MDS unmatched rstat after upgrade hammer

2015-04-09 Thread Scottix
Alright sounds good. Only one comment then: From an IT/ops perspective all I see is ERR and that raises red flags. So the exposure of the message might need some tweaking. In production I like to be notified of an issue but have reassurance it was fixed within the system. Best Regards On Wed, A

Re: [ceph-users] low power single disk nodes

2015-04-09 Thread Scott Laird
Minnowboard Max? 2 atom cores, 1 SATA port, and a real (non-USB) Ethernet port. On Thu, Apr 9, 2015, 8:03 AM p...@philw.com wrote: > Rather expensive option: > > Applied Micro X-Gene, overkill for a single disk, and only really > available in a > development kit format right now. > >

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Gregory Farnum
Can you dump your crush map and post it on pastebin or something? On Thu, Apr 9, 2015 at 7:26 AM, Kyle Hutson wrote: > Nope - it's 64-bit. > > (Sorry, I missed the reply-all last time.) > > On Thu, Apr 9, 2015 at 9:24 AM, Gregory Farnum wrote: >> >> [Re-added the list] >> >> Hmm, I'm checking th
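
A minimal sketch of dumping the map as requested (file names illustrative); the same cycle, with an edit in between, is how straw2 buckets would be reverted:

    # grab the compiled (binary) map -- this is the artifact to share for inspection
    ceph osd getcrushmap -o crushmap.bin
    # decompile to text for pastebin or manual edits, then recompile and inject
    crushtool -d crushmap.bin -o crushmap.txt
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new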

Re: [ceph-users] Motherboard recommendation?

2015-04-09 Thread Mohamed Pakkeer
Hi Markus, I think if you connect more than 16 drives on the backplane, the X10DRH-CT will detect and show only 16 drives in the BIOS. I am not sure about that. If you test this motherboard, please let me know the result. Msg from the Supermicro site: LSI 3108 SAS3 (12Gbps) controller; - 2GB cache; HW RAID 0

Re: [ceph-users] Recovering incomplete PGs with ceph_objectstore_tool

2015-04-09 Thread Paul Evans
Congrats Chris and nice "save" on that RBD! -- Paul > On Apr 9, 2015, at 11:11 AM, Chris Kitzmiller > wrote: > > Success! Hopefully my notes from the process will help: > > In the event of multiple disk failures the cluster could lose PGs. Should > this occur it is best to attempt to restar

Re: [ceph-users] cache-tier do not evict

2015-04-09 Thread Patrik Plank
Hi, ceph version 0.87.1 thanks best regards -Original message- From: Chu Duc Minh  Sent: Thursday 9th April 2015 15:03 To: Patrik Plank Cc: ceph-users@lists.ceph.com >> ceph-users@lists.ceph.com Subject: Re: [ceph-users] cache-tier do not evict What ceph version do you use? Re

Re: [ceph-users] OSDs not coming up on one host

2015-04-09 Thread Jacob Reid
On Wed, Apr 08, 2015 at 03:42:29PM +, Gregory Farnum wrote: > I'm on my phone so can't check exactly what those threads are trying to do, > but the osd has several threads which are stuck. The FileStore threads are > certainly trying to access the disk/local filesystem. You may not have a > hard

Re: [ceph-users] long blocking with writes on rbds

2015-04-09 Thread Christian Balzer
On Thu, 09 Apr 2015 00:25:08 -0400 Jeff Epstein wrote: Running Ceph on AWS is, as was mentioned before, certainly not going to improve things when compared to real HW. At the very least it will make performance unpredictable. Your 6 OSDs are on a single VM from what I gather? Aside from being a v

Re: [ceph-users] long blocking with writes on rbds

2015-04-09 Thread Jeff Epstein
On 04/09/2015 03:14 AM, Christian Balzer wrote: Your 6 OSDs are on a single VM from what I gather? Aside from being a very small number for something that you seem to be using in some sort of production environment (Ceph gets faster the more OSDs you add), where is the redundancy, HA in that?

Re: [ceph-users] MDS unmatched rstat after upgrade hammer

2015-04-09 Thread Scottix
I fully understand, why it is just a comment :) Can't wait for scrub. Thanks! On Thu, Apr 9, 2015 at 10:13 AM John Spray wrote: > > > On 09/04/2015 17:09, Scottix wrote: > > Alright sounds good. > > > > Only one comment then: > > From an IT/ops perspective all I see is ERR and that raises red

Re: [ceph-users] Ceph Hammer : Ceph-deploy 1.5.23-0 : RGW civetweb :: Not getting installed

2015-04-09 Thread Iain Geddes
Hi Vickey, The keyring gets created as part of the initial deployment so it should be on your admin node right alongside the admin keyring etc. FWIW, I tried this quickly yesterday and it failed because the RGW directory didn't exist on the node that I was attempting to deploy to ... but I didn't

[ceph-users] cache-tier do not evict

2015-04-09 Thread Patrik Plank
Hi, I have built a cache-tier pool (replica 2) with 3 x 512GB SSDs for my KVM pool. These are my settings: ceph osd tier add kvm cache-pool ceph osd tier cache-mode cache-pool writeback ceph osd tier set-overlay kvm cache-pool ceph osd pool set cache-pool hit_set_type bloom ceph osd poo
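
As the rest of the thread suggests, the tier also needs eviction targets before the agent will flush or evict anything; a sketch with illustrative values (the pool name follows the post, the byte value matches the one mentioned later in the thread):

    ceph osd pool set cache-pool target_max_bytes 644245094400   # ~600 GiB
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache-pool cache_target_full_ratio 0.8
    ceph osd pool set cache-pool hit_set_count 1
    ceph osd pool set cache-pool hit_set_period 3600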

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Gregory Farnum
[Re-added the list] Hmm, I'm checking the code and that shouldn't be possible. What's your client? (In particular, is it 32-bit? That's the only thing I can think of that might have slipped through our QA.) On Thu, Apr 9, 2015 at 7:17 AM, Kyle Hutson wrote: > I did nothing to enable anything els

Re: [ceph-users] cache-tier do not evict

2015-04-09 Thread Patrik Plank
Hi, set the cache-tier size to 644245094400. This should work. But it is the same. thanks regards -Original message- From: Gregory Farnum  Sent: Thursday 9th April 2015 15:44 To: Patrik Plank Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] cache-tier do not evict On T

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Kyle Hutson
http://people.beocat.cis.ksu.edu/~kylehutson/crushmap On Thu, Apr 9, 2015 at 11:25 AM, Gregory Farnum wrote: > Hmmm. That does look right and neither I nor Sage can come up with > anything via code inspection. Can you post the actual binary crush map > somewhere for download so that we can inspe

Re: [ceph-users] OSDs not coming up on one host

2015-04-09 Thread Jacob Reid
On Thu, Apr 09, 2015 at 08:46:07AM -0700, Gregory Farnum wrote: > On Thu, Apr 9, 2015 at 8:14 AM, Jacob Reid > wrote: > > On Thu, Apr 09, 2015 at 06:43:45AM -0700, Gregory Farnum wrote: > >> You can turn up debugging ("debug osd = 10" and "debug filestore = 10" > >> are probably enough, or maybe

Re: [ceph-users] Recovering incomplete PGs with ceph_objectstore_tool

2015-04-09 Thread Chris Kitzmiller
Success! Hopefully my notes from the process will help: In the event of multiple disk failures the cluster could lose PGs. Should this occur it is best to attempt to restart the OSD process and have the drive marked as up+out. Marking the drive as out will cause data to flow off the drive to el
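
The core of the export/import cycle those notes describe looks roughly like this (OSD ids, PG id, and paths are illustrative assumptions; both OSD daemons must be stopped while the tool runs):

    # on the host with the failed OSD: export the PG's data
    ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --op export --pgid 3.7b --file /tmp/3.7b.export
    # on a healthy OSD that should take the PG: import it, then start the daemon
    ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-4 \
        --journal-path /var/lib/ceph/osd/ceph-4/journal \
        --op import --file /tmp/3.7b.export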

Re: [ceph-users] low power single disk nodes

2015-04-09 Thread Quentin Hartman
I'm skeptical about how well this would work, but a Banana Pi might be a place to start. Like a raspberry pi, but it has a SATA connector: http://www.bananapi.org/ On Thu, Apr 9, 2015 at 3:18 AM, Jerker Nyberg wrote: > > Hello ceph users, > > Is anyone running any low powered single disk nodes w

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Kyle Hutson
Here 'tis: https://dpaste.de/POr1 On Thu, Apr 9, 2015 at 9:49 AM, Gregory Farnum wrote: > Can you dump your crush map and post it on pastebin or something? > > On Thu, Apr 9, 2015 at 7:26 AM, Kyle Hutson wrote: > > Nope - it's 64-bit. > > > > (Sorry, I missed the reply-all last time.) > > > >

[ceph-users] Rebuild bucket index

2015-04-09 Thread Laurent Barbe
Hello ceph users, Do you know a way to rebuild a bucket index? I would like to change num_shards for an existing bucket. If I change this value in the bucket metadata, the new index objects are created correctly, but they are empty (bucket listing returns null). It would be nice to be able to recreate the index
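
Not an answer to the resharding question, but the closest existing tool is the index consistency check in radosgw-admin; whether it helps across a num_shards change is unclear (bucket name illustrative):

    radosgw-admin bucket check --bucket=mybucket
    radosgw-admin bucket check --bucket=mybucket --check-objects --fix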

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Kyle Hutson
Nope - it's 64-bit. (Sorry, I missed the reply-all last time.) On Thu, Apr 9, 2015 at 9:24 AM, Gregory Farnum wrote: > [Re-added the list] > > Hmm, I'm checking the code and that shouldn't be possible. What's your > client? (In particular, is it 32-bit? That's the only thing I can > think of th

[ceph-users] installing and updating while leaving osd drive data intact

2015-04-09 Thread Deneau, Tom
Referencing this old thread below, I am wondering what the proper way is to install, say, new versions of ceph and start up the daemons, but keep all the data on the OSD drives. I had been using ceph-deploy new, which I guess creates a new cluster fsid. Normally for my testing I had been starting with cle
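
One detail worth checking, sketched under the assumption that the OSD data directories are intact: each OSD records the cluster fsid it was created with, and a regenerated ceph.conf has to carry the same value (plus matching keys and monmap) for the daemons to rejoin. This is a pointer, not a complete recipe:

    # fsid the existing OSDs were created with (path illustrative)
    cat /var/lib/ceph/osd/ceph-0/ceph_fsid
    # must match the fsid in the new configuration
    grep fsid /etc/ceph/ceph.conf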

Re: [ceph-users] OSDs not coming up on one host

2015-04-09 Thread Gregory Farnum
On Thu, Apr 9, 2015 at 8:14 AM, Jacob Reid wrote: > On Thu, Apr 09, 2015 at 06:43:45AM -0700, Gregory Farnum wrote: >> You can turn up debugging ("debug osd = 10" and "debug filestore = 10" >> are probably enough, or maybe 20 each) and see what comes out to get >> more information about why the th

Re: [ceph-users] low power single disk nodes

2015-04-09 Thread Quentin Hartman
Where's the "take my money" button? On Thu, Apr 9, 2015 at 9:43 AM, Mark Nelson wrote: > How about drives that run Linux with an ARM processor, RAM, and an > ethernet port right on the drive? Notice the Ceph logo. :) > > https://www.hgst.com/science-of-storage/emerging- > technologies/open-ethe

Re: [ceph-users] cache-tier do not evict

2015-04-09 Thread Gregory Farnum
On Thu, Apr 9, 2015 at 4:56 AM, Patrik Plank wrote: > Hi, > > > i have build a cach-tier pool (replica 2) with 3 x 512gb ssd for my kvm > pool. > > these are my settings : > > > ceph osd tier add kvm cache-pool > > ceph osd tier cache-mode cache-pool writeback > > ceph osd tier set-overlay kvm cac

Re: [ceph-users] use ZFS for OSDs

2015-04-09 Thread Michal Kozanecki
I had surgery and have been off for a while. Had to rebuild test ceph+openstack cluster with whatever spare parts I had. I apologize for the delay for anyone who's been interested. Here are the results; == Hardware/Software 3 node CEPH cluster, 3 O

Re: [ceph-users] ceph-osd failure following 0.92 -> 0.94 upgrade

2015-04-09 Thread Gregory Farnum
On Thu, Apr 9, 2015 at 2:05 PM, Dirk Grunwald wrote: > Ceph cluster, U14.10 base system, OSDs using BTRFS, journal on same disk as > partition > (done using ceph-deploy) > > I had been running 0.92 without (significant) issue. I upgraded > to Hammer (0.94) by modifying /etc/apt/sources.list, apt-

Re: [ceph-users] Motherboard recommendation?

2015-04-09 Thread Mohamed Pakkeer
Hi Markus, X10DRH-CT can support only 16 drives by default. If you want to connect more drives, there is a special SKU for more drive support from Supermicro, or you need an additional SAS controller. We are using 2630 V3 (8 core - 2.4GHz) *2 for 30 drives on SM X10DRI-T. It is working perfectly on repl

Re: [ceph-users] Firefly - Giant : CentOS 7 : install failed ceph-deploy

2015-04-09 Thread Ken Dreyer
On 04/08/2015 03:00 PM, Travis Rhoden wrote: > Hi Vickey, > > The easiest way I know of to get around this right now is to add the > following line in section for epel in /etc/yum.repos.d/epel.repo > > exclude=python-rados python-rbd > > So this is what my epel.repo file looks like: http://fpast
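
For clarity, the exclude line goes in the [epel] stanza of that file; a minimal sketch with the mirror/metalink lines elided:

    # /etc/yum.repos.d/epel.repo -- only the relevant lines shown
    [epel]
    name=Extra Packages for Enterprise Linux 7 - $basearch
    enabled=1
    exclude=python-rados python-rbd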

Re: [ceph-users] OSDs not coming up on one host

2015-04-09 Thread Gregory Farnum
You can turn up debugging ("debug osd = 10" and "debug filestore = 10" are probably enough, or maybe 20 each) and see what comes out to get more information about why the threads are stuck. But just from the log my answer is the same as before, and now I don't trust that controller (or maybe its d
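
Those debug levels can be applied to a running daemon or set persistently; a sketch with an illustrative OSD id (if the daemon is too wedged to answer, the ceph.conf route plus a restart is the fallback):

    # bump debugging on a running OSD
    ceph tell osd.3 injectargs '--debug-osd 10 --debug-filestore 10'
    # or persistently, in the [osd] section of ceph.conf before restarting:
    #   debug osd = 10
    #   debug filestore = 10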

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Gregory Farnum
Hmmm. That does look right and neither I nor Sage can come up with anything via code inspection. Can you post the actual binary crush map somewhere for download so that we can inspect it with our tools? -Greg On Thu, Apr 9, 2015 at 7:57 AM, Kyle Hutson wrote: > Here 'tis: > https://dpaste.de/POr1

Re: [ceph-users] "protocol feature mismatch" after upgrading to Hammer

2015-04-09 Thread Kyle Hutson
This particular problem I just figured out myself ('ceph -w' was still running from before the upgrade, and ctrl-c and restarting solved that issue), but I'm still having a similar problem on the ceph client: libceph: mon19 10.5.38.20:6789 feature set mismatch, my 2b84a042aca < server's 102b84a042
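
To see exactly which bits the server demands but the client lacks, the two masks can be diffed in shell. The server value below is a hypothetical completion for illustration only, since the log line above is cut off:

    SERVER=0x102b84a042aca   # placeholder -- paste the full "server's" mask from the libceph line
    CLIENT=0x2b84a042aca     # the "my" mask from the libceph line
    printf 'bits required by the server but missing on the client: 0x%x\n' $(( SERVER & ~CLIENT ))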

Re: [ceph-users] cache-tier do not evict

2015-04-09 Thread Chu Duc Minh
What ceph version do you use? Regards, On 9 Apr 2015 18:58, "Patrik Plank" wrote: > Hi, > > > i have build a cach-tier pool (replica 2) with 3 x 512gb ssd for my kvm > pool. > > these are my settings : > > > ceph osd tier add kvm cache-pool > > ceph osd tier cache-mode cache-pool writeback > >

[ceph-users] low power single disk nodes

2015-04-09 Thread Jerker Nyberg
Hello ceph users, Is anyone running any low powered single disk nodes with Ceph now? Calxeda seems to be no more according to Wikipedia. I do not think HP moonshot is what I am looking for - I want stand-alone nodes, not server cartridges integrated into server chassis. And I do not want to b

[ceph-users] Ceph Hammer : Ceph-deploy 1.5.23-0 : RGW civetweb :: Not getting installed

2015-04-09 Thread Vickey Singh
Hello Cephers, I am trying to set up RGW using Ceph-deploy, as described here: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance But unfortunately it doesn't seem to be working. Is there something I am missing, or do you know some fix for this? [root@ceph-node1 y
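
For comparison, the quick-start flow the linked page describes boils down to the following (node name follows the post; the install --rgw step assumes a ceph-deploy recent enough to know about RGW, which 1.5.23 is meant to be):

    # run from the admin directory holding ceph.conf and the bootstrap keyrings
    ceph-deploy install --rgw ceph-node1
    ceph-deploy rgw create ceph-node1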

Re: [ceph-users] low power single disk nodes

2015-04-09 Thread Stillwell, Bryan
These are really interesting to me, but how can you buy them? What's the performance like in ceph? Are they using the keyvaluestore backend, or something specific to these drives? Also what kind of chassis do they go into (some kind of ethernet JBOD)? Bryan On 4/9/15, 9:43 AM, "Mark Nelson" w

Re: [ceph-users] RBD hard crash on kernel 3.10

2015-04-09 Thread Shawn Edwards
Thanks for the pointer to the patched kernel. I'll give that a shot. On Thu, Apr 9, 2015, 5:56 AM Ilya Dryomov wrote: > On Wed, Apr 8, 2015 at 5:25 PM, Shawn Edwards > wrote: > > We've been working on a storage repository for xenserver 6.5, which uses > the > > 3.10 kernel (ug). I got the xen

Re: [ceph-users] RBD hard crash on kernel 3.10

2015-04-09 Thread Ilya Dryomov
On Wed, Apr 8, 2015 at 5:25 PM, Shawn Edwards wrote: > We've been working on a storage repository for xenserver 6.5, which uses the > 3.10 kernel (ug). I got the xenserver guys to include the rbd and libceph > kernel modules into the 6.5 release, so that's at least available. > > Where things go

Re: [ceph-users] Firefly - Giant : CentOS 7 : install failed ceph-deploy

2015-04-09 Thread Vickey Singh
Thanks for the help guys, here is my feedback with the tests. @Michael Kidd: yum install ceph ceph-common --disablerepo=base --disablerepo=epel did not work; here are the logs: http://fpaste.org/208828/56448714/ @Travis Rhoden: Yep, *exclude=python-rados python-rbd* under epel.repo did t

Re: [ceph-users] Cascading Failure of OSDs

2015-04-09 Thread HEWLETT, Paul (Paul)** CTR **
I use the following: cat /sys/class/net/em1/statistics/rx_bytes for the em1 interface; all other stats are available. Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: ceph-users [ceph-user
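
The same counter can be turned into a quick throughput reading; a small sketch using the interface name from the message:

    # approximate receive rate on em1 in bytes per second
    R1=$(cat /sys/class/net/em1/statistics/rx_bytes); sleep 1
    R2=$(cat /sys/class/net/em1/statistics/rx_bytes)
    echo "rx: $(( R2 - R1 )) bytes/s"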

Re: [ceph-users] MDS unmatched rstat after upgrade hammer

2015-04-09 Thread John Spray
On 09/04/2015 17:09, Scottix wrote: Alright sounds good. Only one comment then: From an IT/ops perspective all I see is ERR and that raises red flags. So the exposure of the message might need some tweaking. In production I like to be notified of an issue but have reassurance it was fixed w