Re: [ceph-users] Observations with a SSD based pool under Hammer

2016-02-29 Thread Christian Balzer
On Mon, 29 Feb 2016 02:15:28 -0500 (EST) Shinobu Kinjo wrote: > Christian, > > > Ceph: no tuning or significant/relevant config changes, OSD FS is Ext4, > > Ceph journal is inline (journal file). > > Quick question. Is there any reason you selected Ext4? > https://www.mail-archive.com/ceph-user

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Odintsov Vladislav
Hi all, should we build el6 packages ourselves, or is it hoped that these packages will be built officially by the community? Regards, Vladislav Odintsov From: ceph-devel-ow...@vger.kernel.org on behalf of Franklin M. Siler

Re: [ceph-users] Observations with a SSD based pool under Hammer

2016-02-29 Thread Mark Nelson
On 02/29/2016 02:37 AM, Christian Balzer wrote: On Mon, 29 Feb 2016 02:15:28 -0500 (EST) Shinobu Kinjo wrote: Christian, Ceph: no tuning or significant/relevant config changes, OSD FS is Ext4, Ceph journal is inline (journal file). Quick question. Is there any reason you selected Ext4? ht

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Mario Giammarco
Ferhat Ozkasgarli writes: > 1-) One of the OSD nodes has network problem. > 2-) Disk failure > 3-) Not enough resource for OSD nodes > 4-) Slow OSD Disks I have replaced cables and switches. I am sure that there are no network problems. Disks are SSHD and so they are fast. Nodes memory is empty

Re: [ceph-users] State of Ceph documention

2016-02-29 Thread John Spray
On Fri, Feb 26, 2016 at 10:49 PM, Nigel Williams wrote: > On Fri, Feb 26, 2016 at 11:28 PM, John Spray wrote: >> Some projects have big angry warning banners at the top of their >> master branch documentation, I think perhaps we should do that too, >> and at the same time try to find a way to ste

Re: [ceph-users] ceph hammer : rbd info/Status : operation not supported (95) (EC+RBD tier pools)

2016-02-29 Thread Adrien Gillard
We are likely facing the same kind of issue in our infernalis cluster with EC. From time to time some of our volumes mounted via the RBD kernel module will start to "freeze". I can still browse the volume, but the (backup) application using it hangs. I guess it's because it tries to access an

Re: [ceph-users] ceph hammer : rbd info/Status : operation not supported (95) (EC+RBD tier pools)

2016-02-29 Thread Christian Balzer
Hello, On Mon, 29 Feb 2016 11:14:28 +0100 Adrien Gillard wrote: > We are likely facing the same kind of issue in our infernalis cluster > with EC. > Have you tried what Nick Fisk suggested (and which makes perfect sense to me, but I can't test it, no EC pools here)? That is setting the recency

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Shinobu Kinjo
Can we put together some kind of general procedure for making packages, so that almost everyone in the community can build packages themselves and reduce the developers' workload caused by too many requirements? Cheers, Shinobu - Original Message - From: "Odintsov Vladislav" To: "Franklin M. Siler" , "Xiao

[ceph-users] Ceph and systemd

2016-02-29 Thread zorg
Hi, can someone explain how Ceph is organized with systemd? I can see that for each OSD there is a unit file like this /etc/systemd/system/ceph.target.wants/ceph-osd@0.service which is a symlink to /lib/systemd/system/ceph-osd@.service but there are other services started for Ceph like sys-dev
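For reference, the unit layout described here can be inspected with systemctl. A hedged sketch, assuming a systemd host with the Ceph packages installed (unit names vary by release):

```shell
# Show everything pulled in by the ceph target, including per-OSD instances
systemctl list-dependencies ceph.target

# Show the state of one OSD instance and the template unit file behind it
systemctl status ceph-osd@0
systemctl cat ceph-osd@0    # resolves to /lib/systemd/system/ceph-osd@.service
```

The `@0` instance syntax is systemd's template mechanism: each symlink in ceph.target.wants instantiates the shared ceph-osd@.service template with a different OSD id.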

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Odintsov Vladislav
Can you please provide right way for building rpm packages? Regards, Vladislav Odintsov From: Shinobu Kinjo Sent: Monday, February 29, 2016 14:11 To: Odintsov Vladislav Cc: Franklin M. Siler; Xiaoxi Chen; ceph-de...@vger.

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Shinobu Kinjo
> What can I do now? How can I debug? I also would like to know more specific procedure to fix the issue under this situation. Cheers, Shinobu - Original Message - From: "Mario Giammarco" To: ceph-users@lists.ceph.com Sent: Monday, February 29, 2016 6:39:16 PM Subject: Re: [ceph-users]

Re: [ceph-users] ceph hammer : rbd info/Status : operation not supported (95) (EC+RBD tier pools)

2016-02-29 Thread Adrien Gillard
Nope, I have not, as we haven't faced the issue for some time now and the less promotion happening, the better for us: this is a cluster for backups and the same disks are used for cache and EC pools at the moment. I will try this if the bug happens again. On Mon, Feb 29, 2016 at 11:40 AM, Christi

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Dimitar Boichev
I am sure that I speak for the majority of people reading this when I say that I didn't get anything from your emails. Could you provide more debug information? Like (but not limited to): ceph -s ceph health detail ceph osd tree ... I am really having a bad time trying to decode the exact pro
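The commands Dimitar asks for are the standard first-round diagnostics. A sketch, with `<pgid>` as a placeholder:

```shell
ceph -s                        # overall cluster status
ceph health detail             # per-PG detail behind any HEALTH_WARN/ERR
ceph osd tree                  # OSD up/down state and CRUSH layout
ceph pg dump_stuck inactive    # list the stuck PGs
ceph pg <pgid> query           # peering state of one PG, e.g. ceph pg 0.0 query
```

The `pg query` output in particular shows which OSDs a PG is probing and why peering is blocked, which is usually what an "incomplete" diagnosis hinges on.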

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Alfredo Deza
On Mon, Feb 29, 2016 at 6:30 AM, Odintsov Vladislav wrote: > Can you please provide right way for building rpm packages? Building binaries is tricky. CI has a few steps to be able to get binaries at the end of the process. The actual RPM building is mainly this portion: https://github.com/ceph/c

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Dan van der Ster
On Mon, Feb 29, 2016 at 12:30 PM, Odintsov Vladislav wrote: > Can you please provide right way for building rpm packages? It's documented here: http://docs.ceph.com/docs/master/install/build-ceph/#rpm-package-manager For 0.94.6 you need to change the .spec file to use .tar.gz (because there was
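Dan's steps might look roughly like the following. This is a sketch only — the tarball URL and the exact spec-file edit should be verified against the build documentation he links:

```shell
wget http://download.ceph.com/tarballs/ceph-0.94.6.tar.gz
tar xzf ceph-0.94.6.tar.gz && cd ceph-0.94.6
# Per Dan's note: edit ceph.spec so Source0 references .tar.gz, not .tar.bz2
rpmbuild -ba ceph.spec --define "_sourcedir $(dirname "$PWD")"
```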

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Josef Johansson
Hi, There is also https://github.com/jordansissel/fpm/wiki I find it quite useful for building deb/rpm. What would be useful for the community per se would be if you made a Dockerfile for each type of combination, i.e. Ubuntu trusty / 10.0.3 and so fo
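For comparison, an fpm invocation is nearly a one-liner. Illustrative only — the paths and version are made up, and fpm wraps whatever installed tree you point it at rather than performing a real build:

```shell
# Package an already-installed tree (e.g. from "make install DESTDIR=...") as an RPM
fpm -s dir -t rpm -n ceph -v 0.94.6 -C /tmp/ceph-install usr etc
```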

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Josef Johansson
Maybe the reverse is possible, where we as a community lend out computing resources that the central build system could use. > On 29 Feb 2016, at 14:38, Josef Johansson wrote: > > Hi, > > There is also https://github.com/jordansissel/fpm/wiki > > >

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Josef Johansson
You could sync from me instead @ se.ceph.com As a start. Regards /Josef > On 29 Feb 2016, at 15:19, Florent B wrote: > > I would like to inform you that I have difficulties to set-up a mirror. > > rsync on download.ceph.com is down > > # rsync download.ceph.com:: > rsyn

[ceph-users] osd suddenly down / connect claims to be / heartbeat_check: no reply

2016-02-29 Thread Oliver Dzombic
Hi, i face some trouble here with the cluster. Suddenly "random" OSDs are getting marked out. After restarting the OSD on the specific node, it's working again. This usually happens during activated scrubbing/deep scrubbing. In the logs i can see: 2016-02-29 06:08:58.130376 7fd5dae75700 0 --

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Josef Johansson
Then we’re all in the same boat. > On 29 Feb 2016, at 15:30, Florent B wrote: > > Hi and thank you. But for me, you are out of sync as eu.ceph.com. Can't find > Infernalis 9.2.1 on your mirror :( > > On 02/29/2016 03:21 PM, Josef Johansson wrote: >> You could sync from me instead @ se.ceph.com

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Josef Johansson
I’ll check if I can mirror it through http. > On 29 Feb 2016, at 15:31, Josef Johansson wrote: > > Then we’re all in the same boat. > >> On 29 Feb 2016, at 15:30, Florent B > > wrote: >> >> Hi and thank you. But for me, you are out of sync as eu.ceph.com >>

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Josef Johansson
Syncing now. > On 29 Feb 2016, at 15:38, Josef Johansson wrote: > > I’ll check if I can mirror it though http. >> On 29 Feb 2016, at 15:31, Josef Johansson > > wrote: >> >> Then we’re all in the same boat. >> >>> On 29 Feb 2016, at 15:30, Florent B >>

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Loic Dachary
Hi Dan & al, I think it would be relatively simple to have these binaries published as part of the current "Stable release" team effort[1]. Essentially doing what you did and electing a central place to store these binaries. The trick is to find a sustainable way to do this which means having a

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Nathan Cutler
The basic idea is to copy the packages that are built by gitbuilders or by the buildpackage teuthology task to a central place, because these packages are built for development versions as well as stable versions[2], and they are tested via teuthology. The packages that are published on http:/

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Sage Weil
The intention was to continue building stable releases (0.94.x) on the old list of supported platforms (which includes 12.04 and el6). I think it was just an oversight that they weren't built this time around. I think the overhead of doing so is just keeping a 12.04 and el6 jenkins build slave aroun

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Loic Dachary
On 29/02/2016 22:49, Nathan Cutler wrote: >> The basic idea is to copy the packages that are build by gitbuilders or by >> the buildpackage teuthology task in a central place. Because these packages >> are built, for development versions as well as stable versions[2]. And they >> are tested vi

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Loic Dachary
I've created a pad at http://pad.ceph.com/p/development-releases for the next CDM ( see http://tracker.ceph.com/projects/ceph/wiki/Planning for details). On 29/02/2016 22:49, Nathan Cutler wrote: > The basic idea is to copy the packages that are build by gitbuilders or by > the buildpackage teut

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread Ken Dreyer
I recommend we simply drop the init scripts from the master branch. All our supported platforms (CentOS 7 or newer, and Ubuntu Trusty or newer) use upstart or systemd. - Ken On Mon, Feb 29, 2016 at 3:44 AM, Florent B wrote: > Hi everyone, > > On a few servers, updated from Hammer to Infernalis,

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread Vasu Kulkarni
+1 On Mon, Feb 29, 2016 at 8:36 AM, Ken Dreyer wrote: > I recommend we simply drop the init scripts from the master branch. > All our supported platforms (CentOS 7 or newer, and Ubuntu Trusty or > newer) use upstart or systemd. > > - Ken > > On Mon, Feb 29, 2016 at 3:44 AM, Florent B wrote: > >

[ceph-users] s3 bucket creation time

2016-02-29 Thread Luis Periquito
Hi all, I have a biggish ceph environment and currently creating a bucket in radosgw can take as long as 20s. What affects the time a bucket takes to be created? How can I improve that time? I've tried to create in several "bucket-location" with different backing pools (some of them empty) and t

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread ceph
Well, if they are harmless, please leave them in. Not everyone uses (or will use) systemd or upstart; please do not make life harder for those people. On 29/02/2016 17:56, Vasu Kulkarni wrote: > +1 > > On Mon, Feb 29, 2016 at 8:36 AM, Ken Dreyer wrote: > >> I recommend we simply drop the init scripts from

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Josef Johansson
Got rpm-infernalis in now, and I’m updating debian-infernalis as well. /Josef > On 29 Feb 2016, at 15:44, Josef Johansson wrote: > > Syncing now. >> On 29 Feb 2016, at 15:38, Josef Johansson > > wrote: >> >> I’ll check if I can mirror it though http. >>> On 29 Feb 2016

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Austin Johnson
All, I agree that rsync is down on download.ceph.com. I get a connection timeout as well. Which makes it seem like an issue of the firewall silently dropping packets. It has been down for at least a few weeks, forcing me to sync from eu, which seems out of date. Tyler - Is there any way that bey

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Dan van der Ster
If it can help, it's really very little work for me to send the hammer SRPM to our Koji build system. I think the real work will come if people starting asking for jewel builds on el6 and other old platforms. In that case, if a reputable organisation offers to maintain the builds (+ deps), then IM

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Wido den Hollander
> On 29 February 2016 at 18:22, Austin Johnson wrote: > > > All, > > I agree that rsync is down on download.ceph.com. I get a connection timeout > as well. Which makes it seem like an issue of the firewall silently > dropping packets. > Remember that download.ceph.com is THE source of all d

Re: [ceph-users] Ceph mirrors wanted!

2016-02-29 Thread Josef Johansson
Hm, I should be a bit more updated now. At least for {debian,rpm}-{hammer,infernalis,testing} /Josef > On 29 Feb 2016, at 19:19, Wido den Hollander wrote: > > >> Op 29 februari 2016 om 18:22 schreef Austin Johnson >> : >> >> >> All, >> >> I agree that rsync is down on download.ceph.com. I

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Mario Giammarco
Thank you for your time. Dimitar Boichev writes: > > I am sure that I speak for the majority of people reading this, when I say that I didn't get anything from your emails. > Could you provide more debug information ? > Like (but not limited to): > ceph -s > ceph health details > ceph osd tree

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Mario Giammarco
Mario Giammarco writes: Sorry ceph health detail is: HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean pg 0.0 is stuck inactive for 4836623.776873, current state incomplete, last acting [0,1,3] pg 0.40 is stuck inactive for 2773379.028048, current state incomplete, last a

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Lionel Bouton
On 29/02/2016 20:43, Mario Giammarco wrote: > [...] > I said SSHD that is a standard hdd with ssd cache. It is 7200rpms but in > benchmarks it is better than a 1rpm disk. Lies, damn lies and benchmarks... SSHDs usually have very small flash caches (16GB or less for 500GB of data or more) and

[ceph-users] Ceph Developer Monthly this Wed!

2016-02-29 Thread Patrick McGarry
Hey cephers, Just a reminder, the monthly dev meeting [0] for ceph developers is this Wed at 9p EST (we are on an APAC-friendly month). If you are currently working on commits to ceph, or would like to be, please join us for a quick rundown of work in progress. If you are able, it would be greatl

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Oliver Dzombic
Hi, i don't know, but as it seems to me: incomplete = not enough data, so the only solution would be to drop (delete) it so the cluster gets into an active healthy state. How many copies do you keep of each piece of data? -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Mario Giammarco
Oliver Dzombic writes: > > Hi, > > i dont know, but as it seems to me: > > incomplete = not enough data > > the only solution would be to drop it ( delete ) > > so the cluster get in active healthy state. > > How many copies do you do from each data ? > Do you mean dropping the pg not wo

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Shinobu Kinjo
> the fact that they are optimized for benchmarks and certainly not > Ceph OSD usage patterns (with or without internal journal). Are you assuming that SSHD is causing the issue? If you could elaborate on this more, it would be helpful. Cheers, Shinobu - Original Message - From: "Lionel

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Lionel Bouton
Le 29/02/2016 22:50, Shinobu Kinjo a écrit : >> the fact that they are optimized for benchmarks and certainly not >> Ceph OSD usage patterns (with or without internal journal). > Are you assuming that SSHD is causing the issue? > If you could elaborate on this more, it would be helpful. Probably n

Re: [ceph-users] Ceph and Google Summer of Code

2016-02-29 Thread Wido den Hollander
A long wanted feature is mail storage in RADOS: http://tracker.ceph.com/issues/12430 Would that be a good idea? I'd be more than happy to mentor this one. I will probably lack the technical C++ skills, but e-mail storage itself is something I'm very familiar with. Wido > Op 29 februari 2016 om

Re: [ceph-users] Ceph and Google Summer of Code

2016-02-29 Thread David
Great idea! +1 David Majchrzak > On 29 Feb 2016, at 22:53, Wido den Hollander wrote: > > A long wanted feature is mail storage in RADOS: > http://tracker.ceph.com/issues/12430 > > Would that be a good idea? I'd be more than happy to mentor this one. > > I will probably lack the technical C++ ski

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Nmz
In my free time I'm trying to understand how CEPH tries to detect corrupted data. You can look here http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007680.html Can you try running md5sum on the stuck PGs from all OSDs? > Oliver Dzombic writes: >> Hi, >> i dont know, but as it se
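The comparison Nmz suggests boils down to hashing a PG's object files on each OSD and diffing the results. A hedged sketch — the filestore path layout (/var/lib/ceph/osd/ceph-N/current/&lt;pgid&gt;_head) is an assumption based on default Hammer installs, and the demo below runs against a throwaway directory standing in for a PG directory:

```shell
#!/bin/sh
# Reduce a PG directory to a single fingerprint so outputs from
# different OSDs can be compared with one string-equality check.
pg_fingerprint() {
    # Hash every file, keep only the hash column (so differing mount
    # prefixes on different OSDs don't change the result), then hash
    # the sorted list itself.
    find "$1" -type f -print0 | sort -z | xargs -0 md5sum \
        | awk '{print $1}' | md5sum | cut -d' ' -f1
}

# Stand-in for /var/lib/ceph/osd/ceph-0/current/0.0_head:
mkdir -p /tmp/pg_demo/0.0_head
printf 'object-a' > /tmp/pg_demo/0.0_head/obj1
printf 'object-b' > /tmp/pg_demo/0.0_head/obj2
pg_fingerprint /tmp/pg_demo/0.0_head
```

Run against the same PG's directory on each OSD holding a replica; matching fingerprints mean the object payloads are byte-identical (though omap/xattr metadata is not covered by this check).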

Re: [ceph-users] Help: pool not responding

2016-02-29 Thread Shinobu Kinjo
> Probably not (unless they reveal themselves extremely unreliable with > Ceph OSD usage patterns which would be surprising to me). Thank you for letting me know your thought. That does make sense. Cheers, - Original Message - From: "Lionel Bouton" To: "Shinobu Kinjo" Cc: "Mario Giamma

Re: [ceph-users] Ceph and Google Summer of Code

2016-02-29 Thread Shinobu Kinjo
Yeah, sounds good. +1 Cheers, - Original Message - From: "David" To: "Wido den Hollander" Cc: "Ceph Devel" , "Ceph-User" , bo...@lists.ceph.com Sent: Tuesday, March 1, 2016 7:22:31 AM Subject: Re: [ceph-users] Ceph and Google Summer of Code Great idea! +1 David Majchrzak > 29 feb. 2

Re: [ceph-users] Ceph and Google Summer of Code

2016-02-29 Thread Patrick McGarry
Hey Wido, That’s a great idea, I’ll add it to the ideas list (and you as a mentor). You, or anyone, should feel free to solicit a student to submit the proposal if you know any. I think this would be a great summer project. Thanks. On Mon, Feb 29, 2016 at 4:53 PM, Wido den Hollander wrote: > A

[ceph-users] Ceph and Google Summer of Code

2016-02-29 Thread Patrick McGarry
Hey cephers, As many of you may have seen by now, Ceph was accepted back for another year of GSoC. I’m asking all of you to make sure that any applicable students that you know consider working with Ceph this year. We’re happy to accept proposals from our ideas list [0], or any custom proposal th

Re: [ceph-users] Fwd: List of SSDs

2016-02-29 Thread Heath Albritton
> Did you just do these tests or did you also do the "suitable for Ceph" > song and dance, as in sync write speed? These were done with libaio, so async. I can do a sync test if that helps. My goal for testing wasn't specifically suitability with ceph, but overall suitability in my environment,

Re: [ceph-users] s3 bucket creation time

2016-02-29 Thread Robin H. Johnson
On Mon, Feb 29, 2016 at 04:58:07PM +, Luis Periquito wrote: > Hi all, > > I have a biggish ceph environment and currently creating a bucket in > radosgw can take as long as 20s. > > What affects the time a bucket takes to be created? How can I improve that > time? > > I've tried to create i

Re: [ceph-users] ext4 external journal - anyone tried this?

2016-02-29 Thread Lindsay Mathieson
On 2/05/2015 6:53 PM, Matthew Monaco wrote: It looks like you can get a pretty good performance benefit from using ext4 with an "external" SSD journal. Has anyone tried this with ceph? Take, for example, a system with a 3:1 HDD to SSD ratio. What are some of your thoughts? Did you ever get a r

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread Christian Balzer
Hello, definitely not going to drag this into the systemd versus SysV cesspit (I have very strong feelings about that matter, but that's neither here nor now). For the record, I installed my clusters on Jessie starting with Firefly. Which had crap systemd support, so I couldn't (re)start individ

Re: [ceph-users] osd suddenly down / connect claims to be / heartbeat_check: no reply

2016-02-29 Thread Christian Balzer
Hello, googling for "ceph wrong node" gives us this insightful thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg09960.html I suggest reading through it, more below: On Mon, 29 Feb 2016 15:30:41 +0100 Oliver Dzombic wrote: > Hi, > > i face here some trouble with the cluster. >

[ceph-users] User Interface

2016-02-29 Thread Vlad Blando
Hi, We already have user interfaces that are admin facing (ex. calamari, kraken, ceph-dash); how about a client facing interface that can cater for both block and object store? For object store I can use Swift via the Horizon dashboard, but for block-store, I'm not sure how. Thanks. /Vlad __

Re: [ceph-users] ceph hammer : rbd info/Status : operation not supported (95) (EC+RBD tier pools)

2016-02-29 Thread Christian Balzer
Hello, just to close this up as far as my upgrade is concerned. I choose not to go for 0.94.6 at this point in time, too fresh off the boat for my taste. Phasing in the cache tier literally caused 10 Minutes of Terror (tm pending), including the first ever "wrongly marked down" OSDs I've seen

[ceph-users] Cannot mount cephfs after some disaster recovery

2016-02-29 Thread 10000
Hi, I ran into trouble mounting cephfs after doing some disaster recovery following the official document (http://docs.ceph.com/docs/master/cephfs/disaster-recovery). Now when I try to mount the cephfs, I get "mount error 5 = Input/output error". When running "ceph -s" on the clusters, it

[ceph-users] rbd cache did not help improve performance

2016-02-29 Thread min fang
Hi, I set the following parameters in ceph.conf [client] rbd cache=true rbd cache size= 25769803776 rbd readahead disable after byte=0 I map an rbd image to a rbd device, then run fio testing 4k reads with the command ./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read -ioengine=aio -bs

Re: [ceph-users] rbd cache did not help improve performance

2016-02-29 Thread Shinobu Kinjo
You may want to set "ioengine=rbd", I guess. Cheers, - Original Message - From: "min fang" To: "ceph-users" Sent: Tuesday, March 1, 2016 1:28:54 PM Subject: [ceph-users] rbd cache did not help improve performance Hi, I set the following parameters in ceph.conf [client] rbd cache=tr
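Shinobu's suggestion can be expressed as a fio job file. A sketch under assumptions — the pool, image, and client names below are placeholders for this environment, and the rbd engine requires fio to have been built with librbd support:

```ini
; fio job using the rbd engine, which goes through librbd — so the
; [client] rbd cache settings in ceph.conf actually apply, unlike
; reads through the /dev/rbdX kernel device.
[rbd-read-test]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=rbd4
rw=read
bs=4k
iodepth=64
direct=1
runtime=60
```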

[ceph-users] Replacing OSD drive without rempaping pg's

2016-02-29 Thread Lindsay Mathieson
I was looking at replacing an osd drive in place as per the procedure here: http://www.spinics.net/lists/ceph-users/msg05959.html "If you are going to replace the drive immediately, set the “noout” flag. Take the OSD “down” and replace drive. Assuming it is mounted in the same place a
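The quoted procedure can be sketched as a command sequence. This is an illustration under assumptions: OSD id 0 is a placeholder, and the sysvinit-style service commands match a pre-systemd Hammer host:

```shell
ceph osd set noout            # prevent the cluster marking the OSD out and remapping PGs
/etc/init.d/ceph stop osd.0   # take the OSD "down"
# ... physically replace the drive, recreate the filesystem,
#     and mount it in the same place as before ...
/etc/init.d/ceph start osd.0  # bring it back; it will backfill its PGs
ceph osd unset noout          # restore normal out-marking once healthy
```

The point of `noout` is that only the replaced OSD's own PGs backfill onto the new drive, instead of the whole cluster remapping and then remapping back.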

Re: [ceph-users] rbd cache did not help improve performance

2016-02-29 Thread Tom Christensen
If you are mapping the RBD with the kernel driver then you're not using librbd so these settings will have no effect I believe. The kernel driver does its own caching but I don't believe there are any settings to change its default behavior. On Mon, Feb 29, 2016 at 9:36 PM, Shinobu Kinjo wrote: