[ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-10 Thread Mike Almateia
Hello. For our CCTV stream-storing project we decided to use a Ceph cluster with an EC pool. The input requirements are not scary: max. 15 Gbit/s of incoming traffic from the CCTV, 30-day retention, 99% write operations, and the cluster must be able to grow without downtime. For now our vision of the architecture is like: * 6 J
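For reference, a minimal sketch of how such a pool would be created on a Hammer-era cluster (the profile values k=8/m=2, the pool name, and the PG count are illustrative assumptions, not figures from the thread):

    # Define an erasure-code profile (illustrative k/m values)
    ceph osd erasure-code-profile set cctv-profile \
        k=8 m=2 ruleset-failure-domain=host
    # Create the EC pool using that profile (PG count is a guess)
    ceph osd pool create cctv 2048 2048 erasure cctv-profile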

[ceph-users] ceph mds operations

2015-11-10 Thread Kenneth Waegeman
Hi all, Is there a way to see what an MDS is actually doing? We are testing metadata operations, but in the ceph status output we only see about 50 ops/s: client io 90791 kB/s rd, 54 op/s. Our active ceph-mds is using a lot of CPU and 25GB of memory, so I guess it is doing a lot of operations fr

[ceph-users] Chown in Parallel

2015-11-10 Thread Nick Fisk
I'm currently upgrading to Infernalis and the chown stage is taking a long time on my OSD nodes. I've come up with this little one-liner to run the chowns in parallel:

    find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph

NOTE: You still need to make sure

Re: [ceph-users] Chown in Parallel

2015-11-10 Thread Jan Schermer
I would just disable barriers and enable them afterwards (+sync); it should be a breeze then. Jan > On 10 Nov 2015, at 12:58, Nick Fisk wrote: > > I'm currently upgrading to Infernalis and the chown stage is taking a long > time on my OSD nodes. I've come up with this little one-liner to run the
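A rough sketch of what Jan suggests, assuming an XFS-backed OSD mounted at /var/lib/ceph/osd/ceph-0 (the path and the nobarrier option are assumptions; disabling barriers trades crash safety for speed, so only do it while the OSD daemon is stopped):

    # Disable write barriers for the duration of the chown
    mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
    # Flush everything to disk, then re-enable barriers
    sync
    mount -o remount,barrier /var/lib/ceph/osd/ceph-0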

Re: [ceph-users] Chown in Parallel

2015-11-10 Thread Nick Fisk
I'm looking at iostat and most of the IO is read, so I think it would still take a while if it were still single-threaded:

    Device:  rrqm/s  wrqm/s    r/s    w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
    sda        0.00    0.50   0.00   5.50

Re: [ceph-users] Chown in Parallel

2015-11-10 Thread Jan Schermer
Interesting. I have all the inodes in cache on my nodes so I expect the bottleneck to be filesystem metadata -> journal writes. Unless something else is going on in here ;-) Jan > On 10 Nov 2015, at 13:19, Nick Fisk wrote: > > I’m looking at iostat and most of the IO is read, so I think it wo

[ceph-users] Permanent MDS restarting under load

2015-11-10 Thread Oleksandr Natalenko
Hello. We have CephFS deployed over a Ceph cluster (0.94.5). We experience constant MDS restarts under a high-IOPS workload (e.g. rsyncing lots of small mailboxes from another storage system to CephFS using the ceph-fuse client). First, cluster health goes to HEALTH_WARN state with the following disclaime

Re: [ceph-users] Problem with infernalis el7 package

2015-11-10 Thread Kenneth Waegeman
On 10/11/15 02:07, c...@dolphin-it.de wrote: Hello, I filed a new ticket: http://tracker.ceph.com/issues/13739 Regards, Kevin [ceph-users] Problem with infernalis el7 package (10-Nov-2015 1:57) From: Bob R To: ceph-users@lists.ceph.com Hello, We've got two problems trying to update our

Re: [ceph-users] Problem with infernalis el7 package

2015-11-10 Thread Kenneth Waegeman
Because our problem was not related to ceph-deploy, I created a new ticket: http://tracker.ceph.com/issues/13746 On 10/11/15 16:53, Kenneth Waegeman wrote: On 10/11/15 02:07, c...@dolphin-it.de wrote: Hello, I filed a new ticket: http://tracker.ceph.com/issues/13739 Regards, Kevin [ceph-u

Re: [ceph-users] all three mons segfault at same time

2015-11-10 Thread Logan V.
I am in the process of upgrading a cluster with mixed 0.94.2/0.94.3 to 0.94.5 this morning and am seeing identical crashes. While doing a rolling upgrade across the mons, after the 3rd of 3 mons was restarted to 0.94.5, all 3 crashed simultaneously, identical to what you are

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-10 Thread Paul Evans
Mike - unless things have changed in the latest version(s) of Ceph, I do *not* believe CRUSH will be successful in creating a valid PG map if the 'n' value is 10 (k+m), your host count is 6, and your failure domain is set to host. You'll need to increase your host count to match or exceed 'n', ch
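A sketch of the two ways out that Paul's advice implies, with assumed profile names (with 6 hosts, the 10 chunks of k=8/m=2 cannot each land on a distinct host; either add hosts or relax the failure domain to osd, accepting that one host may then hold several chunks of a PG):

    # Option A: keep failure domain = host, but run >= 10 hosts
    ceph osd erasure-code-profile set ec-k8m2-host \
        k=8 m=2 ruleset-failure-domain=host
    # Option B: stay at 6 hosts, drop the failure domain to osd
    ceph osd erasure-code-profile set ec-k8m2-osd \
        k=8 m=2 ruleset-failure-domain=osd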

Re: [ceph-users] ceph mds operations

2015-11-10 Thread John Spray
On Tue, Nov 10, 2015 at 11:46 AM, Kenneth Waegeman wrote: > Hi all, > > Is there a way to see what an MDS is actually doing? We are testing metadata > operations, but in the ceph status output only see about 50 ops/s : client > io 90791 kB/s rd, 54 op/s > Our active ceph-mds is using a lot of cpu
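Beyond ceph status, the MDS admin socket gives a much closer view; a sketch, run on the MDS host and assuming the daemon is named mds.a (exact command availability varies by version):

    # Full perf counters: request rates, cache sizes, journal stats
    ceph daemon mds.a perf dump
    # Connected clients and their session state
    ceph daemon mds.a session ls
    # In-flight metadata ops, if the version supports it
    ceph daemon mds.a dump_ops_in_flight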

Re: [ceph-users] all three mons segfault at same time

2015-11-10 Thread Logan V.
I am on trusty also but my /var/lib/ceph/mon lives on an xfs filesystem. My mons seem to have stabilized now after upgrading the last of the OSDs to 0.94.5. No crashes in the last 20 minutes whereas they were crashing every 1-2 minutes in a rolling fashion the entire time I was upgrading OSDs. On

Re: [ceph-users] Ceph Openstack deployment

2015-11-10 Thread Iban Cabrillo
Hi Vasily, Did you see anything interesting in the logs? I do not really know where else to look. Everything seems to be OK for me. Any help will be very appreciated. 2015-11-06 15:29 GMT+01:00 Iban Cabrillo: > Hi Vasily, > Of course, > from cinder-volume.log > > 2015-11-06 12:28:52.865

Re: [ceph-users] cephfs: Client hp-s3-r4-compute failing torespondtocapabilityrelease

2015-11-10 Thread Gregory Farnum
Can you dump the metadata ops in flight on each ceph-fuse when it hangs? ceph daemon <client admin socket> mds_requests -Greg On Mon, Nov 9, 2015 at 8:06 AM, Burkhard Linke wrote: > Hi, > > On 11/09/2015 04:03 PM, Gregory Farnum wrote: >> >> On Mon, Nov 9, 2015 at 6:57 AM, Burkhard Linke >> wrote: >>> >>> Hi, >>>

Re: [ceph-users] Permanent MDS restarting under load

2015-11-10 Thread Gregory Farnum
On Tue, Nov 10, 2015 at 6:32 AM, Oleksandr Natalenko wrote: > Hello. > > We have CephFS deployed over a Ceph cluster (0.94.5). > > We experience constant MDS restarts under a high-IOPS workload (e.g. > rsyncing lots of small mailboxes from another storage system to CephFS using > the ceph-fuse client). First,

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-10 Thread Vickey Singh
On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander wrote: > On 11/09/2015 05:27 PM, Vickey Singh wrote: > > Hello Ceph Geeks > > > > Need your comments on my understanding of straw2. > > > >- Is straw2 better than straw? > > It is not per se better than straw(1). > > straw2 distributes data
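For context, moving existing buckets to straw2 on a Hammer cluster is typically done by editing the decompiled CRUSH map; a sketch (file names are arbitrary, and every client must be Hammer or newer to understand straw2):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # Change 'alg straw' to 'alg straw2' in each bucket
    sed -i 's/alg straw$/alg straw2/' crushmap.txt
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin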

[ceph-users] Issue activating OSDs

2015-11-10 Thread James Gallagher
Hi, I'm having issues activating my OSDs. I have provided the output of the fault. I can see that the error message says the connection is timing out; however, I am struggling to understand why, as I have followed each stage of the quick start guide. For example, I can ping node1 (which
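Timeouts during activation are often plain connectivity problems rather than Ceph itself; a hedged checklist, assuming CentOS/RHEL-style nodes with firewalld (the ports are Ceph's defaults; none of this is from the thread):

    # From the node running ceph-deploy, check the monitor port
    nc -zv node1 6789
    # OSDs also need 6800-7300/tcp open between all nodes
    firewall-cmd --permanent --add-port=6789/tcp
    firewall-cmd --permanent --add-port=6800-7300/tcp
    firewall-cmd --reload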

Re: [ceph-users] v9.2.0 Infernalis released

2015-11-10 Thread Alfredo Deza
On Sun, Nov 8, 2015 at 10:41 PM, Alexandre DERUMIER wrote: > Hi, > > the debian repository seems to miss the librbd1 package for debian jessie > > http://download.ceph.com/debian-infernalis/pool/main/c/ceph/ > > (the ubuntu trusty librbd1 is present) This is now fixed and should now be available. > > > -

Re: [ceph-users] Problem with infernalis el7 package

2015-11-10 Thread Ken Dreyer
Yeah, this was our bad. As indicated in http://tracker.ceph.com/issues/13746 , Alfredo rebuilt the CentOS 7 infernalis packages so that they don't have this dependency, re-signed them, and re-uploaded them to the same location. Please clear your yum cache (`yum clean all`) and try again. On Tue, Nov 1

Re: [ceph-users] Upgrade from Hammer to Infernalis with ceph-deploy fails on Centos7

2015-11-10 Thread Ken Dreyer
On Mon, Nov 9, 2015 at 5:20 PM, Jason Altorf wrote: > On Tue, Nov 10, 2015 at 7:34 AM, Ken Dreyer wrote: >> It is not a known problem. Mind filing a ticket @ >> http://tracker.ceph.com/ so we can track the fix for this? >> >> On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de >> wrote: >>> >>>

Re: [ceph-users] Problem with infernalis el7 package

2015-11-10 Thread Ken Dreyer
On Mon, Nov 9, 2015 at 6:03 PM, Bob R wrote: > We've got two problems trying to update our cluster to infernalis - This was our bad. As indicated in http://tracker.ceph.com/issues/13746 , Alfredo rebuilt the CentOS 7 infernalis packages, re-signed them, and re-uploaded them to the same location on do

Re: [ceph-users] Issues with CentOS RDO Liberty (OpenStack) and Ceph Repo (dependency resolution failed)

2015-11-10 Thread Ken Dreyer
On Fri, Nov 6, 2015 at 4:05 PM, c...@dolphin-it.de wrote:
> Error: Package: 1:cups-client-1.6.3-17.el7.x86_64 (core-0)
>        Requires: cups-libs(x86-64) = 1:1.6.3-17.el7
>        Installed: 1:cups-libs-1.6.3-17.el7_1.1.x86_64 (@updates)
>        cups-libs(x86-64) = 1:1.6.3-17.el

[ceph-users] rollback fail?

2015-11-10 Thread wah peng
Hello, I followed these steps trying to roll back a snapshot, but it seems to fail; the files are lost.

    root@ceph3:/mnt/ceph-block-device# rbd snap create rbd/bar@snap1
    root@ceph3:/mnt/ceph-block-device# rbd snap ls rbd/bar
    SNAPID NAME      SIZE
         2 snap1 10240 MB
    root@ceph3:/mnt/ceph-block-device
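One plausible cause (an assumption, since the excerpt is cut off): rolling back while the filesystem on the RBD image was still mounted, or snapshotting before dirty data had been flushed. The usually expected sequence looks like this (the device name /dev/rbd0 is an assumption):

    # Flush and unmount before snapshotting or rolling back
    sync
    umount /mnt/ceph-block-device
    rbd snap rollback rbd/bar@snap1
    mount /dev/rbd0 /mnt/ceph-block-device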

Re: [ceph-users] SHA1 wrt hammer release and tag v0.94.3

2015-11-10 Thread Ken Dreyer
On Fri, Oct 30, 2015 at 7:20 PM, Artie Ziff wrote: > I'm looking forward to learning... > Why the different SHA1 values in two places that reference v0.94.3? 95cefea9fd9ab740263bf8bb4796fd864d9afe2b is the commit where we bump the version number in the debian packaging. b2503b0e15c0b13f480f083506
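The two values can be compared locally against a clone of the ceph repository; a sketch (plain git, no Ceph tooling assumed):

    git clone https://github.com/ceph/ceph.git && cd ceph
    # The commit the v0.94.3 tag actually points at
    git rev-parse 'v0.94.3^{}'
    # The debian version-bump commit mentioned above
    git log -1 --oneline 95cefea9fd9ab740263bf8bb4796fd864d9afe2b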

Re: [ceph-users] No Presto metadata available for Ceph-noarch ceph-release-1-1.el7.noarch.rp FAILED

2015-11-10 Thread Ken Dreyer
On Fri, Oct 30, 2015 at 2:02 AM, Andrey Shevel wrote:
> ceph-release-1-1.el7.noarch.rp FAILED
> http://download.ceph.com/rpm-giant/el7/noarch/ceph-release-1-1.el7.noarch.rpm:
> [Errno 14] HTTP Error 404 - Not Found
> ] 0.0 B/s | 0 B --:--:-- ETA
> Trying other mirror.
>
> Error down

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-10 Thread Christian Balzer
Hello, On Tue, 10 Nov 2015 13:29:31 +0300 Mike Almateia wrote: > Hello. > > For our CCTV stream-storing project we decided to use a Ceph cluster > with an EC pool. > The input requirements are not scary: max. 15 Gbit/s of incoming traffic > from the CCTV, 30-day retention, > 99% write operations, and the cluster must ha

Re: [ceph-users] Chown in Parallel

2015-11-10 Thread Logan V.
Thanks for sharing this. I modified it slightly to stop and start the OSDs on the fly rather than having all OSDs needlessly stopped during the chown, i.e.: chown ceph:ceph /var/lib/ceph /var/lib/ceph/* && find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 -I '{}' bash -c 'echo
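Since the quoted command is cut off at the echo, here is a hypothetical reconstruction of its shape (the upstart stop/start calls assume Ubuntu trusty packaging; the exact body is an assumption, not Logan's text):

    chown ceph:ceph /var/lib/ceph /var/lib/ceph/* && \
      find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | \
      xargs -P12 -n1 -I '{}' bash -c \
        'echo {}; id=$(basename {} | cut -d- -f2); stop ceph-osd id=$id; chown -R ceph:ceph {}; start ceph-osd id=$id'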

[ceph-users] Performance issues on small cluster

2015-11-10 Thread Ben Town
Hi Guys, I'm in the process of configuring a Ceph cluster and am getting some less-than-ideal performance, and I need some help figuring it out! This cluster will only really be used as backup storage for Veeam, so I don't need a crazy amount of I/O, but good sequential writes would be ideal. At t
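Before tuning, a baseline straight from RADOS helps separate raw cluster throughput from client-side effects; a sketch, assuming a scratch pool named bench exists (pool name, runtime, and concurrency are arbitrary):

    # 60-second sequential-write test: 4 MB objects, 16 in flight
    rados bench -p bench 60 write -t 16 --no-cleanup
    # Sequential read of the objects just written
    rados bench -p bench 60 seq -t 16
    # Remove the benchmark objects
    rados -p bench cleanup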

Re: [ceph-users] Performance issues on small cluster

2015-11-10 Thread Timofey Titovets
On a small cluster I've gotten great sequential performance by using btrfs on the OSDs, a journal file (max sync interval ~180s), and the option filestore journal parallel = true. 2015-11-11 10:12 GMT+03:00 Ben Town: > Hi Guys, > > > > I'm in the process of configuring a ceph cluster and am getting some less
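The options Timofey mentions would sit in ceph.conf roughly like this (a sketch; parallel journaling is only safe on filesystems with snapshot support such as btrfs, and a 180s sync interval enlarges the window of data living only in the journal):

    [osd]
        filestore journal parallel = true
        filestore max sync interval = 180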

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-10 Thread Mike
10.11.2015 19:40, Paul Evans wrote: > Mike - unless things have changed in the latest version(s) of Ceph, I do *not* > believe CRUSH will be successful in creating a valid PG map if the 'n' value > is 10 (k+m), your host count is 6, and your failure domain is set to host. > You'll need to increas

Re: [ceph-users] Permanent MDS restarting under load

2015-11-10 Thread Oleksandr Natalenko
10.11.2015 22:38, Gregory Farnum wrote: Which requests are they? Are these MDS operations or OSD ones? Those requests appeared in the ceph -w output and are as follows: https://gist.github.com/5045336f6fb7d532138f Am I correct that there are blocked OSD operations? osd.3 is one of the data poo
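To confirm whether osd.3 really has blocked requests, its admin socket can be queried directly; a sketch, run on the node hosting osd.3:

    # Requests the OSD is working on right now
    ceph daemon osd.3 dump_ops_in_flight
    # Recently completed slow/old operations
    ceph daemon osd.3 dump_historic_ops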

Re: [ceph-users] Performance issues on small cluster

2015-11-10 Thread Christian Balzer
Hello, On Wed, 11 Nov 2015 07:12:56 + Ben Town wrote: > Hi Guys, > > I'm in the process of configuring a ceph cluster and am getting some > less than ideal performance and need some help figuring it out! > > This cluster will only really be used for backup storage for Veeam so I > don't ne