Re: [ceph-users] Ceph file system is not freeing space

2015-11-11 Thread Eric Eastman
On Wed, Nov 11, 2015 at 4:19 PM, John Spray wrote: > > Eric: for the ticket, can you also gather an MDS log (with debug mds = > 20) from the point where the MDS starts up until the point where it > has been active for a few seconds? The strays are evaluated for > purging during startup, so there
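A minimal sketch of gathering such a log, assuming a single MDS named mds.a, sysvinit-style service management, and default log paths (all of these are assumptions to adapt):

    # in ceph.conf, before restarting the daemon
    [mds]
        debug mds = 20

    # restart the MDS, let it sit active for a few seconds, then grab the log
    service ceph restart mds.a
    less /var/log/ceph/ceph-mds.a.log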

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-11 Thread Mike Axford
On 10 November 2015 at 10:29, Mike Almateia wrote: > Hello. > > For our CCTV stream-storage project we decided to use a Ceph cluster with an EC > pool. > The input requirements are not scary: max. 15 Gbit/s input traffic from CCTV, 30 > days of retention, > 99% write operations, and the cluster must be able to grow with ou

Re: [ceph-users] Ceph file system is not freeing space

2015-11-11 Thread John Spray
On Wed, Nov 11, 2015 at 10:42 PM, Gregory Farnum wrote: >> If you or someone else is interested, the whole cache file can be >> downloaded at: >> >> wget ftp://ftp.keepertech.com/outgoing/eric/dumpcache.txt.bz2 >> >> It is about 1.8 MB uncompressed. >> >> I know that snapshots are not being regul

Re: [ceph-users] Ceph file system is not freeing space

2015-11-11 Thread Gregory Farnum
On Wed, Nov 11, 2015 at 2:28 PM, Eric Eastman wrote: > On Wed, Nov 11, 2015 at 11:09 AM, John Spray wrote: >> On Wed, Nov 11, 2015 at 5:39 PM, Eric Eastman >> wrote: >>> I am trying to figure out why my Ceph file system is not freeing >>> space. Using Ceph 9.1.0 I created a file system with sna

Re: [ceph-users] Ceph file system is not freeing space

2015-11-11 Thread Eric Eastman
On Wed, Nov 11, 2015 at 11:09 AM, John Spray wrote: > On Wed, Nov 11, 2015 at 5:39 PM, Eric Eastman > wrote: >> I am trying to figure out why my Ceph file system is not freeing >> space. Using Ceph 9.1.0 I created a file system with snapshots >> enabled, filled up the file system over days while

[ceph-users] Operating System Upgrade

2015-11-11 Thread Joe Ryner
Hi Everyone, I am upgrading the nodes of my Ceph cluster from CentOS 6.6 to CentOS 7.1. My cluster is currently running Firefly 0.80.10. I would like to avoid reformatting the OSD drives and just remount them under CentOS 7.1. Is this supported? Should it work? When we upgrade to CentOS 7.1 we wi
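Not a definitive answer, but a sketch of how remounting is normally attempted: Firefly OSD data partitions are plain XFS/ext4 filesystems, so with matching Firefly packages installed on CentOS 7.1 they can usually be rediscovered and reactivated with ceph-disk (device names below are placeholders):

    # show the partitions ceph-disk recognises on the freshly installed OS
    ceph-disk list

    # reactivate a previously prepared OSD data partition
    ceph-disk activate /dev/sdb1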

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-11 Thread David Clarke
On 12/11/15 09:37, Gregory Farnum wrote: > On Wednesday, November 11, 2015, Wido den Hollander > wrote: > > On 11/10/2015 09:49 PM, Vickey Singh wrote: > > On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander > wrote: > > > >> On 11/09/2015 05:27 PM, Vicke

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-11 Thread Gregory Farnum
On Wednesday, November 11, 2015, Wido den Hollander wrote: > On 11/10/2015 09:49 PM, Vickey Singh wrote: > > On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander > wrote: > > > >> On 11/09/2015 05:27 PM, Vickey Singh wrote: > >>> Hello Ceph Geeks > >>> > >>> Need your comments with my understandin

Re: [ceph-users] Ceph file system is not freeing space

2015-11-11 Thread John Spray
On Wed, Nov 11, 2015 at 5:39 PM, Eric Eastman wrote: > I am trying to figure out why my Ceph file system is not freeing > space. Using Ceph 9.1.0 I created a file system with snapshots > enabled, filled up the file system over days while taking snapshots > hourly. I then deleted all files and al

[ceph-users] Ceph file system is not freeing space

2015-11-11 Thread Eric Eastman
I am trying to figure out why my Ceph file system is not freeing space. Using Ceph 9.1.0 I created a file system with snapshots enabled, filled up the file system over days while taking snapshots hourly. I then deleted all files and all snapshots, but Ceph is not returning the space. I left the c
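One way to see whether deleted files are actually being purged is to watch the stray counters on the MDS admin socket; a sketch, assuming an MDS named mds.a and that these perf counters exist in 9.1.0:

    # pool usage as Ceph sees it
    ceph df

    # stray (deleted but not yet purged) inode counters held by the MDS
    ceph daemon mds.a perf dump | grep -i stray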

[ceph-users] data balancing/crush map issue

2015-11-11 Thread James Eckersall
Hi, I have a Ceph cluster running 0.80.10 and I'm having problems with data balancing on two new nodes that were recently added. The cluster nodes are as follows: 6x OSD servers with 32 4TB SAS drives. The drives are configured as RAID0 in pairs, so 16 8TB OSDs per node. New ser
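Firefly has no 'ceph osd df', but the usual first checks are CRUSH weights and per-OSD utilisation; a sketch, with osd.96 and the weight 7.28 as placeholder values:

    # confirm the new OSDs received sensible CRUSH weights (roughly their size in TB)
    ceph osd tree

    # per-OSD utilisation statistics
    ceph pg dump osds

    # adjust an over- or under-weighted OSD if needed
    ceph osd crush reweight osd.96 7.28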

Re: [ceph-users] cephfs: Client hp-s3-r4-compute failing to respond to capability release

2015-11-11 Thread Burkhard Linke
Hi, On 11/10/2015 09:20 PM, Gregory Farnum wrote: Can you dump the metadata ops in flight on each ceph-fuse when it hangs? ceph daemon mds_requests Current state: host A and host B blocked, both running ceph-fuse 0.94.5 (trusty package) hostA mds_requests (client id 1265369): { "reque
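For reference, a sketch of dumping the in-flight requests from both sides; the client socket path is a placeholder that depends on the admin_socket setting and the process PID, and dump_ops_in_flight on the MDS is assumed to be available in this release:

    # on the client host: ceph-fuse admin socket
    ceph daemon /var/run/ceph/ceph-client.admin.12345.asok mds_requests

    # on the MDS host: operations the MDS currently has in flight
    ceph daemon mds.a dump_ops_in_flight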

Re: [ceph-users] raid0 and ceph?

2015-11-11 Thread John Spray
On Wed, Nov 11, 2015 at 9:54 AM, Marius Vaitiekunas wrote: > Hi, > > We use firefly 0.80.9. > > We have some ceph nodes in our cluster configured to use raid0. The node > configuration looks like this: > > 2xHDD - RAID1 - /dev/sda - OS > 1xSSD - RAID0 - /dev/sdb - ceph journaling disk, usually

[ceph-users] Radosgw broken files

2015-11-11 Thread George Mihaiescu
Hi, I have created a bug report for an issue affecting our Ceph Hammer environment, and I was wondering if anybody has some input on what can we do to troubleshoot/fix it: http://tracker.ceph.com/issues/13764 Thank you, George ___ ceph-users mailing li

[ceph-users] Number of buckets per user

2015-11-11 Thread Daniel Schneller
Hi! Maybe I am missing something obvious, but is there no way to quickly tell how many buckets an RGW user has? I can see the max_buckets limit in radosgw-admin user info --uid=x, but nothing about how much of that limit has been used. To be clear: I do not care what they are called, or what
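There does not appear to be a dedicated counter, but a sketch of a workaround, assuming a uid of 'x': 'radosgw-admin bucket list --uid=...' returns a JSON array of the user's bucket names, which can simply be counted:

    radosgw-admin bucket list --uid=x | python -c 'import json,sys; print(len(json.load(sys.stdin)))'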

[ceph-users] Federated gateways sync error - Too many open files

2015-11-11 Thread WD_Hwang
Hi, I am testing federated gateway synchronization with one region and two zones. After several files had synchronized successfully, I found error messages in the log file. They report 'Too many open files' when connecting to the secondary zone. I have modified the parameters of sy
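A sketch of the usual mitigation, raising the file-descriptor limit for the radosgw process; the values are examples and the ceph.conf option is an assumption to verify against your init scripts:

    # /etc/security/limits.conf
    root  soft  nofile  65536
    root  hard  nofile  65536

    # ceph.conf, honoured by the sysvinit/upstart wrappers
    [global]
        max open files = 65536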

[ceph-users] raid0 and ceph?

2015-11-11 Thread Marius Vaitiekunas
Hi, We use firefly 0.80.9. We have some ceph nodes in our cluster configured to use raid0. The node configuration looks like this: 2xHDD - RAID1 - /dev/sda - OS 1xSSD - RAID0 - /dev/sdb - ceph journaling disk, usually one for four data disks 1xHDD - RAID0 - /dev/sdc - ceph data disk 1xHDD

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-11 Thread Mike
11.11.2015 06:14, Christian Balzer wrote: > > Hello, > > On Tue, 10 Nov 2015 13:29:31 +0300 Mike Almateia wrote: > >> Hello. >> >> For our CCTV stream-storage project we decided to use a Ceph cluster with >> an EC pool. >> The input requirements are not scary: max. 15 Gbit/s input traffic from CCTV, >>

[ceph-users] Usage not spread equally across the two storage hosts

2015-11-11 Thread Dimitar Boichev
Hello all, First-time poster to ceph-users here. :) Ceph version is: ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3) Can someone tell me, when I have this replication rule: ruleset 0 type replicated min_size 1 max_size 10 step take default
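The preview cuts the rule off; for context, a complete replicated rule of this shape typically looks like the generic example below (the chooseleaf step decides whether replicas spread across hosts or OSDs), not necessarily the poster's exact rule:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }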

Re: [ceph-users] Problem with infernalis el7 package

2015-11-11 Thread Stijn De Weirdt
Did you recreate the rpms with the same version/release? It would be better to build new rpms with a different release (e.g. 9.2.0-1). We have snapshotted mirrors and nginx caches between the Ceph yum repo and the nodes that install the rpms, so cleaning the cache locally will not help. Stijn On 11/11/2015

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-11 Thread Wido den Hollander
On 11/10/2015 09:49 PM, Vickey Singh wrote: > On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander wrote: > >> On 11/09/2015 05:27 PM, Vickey Singh wrote: >>> Hello Ceph Geeks >>> >>> Need your comments with my understanding on straw2. >>> >>>- Is Straw2 better than straw ? >> >> It is not pers
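For reference, a sketch of switching existing buckets from straw to straw2 on Hammer by editing the decompiled CRUSH map; all clients must understand straw2 before the new map is injected, and some data movement is expected:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # change 'alg straw' to 'alg straw2' in the buckets you want to convert
    sed -i 's/alg straw$/alg straw2/' crushmap.txt
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new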

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-11 Thread Mike
10.11.2015 19:40, Paul Evans wrote: > Mike - unless things have changed in the latest version(s) of Ceph, I do *not* > believe CRUSH will be successful in creating a valid PG map if the 'n' value > is 10 (k+m), your host count is 6, and your failure domain is set to host. > You'll need to increas
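A sketch of the two usual workarounds, using the Hammer-era option name ruleset-failure-domain as an assumption (profile and pool names are placeholders); note that an EC pool's profile cannot be changed after creation:

    # option 1: shrink k+m so it fits the 6 hosts
    ceph osd erasure-code-profile set cctv_k4m2 k=4 m=2 ruleset-failure-domain=host

    # option 2: keep k=8,m=2 but place chunks per OSD, accepting several chunks per host
    ceph osd erasure-code-profile set cctv_k8m2 k=8 m=2 ruleset-failure-domain=osd

    ceph osd pool create cctv 2048 2048 erasure cctv_k8m2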