Re: [ceph-users] [cephfs][ceph-fuse] cache size or memory leak?

2015-04-29 Thread Dexter Xiong
The output of the status command of the fuse daemon: "dentry_count": 128966, "dentry_pinned_count": 128965, "inode_count": 409696. I saw that the pinned dentry count is nearly the same as the dentry count, so I enabled the debug log (debug client = 20/20) and read the Client.cc source code in general. I found that an entry will not b
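The counters above come from the ceph-fuse admin socket; a minimal sketch of pulling them and raising the client log level at runtime, assuming the client admin socket is enabled (the socket path below is a placeholder):

ceph daemon /var/run/ceph/ceph-client.admin.asok status                    # dump dentry_count, inode_count, ...
ceph daemon /var/run/ceph/ceph-client.admin.asok config set debug_client 20/20   # raise client debug logging without remounting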

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Dominik Hannen > Sent: 29 April 2015 00:30 > To: Nick Fisk > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Cost- and Powerefficient OSD-Nodes > > > It's all about the total lat

Re: [ceph-users] Ceph is Full

2015-04-29 Thread Sebastien Han
With mon_osd_full_ratio you should restart the monitors, and this shouldn't be a problem. For the unclean PGs, it looks like something is preventing them from becoming healthy; look at the state of the OSDs responsible for these 2 PGs. > On 29 Apr 2015, at 05:06, Ray Sun wrote: > > mon osd full ratio Chee
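A quick sketch for inspecting the stuck PGs mentioned above (the PG id is a placeholder):

ceph health detail           # lists the unclean PGs and the ratios currently in effect
ceph pg dump_stuck unclean   # shows which OSDs each stuck PG maps to
ceph pg <pgid> query         # per-PG view of what is blocking recovery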

[ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-29 Thread Wido den Hollander
Hi, While working with some CentOS machines I found out that Libvirt currently is not built with RBD storage pool support. While that support has been upstream for a very long time and enabled in Ubuntu as well, I was wondering if anybody knew why it isn't enabled on CentOS? Under CentOS 7.1 my l
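For context, an RBD storage pool is defined in libvirt with XML roughly like the following (pool name, monitor host and secret UUID are placeholders); on a libvirt build without RBD pool support, defining such a pool fails:

cat > rbd-pool.xml <<'EOF'
<pool type="rbd">
  <name>cephpool</name>
  <source>
    <name>rbd</name>
    <host name="mon1.example.com" port="6789"/>
    <auth username="libvirt" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>
EOF
virsh pool-define rbd-pool.xml && virsh pool-start cephpool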

[ceph-users] Cache Pool PG Split

2015-04-29 Thread Nick Fisk
Hi All, When trying to increase the number of PG's of a cache pool I get the warning message about running a scrub afterwards and being careful about not overfilling the pool. I've also looked at this issue to better understand the underlying cause: http://tracker.ceph.com/issues/8043 Am I best t
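For reference, a hedged sketch of the split itself (pool name and target count are placeholders); the scrub the warning refers to can then be driven per PG:

ceph osd pool set hot-cache pg_num 256    # create the new PGs
ceph osd pool set hot-cache pgp_num 256   # let placement follow the split
ceph pg scrub <pgid>                      # scrub affected PGs afterwards, as the warning suggests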

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Dominik Hannen
----- Original Message ----- > From: "Nick Fisk" > To: "Dominik Hannen" > Cc: ceph-users@lists.ceph.com > Sent: Wednesday, 29 April 2015 11:32:18 > Subject: RE: Cost- and Powerefficient OSD-Nodes >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On

[ceph-users] A pesky unfound object

2015-04-29 Thread Eino Tuominen
Hello, A routine reboot of one of the osd servers resulted in one unfound object. Following the documentation on unfound objects I have run ceph pg 5.306 mark_unfound_lost delete But, I've still got: # ceph health detail HEALTH_WARN recovery 1/2661869 unfound (0.000%) recovery 1/2661869 unfoun
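A hedged sketch for digging further into where the cluster still expects to find the object (the PG id is the one from the message above):

ceph health detail          # confirms which PG still reports the unfound object
ceph pg 5.306 query         # recovery_state shows the OSDs in might_have_unfound
ceph pg 5.306 list_missing  # enumerates the missing/unfound objects themselves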

[ceph-users] Change osd nearfull and full ratio of a running cluster

2015-04-29 Thread Stefan Priebe - Profihost AG
Hi, how can i change the osd full and osd nearfull ratio of a running cluster? Just setting: mon osd full ratio = .97 mon osd nearfull ratio = .92 has no effect. Stefan ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists

Re: [ceph-users] Use object-map Feature on existing rbd images ?

2015-04-29 Thread Jason Dillaman
Unfortunately, you won't be able to use the new "rbd feature enable" command against a Hammer OSD since the command requires support within the RBD object class in the OSD. Additionally, since your images haven't had the object map enabled, they would need to have an object map built prior to t
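For completeness, against OSDs that do carry the newer object-class support the commands under discussion look roughly like this (pool/image names are placeholders; as noted above, this does not work with Hammer OSDs):

rbd feature enable rbd/myimage exclusive-lock object-map fast-diff
rbd object-map rebuild rbd/myimage   # build the object map for a pre-existing image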

Re: [ceph-users] [cephfs][ceph-fuse] cache size or memory leak?

2015-04-29 Thread John Spray
On 29/04/2015 09:33, Dexter Xiong wrote: The output of status command of fuse daemon: "dentry_count": 128966, "dentry_pinned_count": 128965, "inode_count": 409696, I saw the pinned dentry is nearly the same as dentry. So I enabled debug log(debug client = 20/20) and read Client.cc source code

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-29 Thread Robert LeBlanc
We have had to build our own QEMU. On Wed, Apr 29, 2015 at 4:34 AM, Wido den Hollander wrote: > Hi, > > While working with some CentOS machines I found out that Libvirt > currently is not build with RBD storage pool support. > > While that support has been upstream for a very long time and enabl

Re: [ceph-users] Change osd nearfull and full ratio of a running cluster

2015-04-29 Thread Robert LeBlanc
ceph tell mon.* injectargs "--mon_osd_full_ratio .97" ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .92" On Wed, Apr 29, 2015 at 7:38 AM, Stefan Priebe - Profihost AG < s.pri...@profihost.ag> wrote: > Hi, > > how can i change the osd full and osd nearfull ratio of a running cluster? > > Ju
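injectargs only changes the running monitors; to survive restarts the values would also go into ceph.conf as shown in the original question. In releases of that era the ratios actually enforced live in the PG map, so the following (a sketch, verify against your release) is usually needed as well:

ceph pg set_full_ratio 0.97
ceph pg set_nearfull_ratio 0.92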

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-29 Thread Wido den Hollander
On 04/29/2015 03:45 PM, Robert LeBlanc wrote: > We have had to build our own QEMU. > No, under CentOS 7.1 that's not the problem, but it's just libvirt which doesn't have RBD storage pool support enabled. I wrote a quick blog about it: http://blog.widodh.nl/2015/04/rebuilding-libvirt-under-cento

Re: [ceph-users] [cephfs][ceph-fuse] cache size or memory leak?

2015-04-29 Thread Gregory Farnum
On Wed, Apr 29, 2015 at 1:33 AM, Dexter Xiong wrote: > The output of status command of fuse daemon: > "dentry_count": 128966, > "dentry_pinned_count": 128965, > "inode_count": 409696, > I saw the pinned dentry is nearly the same as dentry. > So I enabled debug log(debug client = 20/20) and read

[ceph-users] can't delete buckets in radosgw after i recreated the radosgw pools

2015-04-29 Thread Makkelie, R (ITCDCC) - KLM
I first had a major disaster: 12 incomplete PGs that couldn't be fixed (due to several hard disk failures at once). All these incomplete PGs were in the ".rgw" and ".rgw.buckets" pools, so the only option I could think of was to take my losses and delete and recreate those pools. the
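A hedged sketch of the kind of RGW metadata cleanup usually involved once the index pools are gone (bucket and user names are placeholders; exact flags vary by release):

radosgw-admin bucket list                                   # what RGW still thinks exists
radosgw-admin bucket stats --bucket=mybucket                # does the bucket index still resolve?
radosgw-admin bucket unlink --bucket=mybucket --uid=myuser  # detach the bucket from its owner
radosgw-admin metadata rm bucket:mybucket                   # drop the dangling bucket metadata entry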

Re: [ceph-users] Upgrade from Giant to Hammer and after some basic operations most of the OSD's went down

2015-04-29 Thread Sage Weil
On Wed, 29 Apr 2015, Tuomas Juntunen wrote: > Hi > > I updated that version and it seems that something did happen, the osd's > stayed up for a while and 'ceph status' got updated. But then in couple of > minutes, they all went down the same way. > > I have attached new 'ceph osd dump -f json-pre

[ceph-users] recommended version for Debian Jessie

2015-04-29 Thread Fabrice Aeschbacher
Hi, We plan to use Ceph in production on Debian Jessie. There are two possibilities: - use the packages from Debian archive (=> 0.80.7) - use the packages from Ceph archive (http://ceph.com/debian-firefly) Basically, I would rather tend to prefer using only the Debian archive. What is your exper
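If the ceph.com archive is chosen, the setup documented at the time was roughly the following (assuming firefly builds for jessie are actually published there; otherwise the Debian packages remain the option):

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key add -
echo "deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph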

[ceph-users] unsubscribe ceph-users

2015-04-29 Thread Harald Rößler
unsubscribe ceph-users? ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Scott Laird
FWIW, I tried using some 256G MX100s with ceph and had horrible performance issues within a month or two. I was seeing 100% utilization with high latency but only 20 MB/s writes. I had a number of S3500s in the same pool that were dramatically better. Which is to say that they were actually fast

Re: [ceph-users] Cache Pool PG Split

2015-04-29 Thread Sage Weil
On Wed, 29 Apr 2015, Nick Fisk wrote: > Hi All, > > When trying to increase the number of PG's of a cache pool I get the warning > message about running a scrub afterwards and being careful about not > overfilling the pool. I've also looked at this issue to better understand > the underlying cause

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Dominik Hannen
> FWIW, I tried using some 256G MX100s with ceph and had horrible performance > issues within a month or two. I was seeing 100% utilization with high > latency but only 20 MB/s writes. I had a number of S3500s in the same pool > that were dramatically better. Which is to say that they were actua

Re: [ceph-users] Cannot remove cache pool used by CephFS

2015-04-29 Thread John Spray
On 29/04/2015 03:56, CY Chang wrote: I set up a cache pool for data pool used in CephFS. When I tried to remove the cache pool, I got this error: pool 'XXX' is in use by CephFS via its tier. So, my question is: why is it forbidden to remove tiers from a base pool in use by CephFS? How about the
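For reference, once CephFS no longer depends on the tier, the usual teardown order is roughly (pool names are placeholders):

ceph osd tier cache-mode cachepool forward   # stop absorbing new writes into the cache
rados -p cachepool cache-flush-evict-all     # flush and evict everything to the base pool
ceph osd tier remove-overlay basepool        # detach the overlay from the base pool
ceph osd tier remove basepool cachepool      # finally drop the tier relationship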

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Lionel Bouton
Hi Dominik, On 04/29/15 19:06, Dominik Hannen wrote: > I had planned to use at maximum 80GB of the available 250GB. > 1 x 16GB OS > 4 x 8, 12 or 16GB partitions for osd-journals. > > For a total SSD Usage of 19.2%, 25.6% or 32% > and over-provisioning of 80.8%, 74.4% or 68%. > > I am relatively ce

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-29 Thread Robert LeBlanc
The only way I know of to actually extend the reserved space is using the method described here: https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm On Wed, Apr 29, 2015 at 12:12 PM, Lionel Bouton wrote: > Hi Dominik, > > On 04/29/15 19:06, Dominik Hannen wrote: >> I had plann
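The hdparm method referenced above caps the drive's visible LBA range (a Host Protected Area), leaving the rest as extra spare area. A sketch, with device and sector count as placeholders; check the linked wiki before running, since it changes the reported capacity until reset:

hdparm -N /dev/sdb                                          # show current vs. native max sectors
hdparm -Np390721968 --yes-i-know-what-i-am-doing /dev/sdb   # cap the visible sectors (example value)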

Re: [ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-29 Thread Anthony Levesque
We redid the test with 4MB Block Size (using the same command as before but with 4MB for the BS) and we are getting better result from all devices: Intel DC S3500 120GB = 148 MB/s Samsung Pro 128GB = 187 MB/s Intel 520 120GB = 154 MB/s Samsung EVO 1TB =
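The command referred to above is presumably the usual O_DIRECT/dsync journal-style dd test, something along these lines (device and count are placeholders, and this is an assumption about the exact invocation):

dd if=/dev/zero of=/dev/sdX bs=4M count=1000 oflag=direct,dsync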

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-29 Thread Wido den Hollander
On 04/29/2015 12:34 PM, Wido den Hollander wrote: > Hi, > > While working with some CentOS machines I found out that Libvirt > currently is not build with RBD storage pool support. > > While that support has been upstream for a very long time and enabled in > Ubuntu as well I was wondering if any

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-29 Thread Somnath Roy
Wido, Is this true for RHEL as well then ? Thanks & Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido den Hollander Sent: Wednesday, April 29, 2015 12:22 PM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] RBD storage

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-29 Thread Wido den Hollander
On 04/29/2015 09:24 PM, Somnath Roy wrote: > Wido, > Is this true for RHEL as well then ? > I think so. The spec file says FC only for RBD: %if 0%{?fedora} >= 16 %define with_storage_rbd While for gluster for example: %if 0%{?fedora} >= 19 || 0%{?rhel} >= 6 %define with_storage_gluster
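Following the spec-file logic quoted above, enabling the RBD pool driver on EL7 roughly means widening that conditional (mirroring the gluster one) and rebuilding the package; a sketch, not the exact recipe from the blog post:

yumdownloader --source libvirt   # fetch the SRPM (yum-utils)
rpm -ivh libvirt-*.src.rpm       # unpack into ~/rpmbuild
# edit ~/rpmbuild/SPECS/libvirt.spec so the with_storage_rbd define also
# applies when 0%{?rhel} >= 7, then:
rpmbuild -ba ~/rpmbuild/SPECS/libvirt.spec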

Re: [ceph-users] can't delete buckets in radosgw after i recreated the radosgw pools

2015-04-29 Thread Colin Corr
On 04/29/2015 07:55 AM, Makkelie, R (ITCDCC) - KLM wrote: > i first had some major disaster i had 12 incomplete pgs that couldn't be > fixed. (due to several harddisk failures at once) > alls these incomplete pgs where all in the ".rgw" and ".rgw.buckets" pools > > so the only option i could th

[ceph-users] Kicking 'Remapped' PGs

2015-04-29 Thread Paul Evans
In one of our clusters we sometimes end up with PGs that are mapped incorrectly and settle into a ‘remapped’ state (forever). Is there a way to nudge a specific PG to recalculate placement and relocate the data? One option that we’re *dangerously* unclear about is the use of ceph pg force_crea
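Before anything as drastic as force_create, a hedged way to compare where a remapped PG currently sits with where CRUSH wants it (the PG id is a placeholder):

ceph pg map 3.14    # prints the up set and acting set for the PG
ceph pg 3.14 query  # full peering state, including why it stays remapped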

[ceph-users] basic questions about Ceph

2015-04-29 Thread Liu, Ming (HPIT-GADSC)
Hello, I have a dumb question about Ceph; I hope someone can help me. I learned that at the bottom layer there is RADOS, which is an object storage layer. So to me it is basically an interface through which one can save a key/value object with each write operation. And each object will finally map to
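The simplest way to see that behaviour from the command line is the rados tool, which uses librados underneath (pool and object names are placeholders):

rados -p testpool put myobject ./somefile   # store an object: name -> data
rados -p testpool get myobject /tmp/out     # read it back
ceph osd map testpool myobject              # shows the PG and OSDs the object maps to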

Re: [ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-29 Thread Christian Balzer
Hello, On Wed, 29 Apr 2015 15:01:49 -0400 Anthony Levesque wrote: > We redid the test with 4MB Block Size (using the same command as before > but with 4MB for the BS) and we are getting better result from all > devices: > That's to be expected of course. > Intel DC S3500 120GB =

Re: [ceph-users] Upgrade from Giant to Hammer and after some basic operations most of the OSD's went down

2015-04-29 Thread tuomas . juntunen
Hey Yes I can drop the images data, you think this will fix it? Br, Tuomas > On Wed, 29 Apr 2015, Tuomas Juntunen wrote: >> Hi >> >> I updated that version and it seems that something did happen, the osd's >> stayed up for a while and 'ceph status' got updated. But then in couple of >> minutes

Re: [ceph-users] basic questions about Ceph

2015-04-29 Thread Liu, Ming (HPIT-GADSC)
Hello again. By 'write an object', I mean invoking librados directly, not via RBD, the gateway, or CephFS. I want to understand how RADOS handles objects. Thanks, Paul, for reminding me to refine my question. Thanks, Ming From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of L

[ceph-users] about rgw region sync

2015-04-29 Thread TERRY
hi: I am using the following script to set up my cluster. I upgraded my radosgw-agent from version 1.2.0 to 1.2.2-1 (1.2.0 results in an error!). cat repeat.sh #!/bin/bash set -e set -x #1 create pools sudo ./create_pools.sh #2 create a keyring sudo ceph-authtool --create-keyring /etc/

[ceph-users] Re: about rgw region and zone

2015-04-29 Thread TERRY
------------------ Original message ------------------ From: "316828252" <316828...@qq.com>; Sent: 29 April 2015, 9:21; To: "Karan Singh"; Subject: Re: [ceph-users] about rgw region and zone. The detail information I get is as follows: 2015-04-29T16:17:55.090 32311:INFO:radosg

[ceph-users] Re: about rgw region and zone

2015-04-29 Thread TERRY
------------------ Original message ------------------ From: "316828252" <316828...@qq.com>; Sent: 29 April 2015, 3:45; To: "Karan Singh"; Subject: Re: [ceph-users] about rgw region and zone. I built the environment by executing the following bash script: #!/bin/bash set -e

[ceph-users] Re: about rgw region and zone

2015-04-29 Thread TERRY
------------------ Original message ------------------ From: "316828252" <316828...@qq.com>; Sent: 28 April 2015, 3:32; To: "Karan Singh"; Subject: Re: [ceph-users] about rgw region and zone. Hi Karan Singh, first of all thank you so much for replying and givi

Re: [ceph-users] [cephfs][ceph-fuse] cache size or memory leak?

2015-04-29 Thread Yan, Zheng
On Wed, Apr 29, 2015 at 4:33 PM, Dexter Xiong wrote: > The output of status command of fuse daemon: > "dentry_count": 128966, > "dentry_pinned_count": 128965, > "inode_count": 409696, > I saw the pinned dentry is nearly the same as dentry. > So I enabled debug log(debug client = 20/20) and read

[ceph-users] Can not access the Ceph's main page ceph.com intermittently

2015-04-29 Thread 黄文俊
Hi Sage, this is Wenjun Huang from Beijing, China. I found that I cannot access Ceph's main site ceph.com intermittently. The issue looks strange: sometimes I can access the site normally, but then I cannot access it a few seconds later (the site does not respond for a ver