[ceph-users] question about RGW

2014-09-09 Thread baijia...@126.com
When I read the RGW code, I can't understand master_ver inside struct rgw_bucket_dir_header. Can someone explain this struct, especially master_ver and stats? Thanks, baijia...@126.com

Re: [ceph-users] SSD journal deployment experiences

2014-09-09 Thread Christian Balzer
On Tue, 9 Sep 2014 10:57:26 -0700 Craig Lewis wrote: > On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer wrote: > > > On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote: > > > > > Backing up slightly, have you considered RAID 5 over your SSDs? > > > Practically speaking, there's no performance

[ceph-users] Ceph-deploy bug; CentOS 7, Firefly

2014-09-09 Thread Piers Dawson-Damer
Ceph-deploy wants: ceph-release-1-0.el7.noarch.rpm, but the contents of ceph.com/rpm-firefly/el7/noarch only include the file ceph-release-1-0.el7.centos.noarch.rpm. Piers [stor][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.noarch.rpm [stor][
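A possible workaround (untested sketch; assumes the firefly el7 repo layout above and the host "stor" from the log): install the .el7.centos release RPM by hand, then keep ceph-deploy from rewriting the repo files.

  rpm -Uvh http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.centos.noarch.rpm
  ceph-deploy install --no-adjust-repos stor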

Re: [ceph-users] Problem with customized crush rule for EC pool

2014-09-09 Thread Lei Dong
Yes, my goal is to make sure that losing 3 OSDs does not lose data. My 6 racks may not be in different rooms, but they use 6 different switches, so I want my data to still be accessible when any switch is down or unreachable. I think it's not an unrealistic requirement. Thanks! LeiDong. On 9/9/14, 10:

[ceph-users] Best practices on Filesystem recovery on RBD block volume?

2014-09-09 Thread Keith Phua
Dear ceph-users, Recently we encountered an XFS filesystem corruption on a NAS box. After repairing the filesystem, we discovered the files were gone. This triggered some questions regarding filesystems on RBD block devices, which I hope the community can enlighten me on. 1. If a local filesyst

Re: [ceph-users] osd unexpected error by leveldb

2014-09-09 Thread Haomai Wang
Please show your Ceph version. There are some known bugs in Firefly. On Fri, Sep 5, 2014 at 9:12 AM, derek <908429...@qq.com> wrote: > Dear CEPH , > Urgent question, I met a "FAILED assert(0 == "unexpected error")" > yesterday > , Now I have no way to start these OSDs > I have attach

Re: [ceph-users] CephFS roadmap (was Re: NAS on RBD)

2014-09-09 Thread Blair Bethwaite
Hi Sage, Thanks for weighing into this directly and allaying some concerns. It would be good to get a better understanding about where the rough edges are - if deployers have some knowledge of those then they can be worked around to some extent. E.g., for our use-case it may be that whilst Inktan

Re: [ceph-users] ceph data consistency

2014-09-09 Thread Chen, Xiaoxi
Yes, but usually a system has several layers of error detection/recovery at different granularities. Disk CRC works at the sector level, Ceph CRC mostly works at the object level, and we also have replication/erasure coding at the system level. The CRC in Ceph mainly handles this case: imagine you have a

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Quenten Grasso
We have been using the NFS/Pacemaker/RBD method for a while; this explains it a bit better: http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/ PS: Thanks Sebastien. Our use case is VMware storage, so as I mentioned we've been running it for some time and we've had pretty mixed results. P
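For readers new to that setup, the non-HA core of it looks roughly like this (a sketch only; image name, size, mount point and export network are invented, and the Pacemaker resources from the blog post are omitted):

  rbd create nfs01 --size 102400                    # 100 GB image in the default rbd pool
  rbd map nfs01                                     # appears as /dev/rbd/rbd/nfs01
  mkfs.xfs /dev/rbd/rbd/nfs01
  mount /dev/rbd/rbd/nfs01 /srv/nfs01
  echo "/srv/nfs01 10.0.0.0/24(rw,no_root_squash,no_subtree_check)" >> /etc/exports
  exportfs -ra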

[ceph-users] OpTracker optimization

2014-09-09 Thread Somnath Roy
Hi Sam/Sage, As we discussed earlier, enabling the present OpTracker code degrades performance severely. For example, in my setup a single OSD node with 10 clients reaches ~103K read IOPS with IO served from memory while op tracking is disabled, but enabling the OpTracker reduces it to ~39K io
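For anyone who wants to reproduce the comparison, the switch meant here is the following (check the default in your version; dump_historic_ops output is lost while it is off):

  [osd]
  osd enable op tracker = false    # disable per-op tracking for benchmarking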

Re: [ceph-users] max_bucket limit -- safe to disable?

2014-09-09 Thread Gregory Farnum
On Tue, Sep 9, 2014 at 9:11 AM, Daniel Schneller wrote: > Hi list! > > Under > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-September/033670.html > I found a situation not unlike ours, but unfortunately either > the list archive fails me or the discussion ended without a > conclusion,

Re: [ceph-users] Remaped osd at remote restart

2014-09-09 Thread Gregory Farnum
On Mon, Sep 8, 2014 at 6:33 AM, Eduard Kormann wrote: > Hello, > > have I missed something or is it a feature: When I restart a osd on the > belonging server so it restarts normally: > > root@cephosd10:~# service ceph restart osd.76 > === osd.76 === > === osd.76 === > Stopping Ceph osd.76 on cepho

Re: [ceph-users] Ceph + Postfix/Zimbra

2014-09-09 Thread Patrick McGarry
Hey Oscar, Sorry for the delay on this, it looks like my reply got stuck in the outbox. I am moving this over to Ceph-User for discussion as the community will probably have more experience and opinions to offer than just the couple of us community guys. Let me know if you don't get what you nee

Re: [ceph-users] SSD journal deployment experiences

2014-09-09 Thread Craig Lewis
On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer wrote: > On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote: > > > Backing up slightly, have you considered RAID 5 over your SSDs? > > Practically speaking, there's no performance downside to RAID 5 when > > your devices aren't IOPS-bound. > > >

Re: [ceph-users] SSD journal deployment experiences

2014-09-09 Thread Craig Lewis
On Sat, Sep 6, 2014 at 7:50 AM, Dan van der Ster wrote: > > BTW, do you happen to know, _if_ we re-use an OSD after the journal has > failed, are any object inconsistencies going to be found by a > scrub/deep-scrub? > I haven't tested this, but I did something I *think* is similar. I deleted an
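A sketch of how one could force that check after putting the OSD back in (standard commands, placeholder IDs):

  ceph osd deep-scrub <osd-id>     # deep-scrub every PG with a replica on that OSD
  ceph pg deep-scrub <pgid>        # or target a single PG
  ceph pg repair <pgid>            # if a scrub reports inconsistent objects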

Re: [ceph-users] ceph data consistency

2014-09-09 Thread Sage Weil
On Thu, 4 Sep 2014, 池信泽 wrote: > > hi, guys: > > when I read the filestore.cc, I find the ceph use crc the check the data. > Why should check the data? > > In my knowledge, the disk has error-correcting code (ECC) for each > sector. Looking at wiki: http://en.wikipedia.org/wiki/Disk_s

[ceph-users] CephFS roadmap (was Re: NAS on RBD)

2014-09-09 Thread Sage Weil
On Tue, 9 Sep 2014, Blair Bethwaite wrote: > > Personally, I think you're very brave to consider running 2PB of ZoL > > on RBD. If I were you I would seriously evaluate the CephFS option. It > > used to be on the roadmap for ICE 2.0 coming out this fall, though I > > noticed it's not there anymor

Re: [ceph-users] Ceph Filesystem - Production?

2014-09-09 Thread James Devine
The issue isn't so much mounting the ceph client as it is the mounted ceph client becoming unusable requiring a remount. So far so good though. On Fri, Sep 5, 2014 at 5:53 PM, JIten Shah wrote: > We ran into the same issue where we could not mount the filesystem on the > clients because it had

Re: [ceph-users] ceph data consistency

2014-09-09 Thread Christian Balzer
On Thu, 4 Sep 2014 16:31:12 +0800 池信泽 wrote: > hi, everyone: > > when I read the filestore.cc, I find the ceph use crc the check the > data. Why should check the data? > It should do even more, it should also do checksums for all replicas: > In my knowledge, the disk has error-correcting c
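As an aside, FileStore does have an optional write-path checksum intended for testing; a sketch of the settings (off by default, and distinct from scrub-time checking):

  [osd]
  filestore sloppy crc = true              # record CRCs for written extents
  filestore sloppy crc block size = 65536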

[ceph-users] ceph data consistency

2014-09-09 Thread 池信泽
hi, guys: when I read filestore.cc, I find that Ceph uses CRC to check the data. Why should it check the data? To my knowledge, the disk has an error-correcting code (ECC) for each sector. Looking at the wiki: http://en.wikipedia.org/wiki/Disk_sector, "In disk drives, each physical sector is

[ceph-users] Remaped osd at remote restart

2014-09-09 Thread Eduard Kormann
Hello, have I missed something or is it a feature? When I restart an OSD on the server it belongs to, it restarts normally: root@cephosd10:~# service ceph restart osd.76 === osd.76 === === osd.76 === Stopping Ceph osd.76 on cephosd10...kill 799176...done === osd.76 === create-or-move updating ite
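If it is the create-or-move on startup that relocates the OSD, the usual knob (assuming your CRUSH map already has the OSDs where you want them) is:

  [osd]
  osd crush update on start = false   # init script no longer updates the OSD's CRUSH location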

[ceph-users] max_bucket limit -- safe to disable?

2014-09-09 Thread Daniel Schneller
Hi list! Under http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-September/033670.html I found a situation not unlike ours, but unfortunately either the list archive fails me or the discussion ended without a conclusion, so I dare to ask again :) We currently have a setup of 4 servers
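For reference, the limit can also be raised per user instead of disabled globally; a sketch with a placeholder uid:

  radosgw-admin user modify --uid=<uid> --max-buckets=10000
  radosgw-admin user info --uid=<uid>      # verify the new "max_buckets" value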

Re: [ceph-users] [ceph-calamari] RFC: A preliminary Chinese version of Calamari

2014-09-09 Thread Gregory Meno
Li Wang, Thank you for doing this! Would you please change this so that Calamari will display the correct locale? As it exists, your branch will not display an all-English version. To merge upstream I would expect this to work so that both en and zh_CN versions could work. I made a branch that

[ceph-users] number of PGs

2014-09-09 Thread Luis Periquito
I was reading on the number of PGs we should have for a cluster, and I found the formula to place 100 PGs in each OSD ( http://ceph.com/docs/master/rados/operations/placement-groups/). Now this formula has generated some discussion as to how many PGs we should have in each pool. Currently our mai
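As a worked example of that formula (numbers invented for illustration):

  #  (100 OSDs * 100) / 3 replicas = 3333  -> round up to a power of two = 4096 PGs
  ceph osd pool create <pool-name> 4096 4096     # pg_num and pgp_num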

[ceph-users] ceph data consistency

2014-09-09 Thread 池信泽
hi, everyone: when I read filestore.cc, I find that Ceph uses CRC to check the data. Why should it check the data? To my knowledge, the disk has an error-correcting code (ECC) for each sector. Looking at the wiki: http://en.wikipedia.org/wiki/Dis

Re: [ceph-users] [Ceph-community] ceph replication and striping

2014-09-09 Thread m.channappa.negalur
Hello Aaron, Thanks for your answers!! If my understanding is correct, then by default Ceph supports data replication and striping, and striping doesn't require any separate configuration. Please correct me if I am wrong. From: Aaron Ten Clay [mailto:aaro...@aarontc.com] Sent: Wednesda
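Roughly, yes: replication is a per-pool setting and RBD stripes across objects by default, with optional tuning at image creation. A sketch (values invented):

  ceph osd pool set rbd size 3                         # replica count for the pool
  rbd create img1 --pool rbd --size 10240 --image-format 2 \
      --stripe-unit 65536 --stripe-count 16            # optional non-default striping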

[ceph-users] one stuck pg

2014-09-09 Thread Erwin Lubbers
Hi, My cluster has one stuck PG which seems to have been backfilling for days now. Any suggestions on how to solve it? HEALTH_WARN 1 pgs backfilling; 1 pgs stuck unclean; recovery 32/5989217 degraded (0.001%) pg 206.3f is stuck unclean for 294420.424122, current state active+remapped+backfillin
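A few standard things to look at first (sketch; the PG id is the one from the health output):

  ceph health detail
  ceph pg 206.3f query            # "recovery_state" shows what the PG is waiting for
  ceph pg dump_stuck unclean
  # restarting the acting primary OSD of that PG is a common first nudge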

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Dan Van Der Ster
> On 09 Sep 2014, at 16:39, Michal Kozanecki wrote: > On 9 September 2014 08:47, Blair Bethwaite wrote: >> On 9 September 2014 20:12, Dan Van Der Ster >> wrote: >>> One thing I’m not comfortable with is the idea of ZFS checking the data in >>> addition to Ceph. Sure, ZFS will tell us if there

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Michal Kozanecki
Hi Blair! On 9 September 2014 08:47, Blair Bethwaite wrote: > Hi Dan, > > Thanks for sharing! > > On 9 September 2014 20:12, Dan Van Der Ster wrote: >> We do this for some small scale NAS use-cases, with ZFS running in a VM with >> rbd volumes. The performance is not great (especially since we
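For context, the ZFS-side checking under discussion amounts to something like this (sketch; pool name invented, single RBD vdev, so ZFS can detect but not self-heal unless copies > 1):

  zpool create tank /dev/rbd0
  zpool scrub tank                 # verify every block against its checksum
  zpool status -v tank             # CKSUM counters show what the scrub found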

[ceph-users] question about librbd io

2014-09-09 Thread yuelongguang
hi, josh.durgin: I want to know how librbd launches IO requests. Use case: inside a VM, I use fio to test the rbd disk's IO performance. fio's parameters are bs=4k, direct IO, qemu cache=none. In this case, if librbd just sends what it gets from the VM, I mean no gather/scatter, the rate, IO inside the VM: i
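The fio job described is roughly the following (a reconstruction from the parameters mentioned; device path, access pattern and queue depth are assumptions):

  [vm-rbd-4k]
  filename=/dev/vdb        # the rbd-backed disk inside the VM
  rw=randread
  bs=4k
  direct=1
  ioengine=libaio
  iodepth=32
  runtime=60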

Re: [ceph-users] Problem with customized crush rule for EC pool

2014-09-09 Thread Loic Dachary
On 09/09/2014 14:21, Lei Dong wrote: > Thanks loic! > > Actually I've found that increase choose_local_fallback_tries can > help(chooseleaf_tries helps not so significantly), but I'm afraid when osd > failure happen and need to find new acting set, it may be fail to find enough > racks again.

Re: [ceph-users] Ceph on RHEL 7 with multiple OSD's

2014-09-09 Thread Marco Garcês
Actually in EL7, iptables does not come installed by default; they use firewalld... Just remove firewalld and install iptables, and you are back in the game! Or learn firewalld, that will work too! :) *Marco Garcês* *#sysadmin* Maputo - Mozambique *[Phone]* +258 84 4105579 *[Skype]* marcogarces O
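If you do keep firewalld, opening the standard Ceph ports looks like this (sketch; default mon and OSD port ranges):

  firewall-cmd --permanent --add-port=6789/tcp        # monitors
  firewall-cmd --permanent --add-port=6800-7300/tcp   # OSDs (and MDS)
  firewall-cmd --reload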

Re: [ceph-users] Ceph on RHEL 7 with multiple OSD's

2014-09-09 Thread Michal Kozanecki
Network issue maybe? Have you checked your firewall settings? Iptables changed a bit in EL7 and might have broken any rules you normally use. Try flushing the rules (iptables -F) and see if that fixes things; if it does, you'll need to fix your firewall rules. I ran into a similar issue

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Blair Bethwaite
Hi Dan, Thanks for sharing! On 9 September 2014 20:12, Dan Van Der Ster wrote: > We do this for some small scale NAS use-cases, with ZFS running in a VM with > rbd volumes. The performance is not great (especially since we throttle the > IOPS of our RBD). We also tried a few kRBD / ZFS servers

Re: [ceph-users] Problem with customized crush rule for EC pool

2014-09-09 Thread Lei Dong
Thanks Loic! Actually I've found that increasing choose_local_fallback_tries can help (chooseleaf_tries helps less significantly), but I'm afraid that when an OSD failure happens and a new acting set needs to be found, it may again fail to find enough racks. So I'm trying to find a more guaranteed way in cas

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Blair Bethwaite
Hi Christian, On 09/09/2014 6:33 PM, "Christian Balzer" wrote: > I have nearly no experience with ZFS, but I'm wondering why you'd pool > things on the level when Ceph is already supplying a redundant and > resizeable block device. That's really subject to further testing. At this stage I'm just

Re: [ceph-users] Problem with customized crush rule for EC pool

2014-09-09 Thread Loic Dachary
Hi, It is indeed possible that the mapping fails if there are just enough racks to match the constraint. And the probability of a bad mapping increases as the number of PGs increases, because more mappings are needed. You can tell CRUSH to try harder with step set_chooseleaf_tries 10 Be
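One way to add that step to an existing rule (offline edit with the standard tooling):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # add "step set_chooseleaf_tries 10" as the first step of the rule, then recompile:
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new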

[ceph-users] Problem with customized crush rule for EC pool

2014-09-09 Thread Lei Dong
Hi ceph users: I want to create a customized CRUSH rule for my EC pool (with replica_size = 11) to distribute replicas across 6 different racks. I used the following rule at first: Step take default // root Step choose firstn 6 type rack // I have exactly 6 racks Step chooseleaf i
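For readers, a complete rule of that shape looks roughly like this (an illustrative sketch, not the poster's exact rule; for an erasure-coded pool "indep" is the usual choose mode, and 6 racks x 2 hosts covers the 11 chunks):

  rule ecpool_6racks {
      ruleset 1
      type erasure
      min_size 3
      max_size 11
      step set_chooseleaf_tries 10
      step take default
      step choose indep 6 type rack
      step chooseleaf indep 2 type host
      step emit
  }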

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Dan Van Der Ster
Hi Blair, > On 09 Sep 2014, at 09:05, Blair Bethwaite wrote: > > Hi folks, > > In lieu of a prod ready Cephfs I'm wondering what others in the user > community are doing for file-serving out of Ceph clusters (if at all)? > > We're just about to build a pretty large cluster - 2PB for file-based

Re: [ceph-users] Ceph on RHEL 7 with multiple OSD's

2014-09-09 Thread BG
Loic Dachary writes: > > Hi, > > It looks like your osd.0 is down and you only have one osd left (osd.1) > which would explain why the cluster cannot get to a healthy state. The "size > 2" in "pool 0 'data' replicated size 2 ..." means the pool needs at > least two OSDs up to function prope

[ceph-users] Re: Re: Re: mix ceph version with 0.80.5 and 0.85

2014-09-09 Thread 廖建锋
I solved it by creating a new pool and then removing the old one. From: 廖建锋 Sent: 2014-09-09 17:39 To: haomaiwang Cc: ceph-users; ceph-users Subject: Re: Re: [ceph-users] Re: mix ceph version

Re: [ceph-users] Re: mix ceph version with 0.80.5 and 0.85

2014-09-09 Thread 廖建锋
I re-installed the whole cluster with Ceph 0.85 and lost all my 10T of data. Now I have another question, which is that I have no way to re-create the pool. 264 => # ceph osd pool delete data data --yes-i-really-really-mean-it Error EBUSY: pool 'data' is

Re: [ceph-users] Re: mix ceph version with 0.80.5 and 0.85

2014-09-09 Thread Haomai Wang
Hi, Thanks for your report, I will fix it (https://github.com/ceph/ceph/pull/2429). Because KeyValueStore is intended as an experimental backend, we still don't have enough test suites to cover it. On Tue, Sep 9, 2014 at 11:02 AM, 廖建锋 wrote: > Looks like it doesn't work, I noticed that 0.85 added

Re: [ceph-users] number of PGs (global vs per pool)

2014-09-09 Thread Christian Balzer
Hello, On Tue, 9 Sep 2014 09:42:13 +0100 Luis Periquito wrote: > I was reading on the number of PGs we should have for a cluster, and I > found the formula to place 100 PGs in each OSD ( > http://ceph.com/docs/master/rados/operations/placement-groups/). > > Now this formula has generated some d

Re: [ceph-users] number of PGs (global vs per pool)

2014-09-09 Thread Wido den Hollander
On 09/09/2014 10:42 AM, Luis Periquito wrote: I was reading on the number of PGs we should have for a cluster, and I found the formula to place 100 PGs in each OSD (http://ceph.com/docs/master/rados/operations/placement-groups/). Now this formula has generated some discussion as to how many PGs

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Ilya Dryomov
On Tue, Sep 9, 2014 at 12:33 PM, Christian Balzer wrote: > > Hello, > > On Tue, 9 Sep 2014 17:05:03 +1000 Blair Bethwaite wrote: > >> Hi folks, >> >> In lieu of a prod ready Cephfs I'm wondering what others in the user >> community are doing for file-serving out of Ceph clusters (if at all)? >> >>

[ceph-users] number of PGs (global vs per pool)

2014-09-09 Thread Luis Periquito
I was reading on the number of PGs we should have for a cluster, and I found the formula to place 100 PGs in each OSD ( http://ceph.com/docs/master/rados/operations/placement-groups/). Now this formula has generated some discussion as to how many PGs we should have in each pool. Currently our mai

Re: [ceph-users] NAS on RBD

2014-09-09 Thread Christian Balzer
Hello, On Tue, 9 Sep 2014 17:05:03 +1000 Blair Bethwaite wrote: > Hi folks, > > In lieu of a prod ready Cephfs I'm wondering what others in the user > community are doing for file-serving out of Ceph clusters (if at all)? > > We're just about to build a pretty large cluster - 2PB for file-base

Re: [ceph-users] monitoring tool for monitoring end-user

2014-09-09 Thread pragya jain
Could somebody please reply to clarify this for me? Regards, Pragya Jain On Wednesday, 3 September 2014 12:14 PM, pragya jain wrote: > > >hi all! > > >Is there any monitoring tool for Ceph which monitors end-user-level usage and >data transfer for the Ceph object storage service? > > >Please help me to
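The closest built-in mechanism I know of is the RGW usage log (a sketch; uid and dates are placeholders):

  # in ceph.conf, in the radosgw client section:
  rgw enable usage log = true
  # then query per-user usage and transfer:
  radosgw-admin usage show --uid=<uid> --start-date=2014-09-01 --end-date=2014-09-09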

Re: [ceph-users] resizing the OSD

2014-09-09 Thread Martin B Nielsen
Hi, Or did you mean some OSDs are near full while others are under-utilized? On Sat, Sep 6, 2014 at 5:04 PM, Christian Balzer wrote: > > Hello, > > On Fri, 05 Sep 2014 15:31:01 -0700 JIten Shah wrote: > > > Hello Cephers, > > > > We created a ceph cluster with 100 OSD, 5 MON and 1 MDS and most o
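If it is imbalance, the standard knobs are along these lines (sketch; threshold and weight are illustrative):

  ceph osd reweight-by-utilization 120    # reweight OSDs above 120% of average utilization
  ceph osd reweight <osd-id> 0.9          # or nudge a single over-full OSD down manually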

[ceph-users] NAS on RBD

2014-09-09 Thread Blair Bethwaite
Hi folks, In lieu of a prod ready Cephfs I'm wondering what others in the user community are doing for file-serving out of Ceph clusters (if at all)? We're just about to build a pretty large cluster - 2PB for file-based NAS and another 0.5PB rgw. For the rgw component we plan to dip our toes in a