[ceph-users] What's the status of feature: S3 object versioning?

2014-01-02 Thread Ray Lv
Hi there, I noted that there is a Blueprint item about S3 object versioning in radosgw for Firefly at http://wiki.ceph.com/Planning/Blueprints/Firefly/rgw%3A_object_versioning, and Sage has announced the v0.74 release for Firefly. Do you guys know the status of this feature? Thanks, Ray

Re: [ceph-users] Monitoring ceph operations latency

2014-01-02 Thread Tarmo Trumm
Happy new year everyone! Still struggling with this issue. On Dec 30, 2013, at 11:40 AM, Tarmo Trumm wrote: > Hi! > > I can get the average write and read latency with ceph perf counters and I can > see latency with rados bench. > Does anyone have an idea how to monitor ceph operations like read/wr
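
For reference, the read/write latency counters mentioned here can be pulled straight from an OSD's admin socket; a minimal sketch, assuming the default socket path and osd.0 (adjust for your cluster):

    # Dump all perf counters for osd.0 as JSON; op_r_latency and op_w_latency
    # carry the running sum and count from which averages are derived.
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
    # Narrow the output to just the read/write latency counters.
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | grep -A 3 'op_[rw]_latency'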

[ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Dietmar Maurer
I try to understand the default crush rule: rule data { ruleset 0 type replicated min_size 1 max_size 10 step take default step chooseleaf firstn 0 type host step emit } Is this the same as: rule data { ruleset 0 type replic
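
For anyone who wants to experiment with the rule text on a real cluster, the compiled CRUSH map can be extracted and decompiled first; a minimal sketch (file names are arbitrary):

    # Pull the binary CRUSH map from the cluster and decompile it to text.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # After editing the rules in crushmap.txt, recompile before injecting it back.
    crushtool -c crushmap.txt -o crushmap.new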

[ceph-users] How to setup ceph object gateway storage

2014-01-02 Thread haiquan517
Hi, recently we needed to test Ceph object storage, but we could not find a detailed guide; the official website only provides a brief guide, and we can't set it up by following it. Is there a more detailed guide anywhere? Thanks a lot!! :)
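
Beyond the installation steps in the official guide, the part that usually trips people up is creating an S3 user once the radosgw daemon is running; a minimal sketch (uid and display name are placeholders):

    # Create a gateway user; the access and secret keys are printed in the
    # JSON output and work with any S3 client pointed at the gateway.
    radosgw-admin user create --uid=testuser --display-name="Test User"
    # Look the user up again later.
    radosgw-admin user info --uid=testuser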

Re: [ceph-users] Monitoring ceph operations latency

2014-01-02 Thread Haomai Wang
Do you mean running the "rados put ..." command and watching the latency, or something else? On Thu, Jan 2, 2014 at 4:43 PM, Tarmo Trumm wrote: > Happy new year everyone! > > Still struggling with this issue. > > > On Dec 30, 2013, at 11:40 AM, Tarmo Trumm wrote: > >> Hi! >> >> I can get the average write and read la

Re: [ceph-users] Restrict user access per bucket

2014-01-02 Thread Jaseer TK
Thanks Wei, I am a bit confused about specifying the request entities while making the PUT request. It would be great if you could give some guidance. 1. I tried the above method, using the AWS S3 SDK for PHP. It is failing on the putBucketAcl call. My PHP code is given below. I tried to do it like this, http:
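
As a cross-check outside the PHP SDK, the same per-user grant can be applied with s3cmd, which is sometimes easier to debug against radosgw; a sketch, assuming s3cmd is configured with the bucket owner's keys and "otheruser" is the radosgw uid being granted access:

    # Grant read access on the bucket to another radosgw user.
    s3cmd setacl s3://mybucket --acl-grant=read:otheruser
    # Inspect the resulting ACL.
    s3cmd info s3://mybucket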

[ceph-users] radosgw package - missing deps on Ubuntu < 13.04

2014-01-02 Thread Ritter Sławomir
We have a problem upgrading ceph/radosgw to the latest version, 0.67-5, on Ubuntu 12.04 LTS. It seems there is a missing dependency for the amd64 platform: http://ceph.com/debian-dumpling/dists/precise/main/binary-i386/Packages Package: radosgw Depends: libcurl3-gnutls (>= 7.16.2-1) http://cep

Re: [ceph-users] Monitor configuration issue

2014-01-02 Thread Joao Eduardo Luis
On 01/01/2014 09:29 PM, Matt Rabbitt wrote: I only have four because I want to remove the original one I used to create the cluster. I tried what you suggested and rebooted all my nodes but I'm still having the same problem. I'm running Emperor on Ubuntu 12.04 on all my nodes by the way. Here

[ceph-users] ceph and collectd

2014-01-02 Thread Dennis Zou (yunzou)
Hi, I use collectd (https://github.com/ceph/collectd-4.10.1) to monitor Ceph but got stuck; the collectd log looks like this: [2014-01-02 17:18:37] [debug] did cconn_prepare(name=osd.0,i=0,st=2) [2014-01-02 17:18:47] [debug] did cconn_prepare(name=osd.0,i=0,st=2) [2014-01-02 17:18:47] [warning] ER

Re: [ceph-users] radosgw package - missing deps on Ubuntu < 13.04

2014-01-02 Thread Wido den Hollander
On 01/02/2014 02:03 PM, Ritter Sławomir wrote: We have a problem upgrading ceph/radosgw to the latest version, 0.67-5, on Ubuntu 12.04 LTS. It seems there is a missing dependency for the amd64 platform: http://ceph.com/debian-dumpling/dists/precise/main/binary-i386/Packages Package: radosgw D

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Dino Yancey
Having your journals on the same disk causes all data to be written twice, i.e. once to the journal and once to the osd store. Notice that your tested throughput is slightly more than half your expected maximum... On Wed, Jan 1, 2014 at 11:32 PM, Dietmar Maurer wrote: > Hi all, > > > > I run

Re: [ceph-users] Monitor configuration issue

2014-01-02 Thread Matt Rabbitt
Healthy: e4: 4 mons at {storage1=10.0.10.11:6789/0,storage2=10.0.10.12:6789/0,storage3=10.0.10.13:6789/0,storage4=10.0.10.14:6789/0}, election epoch 54, quorum 0,1,2,3 storage1,storage2,storage3,storage4 After storage1 goes down I get this over and over again: 2014-01-02 09:16:23.789271 7fbbc82f

[ceph-users] Ceph 0.74

2014-01-02 Thread Julien Calvet
Hello all, Happy New Year 2014! Just a little question: is 0.74 ready for production? Regards, Julien

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Wido den Hollander
On 01/02/2014 10:40 AM, Dietmar Maurer wrote: I try to understand the default crush rule: rule data { ruleset 0 type replicated min_size 1 max_size 10 step take default step chooseleaf firstn 0 type host step emit } Is this the same

Re: [ceph-users] Ceph 0.74

2014-01-02 Thread Joao Eduardo Luis
On 01/02/2014 02:10 PM, Julien Calvet wrote: Hello all Happy new Year 2014 Just a little question, is 0.74 ready for production ? 0.74 is a dev release. You should probably run stable versions in production. -Joao Regards Julien -- Joao Eduardo Luis Software Engineer | http://in

Re: [ceph-users] Monitor configuration issue

2014-01-02 Thread Wido den Hollander
On 01/02/2014 03:19 PM, Matt Rabbitt wrote: Healthy: e4: 4 mons at {storage1=10.0.10.11:6789/0,storage2=10.0.10.12:6789/0,storage3=10.0.10.13:6789/0,storage4=10.0.10.14:6789/0 }, election e
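
For the removal itself (an even number of monitors buys nothing for quorum), the extra monitor can be dropped from the monmap once the cluster is healthy; a minimal sketch, assuming the monitor to remove is storage1:

    # Stop the ceph-mon daemon on that node first, then remove it from the monmap.
    ceph mon remove storage1
    # Also drop storage1 from mon_initial_members / mon host in ceph.conf on all nodes.
    # Confirm the remaining monitors and quorum.
    ceph mon stat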

[ceph-users] [Rados] How long will it take to fix a broken replica

2014-01-02 Thread Kuo Hugo
Hi all, I did a test to verify RADOS's recovery. 1. I echoed a string into an object file under a placement group's directory on an OSD. 2. After an osd scrub, ceph health shows "1 pgs inconsistent". Will it be fixed later? Thanks
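
Scrub only flags the mismatch; it is not repaired automatically. A repair has to be requested for the affected placement group; a minimal sketch (the pg id below is a placeholder, substitute the one reported as inconsistent):

    # Find which placement group is inconsistent.
    ceph health detail | grep inconsistent
    # Ask the primary OSD to repair that PG.
    ceph pg repair 2.1f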

Re: [ceph-users] radosgw package - missing deps on Ubuntu < 13.04

2014-01-02 Thread Ritter Sławomir
I fixed it by adding the missing "Ceph Extras" package repository to our APT sources: http://ceph.com/docs/master/install/get-packages/#add-ceph-extras Regards, SR -Original Message- From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido den Ho
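
For anyone hitting the same libcurl dependency on precise, the fix boils down to adding the extras repository and reinstalling; a sketch of the usual form (check the linked page for the current repository URL):

    # Add the Ceph Extras repository for Ubuntu 12.04 (precise).
    echo "deb http://ceph.com/packages/ceph-extras/debian precise main" | \
        sudo tee /etc/apt/sources.list.d/ceph-extras.list
    sudo apt-get update && sudo apt-get install radosgw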

[ceph-users] 2014-01-02 05:46:13.398699 7f6658278700 0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f6648005490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f66480025a0).fault

2014-01-02 Thread xyx
Hello, my Ceph teacher: I just finished my ceph configuration. Configured as follows: [global] auth cluster required = none auth service required = none auth client required = none [osd] osd journal size = 1000 osd data = /home/osd$id osd journ

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Dietmar Maurer
> Having your journals on the same disk causes all data to be written twice, > i.e. > once to the journal and once to the > osd store.  Notice that your tested throughput is slightly more than half your > expected maximum... But AFAIK OSD bench already considers journal writes. The disk can write

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Sage Weil
On Thu, 2 Jan 2014, Wido den Hollander wrote: > On 01/02/2014 10:40 AM, Dietmar Maurer wrote: > > I try to understand the default crush rule: > > > > rule data { > > > > ruleset 0 > > > > type replicated > > > > min_size 1 > > > > max_size 10 > > > >

Re: [ceph-users] Ceph 0.74

2014-01-02 Thread Loic Dachary
On 02/01/2014 15:10, Julien Calvet wrote: > Hello all > > Happy new Year 2014 > > Just a little question, is 0.74 ready for production ? > Hi Julien, And happy new year ! 0.74 is a development release. The next production release will be Firefly. Cheers > > Regards > > Julien >

[ceph-users] Is ceph feasible for storing a large no. of small files, with care for handling OSD failure (disk i/o error....), so it can complete pending replication independent of the no. of files

2014-01-02 Thread upendrayadav.u
Hi, 1. Is ceph feasible for storing a large no. of small files in a ceph cluster, with care for the osd failure and recovery process? 2. If we have a 4TB OSD (almost 85% full) storing only small files (500 KB to 1024 KB), and it fails (due to a disk i/o error), then how much time will it take to

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Sage Weil
On Thu, 2 Jan 2014, Dietmar Maurer wrote: > > > iirc, chooseleaf goes down the tree and descends into multiple leaves > > > to find what you are looking for. > > > > > > choose goes into that leaf and tries to find what you are looking for > > > without going into subtrees. > > > > Right. To a fir

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Dietmar Maurer
> > iirc, chooseleaf goes down the tree and descends into multiple leaves > > to find what you are looking for. > > > > choose goes into that leaf and tries to find what you are looking for > > without going into subtrees. > > Right. To a first approximation, these rules are equivalent. The diffe

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Dietmar Maurer
> The other difference is if you have one of the two OSDs on the host marked > out. > In the choose case, the remaining OSD will get allocated 2x the data; in the > chooseleaf case, usage will remain proportional with the rest of the cluster > and > the data from the out OSD will be distributed a

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Dietmar Maurer
> -Original Message- > From: Stefan Priebe [mailto:s.pri...@profihost.ag] > Sent: Thursday, 02 January 2014 18:36 > To: Dietmar Maurer; Dino Yancey > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] rados benchmark question > > Hi, > > On 02.01.2014 17:10, Dietmar Mau

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Stefan Priebe
On 02.01.2014 18:48, Dietmar Maurer wrote: -Original Message- From: Stefan Priebe [mailto:s.pri...@profihost.ag] Sent: Thursday, 02 January 2014 18:36 To: Dietmar Maurer; Dino Yancey Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] rados benchmark question Hi, On 02.01.20

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Stefan Priebe
Hi, On 02.01.2014 17:10, Dietmar Maurer wrote: Having your journals on the same disk causes all data to be written twice, i.e. once to the journal and once to the osd store. Notice that your tested throughput is slightly more than half your expected maximum... But AFAIK OSD bench already co

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Dietmar Maurer
> > # iostat -x 5 (after about 30 seconds) > > Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util > > sdb 0.00 3.80 0.00 187.40 0.00 84663.60 903.56 157.62 796.93 0.00 796.93 5.34

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Sage Weil
--test IIRC. Yep! sage On Thu, 2 Jan 2014, Dietmar Maurer wrote: > > The other difference is if you have one of the two OSDs on the host marked > > out. > > In the choose case, the remaining OSD will get allocated 2x the data; in the > > chooseleaf case, usage will remain proportional with th
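
A sketch of the crushtool invocation being referred to, assuming a compiled CRUSH map file (crushmap.bin, as produced by ceph osd getcrushmap):

    # Simulate mappings for rule 0 with 2 replicas and show how evenly PGs land.
    crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-utilization
    # Re-run with one device's weight forced to 0 to mimic a marked-out OSD.
    crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --weight 3 0 --show-utilization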

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Stefan Priebe
On 02.01.2014 19:06, Dietmar Maurer wrote: # iostat -x 5 (after about 30 seconds) Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sdb 0.00 3.80 0.00 187.40 0.00 84663.60 903.56 157.62 796

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Dietmar Maurer
> >> so your disks are completely utilized and can't keep up; see %util and > >> await. > > > > But it says it writes at 80MB/s, so that would be about 40MB/s for > > data? And 40*6=240 (not 190) > > Did you miss the replication factor? I think it should be: > 40MB/s*6/3 => 80MB/s My test pool use

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Stefan Priebe
On 02.01.2014 19:16, Dietmar Maurer wrote: so your disks are completely utilized and can't keep up; see %util and await. But it says it writes at 80MB/s, so that would be about 40MB/s for data? And 40*6=240 (not 190) Did you miss the replication factor? I think it should be: 40MB/s*6/3 => 80M

Re: [ceph-users] radosgw package - missing deps on Ubuntu < 13.04

2014-01-02 Thread Sage Weil
On Thu, 2 Jan 2014, Wido den Hollander wrote: > On 01/02/2014 02:03 PM, Ritter Sławomir wrote: > > We have a problem upgrading ceph/radosgw to the latest version, 0.67-5, > > on Ubuntu 12.04 LTS. It seems there is a missing dependency for > > the amd64 platform: > > > > http://ceph.com/debian-dump

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Dietmar Maurer
> >> Did you miss the replication factor? I think it should be: > >> 40MB/s*6/3 => 80MB/s > > > > My test pool uses size=1 (no replication) > > ok, out of ideas... ;-( sorry What values do you get? (osd bench vs. rados benchmark with pool size=1)

Re: [ceph-users] rados benchmark question

2014-01-02 Thread Stefan Priebe
On 02.01.2014 19:38, Dietmar Maurer wrote: Did you miss the replication factor? I think it should be: 40MB/s*6/3 => 80MB/s My test pool uses size=1 (no replication) ok, out of ideas... ;-( sorry What values do you get? (osd bench vs. rados benchmark with pool size=1) I have no idle clust
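
For anyone who does have an idle cluster to try this on, a minimal sketch of the comparison being asked for (pool name is a placeholder; osd bench writes locally through the journal, rados bench exercises the full client path):

    # Per-OSD backend throughput (by default 1 GB written in 4 MB chunks).
    ceph tell osd.0 bench
    # Cluster-wide write throughput for 30 seconds against a size=1 test pool.
    rados bench -p testpool 30 write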

[ceph-users] CephFS files not appearing in DF (or rados ls)

2014-01-02 Thread Alex Pearson
Hi All, I've built a fairly standard ceph cluster up (I think), and believe I have everything configured correctly with MDS, only I'm seeing something very strange. Files I write with CephFS don't appear in ANY pools at all? For example, below shows the configured pools and MDS setup correctly

[ceph-users] vm fs corrupt after pgs stuck

2014-01-02 Thread James Harper
I just had to restore an MS Exchange database after a ceph hiccup (no actual data lost - Exchange is very good like that with its no-loss restore!). The order of events went something like: . Loss of connection on an osd to the cluster network (public network was okay) . pgs reported stuck . stopp

Re: [ceph-users] vm fs corrupt after pgs stuck

2014-01-02 Thread James Harper
> > I just had to restore an MS Exchange database after a ceph hiccup (no actual > data lost - Exchange is very good like that with its no-loss restore!). The > order > of events went something like: > > . Loss of connection on an osd to the cluster network (public network was okay) > . pgs report

Re: [ceph-users] CephFS files not appearing in DF (or rados ls)

2014-01-02 Thread Alex Pearson
Hi All, Victory! Found the issue; it was a mistake on my part, however it does raise another question... The issue was: root@osh1:~# ceph --cluster apics auth list installed auth entries: client.cuckoo key: AQBjTblS4AFAARAAZyumzFyk2JS8d9AjutRoTQ== caps: [mon] allow r ca
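
Assuming the mistake was an osd capability that did not cover the CephFS data pool, the client's caps can be corrected in place; a sketch with the pool name assumed to be "data":

    # Grant read on the monitors, MDS access, and full access to the data pool.
    ceph --cluster apics auth caps client.cuckoo \
        mon 'allow r' mds 'allow' osd 'allow rwx pool=data'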

Re: [ceph-users] Ceph as offline S3 substitute and peer-to-peer fileshare?

2014-01-02 Thread Alek Storm
Anything? Would really appreciate any wisdom at all on this. Thanks, Alek On Wed, Nov 27, 2013 at 4:13 PM, Alek Storm wrote: > Hi all, > > I'd like to use Ceph to solve two problems at my company: to be an S3 mock > for testing our application, and for sharing test artifacts in a > peer-to-pee

Re: [ceph-users] Ceph as offline S3 substitute and peer-to-peer fileshare?

2014-01-02 Thread Dimitri Maziuk
On 01/02/2014 04:20 PM, Alek Storm wrote: > Anything? Would really appreciate any wisdom at all on this. I think what you're looking for is called git. -- Dimitri Maziuk Programmer/sysadmin BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu signature.asc Description: OpenPGP digital signat

Re: [ceph-users] Ceph as offline S3 substitute and peer-to-peer fileshare?

2014-01-02 Thread Sage Weil
This will not work well because of Ceph's consistency model. It generally won't let you read/write unless you have a quorum of monitors. I suspect what you really want is something more like a private ceph install (in your DC) with something like ownCloud running on top to give you a dropb

[ceph-users] subscribe ceph-users mailing list

2014-01-02 Thread Sean Cao
Send ceph-users mailing list submissions to ceph-users@lists.ceph.com Best Regards 曹世银 Sean Cao ZeusCloud Storage Engineer Mobile: +86-13162662069 Email: sean_...@zeuscloud.cn Phone: +86-21-5169 5876 Ext: 1043 Fax: +86-21-5169 5876 Address:

[ceph-users] How to setup ceph object gateway storage

2014-01-02 Thread haiquan517
Hi, recently we needed to test Ceph object storage, but we could not find a detailed guide; the official website only provides a brief guide, and we can't set it up by following it. Is there a more detailed guide anywhere? Thanks a lot!! :)

Re: [ceph-users] How to setup ceph object gateway storage

2014-01-02 Thread Gao, Wei M
I am wondering what kind of issue you met; maybe you can post your issues here. Best Regards, Wei From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of haiquan...@sina.com Sent: Friday, January 3, 2014 9:16 AM To: ceph-users; james.harper Subject: [ceph-

Re: [ceph-users] Ceph as offline S3 substitute and peer-to-peer fileshare?

2014-01-02 Thread Aaron Ten Clay
Alek, Not sure if it's the right tool, but you might also consider BitTorrent Sync[1]. 1: http://www.bittorrent.com/sync -Aaron On Thu, Jan 2, 2014 at 3:01 PM, Dimitri Maziuk wrote: > On 01/02/2014 04:20 PM, Alek Storm wrote: > > Anything? Would really appreciate any wisdom at all on this. >

Re: [ceph-users] 2014-01-02 05:46:13.398699 7f6658278700 0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f6648005490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f66480025a0).fault

2014-01-02 Thread Ирек Фасихов
You need to replace # with ; On 2 Jan 2014 at 19:58, "xyx" wrote: > *Hello, my Ceph teacher:* > I just finished my ceph configuration: > Configured as follows: > [global] > auth cluster required = none > auth service required = none > au

[ceph-users] how to use the function ceph_open_layout

2014-01-02 Thread
Hi all; today I want to use the function ceph_open_layout() from libcephfs.h. I created a new pool successfully: # rados mkpool data1 and then I edited the code like this: int fd = ceph_open_layout(cmount, c_path, O_RDONLY|O_CREAT, 0666, (1<<22), 1, (1<<22), "data1"); and then fd is -22!!
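
A common cause of -22 (EINVAL) here is that the target pool was created but never registered as a CephFS data pool; a sketch, assuming the data1 pool from the rados mkpool step (older releases may expect the pool id rather than its name):

    # Make the new pool usable for file layouts before calling
    # ceph_open_layout() with data_pool="data1".
    ceph mds add_data_pool data1
    # Confirm it now appears in the filesystem's data pool list.
    ceph mds dump | grep data_pools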

[ceph-users] backfill_toofull issue - can reassign PGs to different server?

2014-01-02 Thread Indra Pramana
Dear all, I have 4 servers with 4 OSDs / drives each, so in total I have 16 OSDs. For some reason, the last server is over-utilised compared to the first 3 servers, causing all the OSDs on the fourth server (osd.12, osd.13, osd.14 and osd.15) to be near full (above 85%). /dev/sda1 458140932 3934
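
A common stopgap while the imbalance is investigated is to bias data away from the near-full OSDs by lowering their reweight values; a sketch, assuming the defaults of 1.0 are still in place:

    # Temporarily push data off the overloaded OSDs on the fourth server.
    ceph osd reweight 12 0.85
    ceph osd reweight 13 0.85
    # Or let Ceph adjust every OSD above 120% of the average utilization.
    ceph osd reweight-by-utilization 120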

[ceph-users] snapshot atomicity

2014-01-02 Thread James Harper
I've not used ceph snapshots before. The documentation says that the rbd device should not be in use before creating a snapshot. Does this mean that creating a snapshot is not an atomic operation? I'm happy with a crash consistent filesystem if that's all the warning is about. If it is atomic,
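
The snapshot itself is taken at a single point in time, so an in-use image generally yields a crash-consistent snapshot; for application-level consistency the filesystem can be quiesced around it. A sketch, assuming a mounted filesystem inside the guest and placeholder pool/image names:

    # Inside the guest: flush and freeze the filesystem.
    fsfreeze -f /mnt/data
    # From a client with access to the pool: take the snapshot.
    rbd snap create rbd/myimage@before-change
    # Inside the guest: thaw the filesystem.
    fsfreeze -u /mnt/data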

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Dietmar Maurer
> In both cases, you only get 2 replicas on the remaining 2 hosts. OK, I was able to reproduce this with crushtool. > The difference is if you have 4 hosts with 2 osds. In the choose case, you > have > some fraction of the data that chose the down host in the first step (most of > the > attemp

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-02 Thread Dietmar Maurer
I also don't really understand why crush selects OSDs with weight=0 host prox-ceph-3 { id -4 # do not change unnecessarily # weight 3.630 alg straw hash 0 # rjenkins1 item osd.4 weight 0 } root default { id -1 # do not chan