Re: [ceph-users] MDS cluster degraded after upgrade to dumpling

2013-08-21 Thread Damien Churchill
I've uploaded the complete log[0]. It's about 70MB, just as a warning. [0] damoxc.net/ceph-mds.ceph2.log.1.gz On 21 August 2013 07:07, Gregory Farnum wrote: > Do you have full logs from the beginning of replay? I believe you > should only see this when a client is reconnecting to the MDS with > f

[ceph-users] Object Store Multipart Upload

2013-08-21 Thread Juan Pablo FRANÇOIS
Hello, I'm trying to upload a multipart file to radosgw (v 0.67.1) using the Amazon S3 API (v 1.5.3), following the example in http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html. The upload seems to go well; there are no errors in the logs, and if I use s3cmd to check the fil
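
Independent of the Java SDK, multipart behavior against radosgw can be sanity-checked from the shell with a recent s3cmd. This is only a sketch; the bucket and file names are hypothetical:

    # upload in 15MB parts; s3cmd switches to multipart automatically for large files
    s3cmd put --multipart-chunk-size-mb=15 bigfile.bin s3://testbucket/bigfile.bin
    # list any multipart uploads still in progress on the bucket
    s3cmd multipart s3://testbucket
    # confirm the assembled object is present and has the expected size
    s3cmd ls s3://testbucket/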

Re: [ceph-users] Ceph-fuse timeout?

2013-08-21 Thread Petr Soukup
Thank you Greg. It makes sense that ceph-fuse works this way. I am experiencing this problem, for example, when I run rsync on a folder with a lot of small files in Ceph (~300GB in 10kB files). Ceph is very slow during that, and everything crashes like dominoes. I will look into other ways of reading

Re: [ceph-users] do not upgrade bobtail -> dumpling directly until 0.67.2

2013-08-21 Thread Jeff Bachtel
Is there an issue ID associated with this, for those of us who made the long jump and want to avoid any unseen problems? Thanks, Jeff On Tue, Aug 20, 2013 at 7:57 PM, Sage Weil wrote: > We've identified a problem when upgrading directly from bobtail to > dumpling; please wait until 0.67.2 bef

Re: [ceph-users] Poor write/random read/random write performance

2013-08-21 Thread Da Chun Ng
Mark, I tried with journal aio = true and op threads = 4, but it made little difference. Then I tried to enlarge the read-ahead value both on the OSD block devices and the CephFS client. That did improve overall performance some, especially sequential reads, but it still doesn't help much with the
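
The read-ahead tuning described above would look roughly like this; the device name and values are illustrative, not taken from the thread:

    # enlarge read-ahead on an OSD data disk (value is in 512-byte sectors)
    blockdev --setra 4096 /dev/sdb
    # kernel CephFS client: set client read-ahead via the rasize mount option (bytes)
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,rasize=4194304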

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-21 Thread Pavel Timoschenkov
Hi. Thanks for the patch, but after patching the ceph source and installing it, I don't have the ceph-disk or ceph-deploy commands. I did the following steps: git clone --recursive https://github.com/ceph/ceph.git patch -p0 < ./autogen.sh ./configure make make install What am I doing wrong? -Original Message-
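
For reference, a build-from-source sequence along these lines is presumably what was intended (the patch file path is a placeholder; note that ceph-deploy lives in its own repository and is not produced by a 'make install' of the ceph tree):

    git clone --recursive https://github.com/ceph/ceph.git
    cd ceph
    patch -p1 < /path/to/fix.patch   # placeholder: the actual patch file
    ./autogen.sh && ./configure
    make && sudo make install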

Re: [ceph-users] do not upgrade bobtail -> dumpling directly until 0.67.2

2013-08-21 Thread Ian Colle
http://tracker.ceph.com/issues/6057 Ian R. Colle Director of Engineering, Inktank Twitter: ircolle LinkedIn: www.linkedin.com/in/ircolle On Aug 21, 2013, at 6:41, Jeff Bachtel wrote: > Is there an issue ID associated with this, for those of us who made the long > jump and want to avoid any uns

[ceph-users] MDS "unable to authenticate as client.admin" when deploying Ceph with Puppet

2013-08-21 Thread Stroppa Daniele (strp)
Hi All, I'm deploying a small Ceph cluster (1 MON, 4 OSDs, 1 MDS) using a modified version of the eNovance Puppet modules (https://github.com/enovance/puppet-ceph) (I'm using Dumpling on Ubuntu 12.04 instead of Bobtail on Debian). All goes well until the MDS is deployed, and then I get the following er

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-21 Thread Alfredo Deza
On Wed, Aug 21, 2013 at 9:33 AM, Pavel Timoschenkov wrote: > Hi. Thanks for the patch, but after patching the ceph source and installing it, I don't have the > ceph-disk or ceph-deploy commands. > I did the following steps: > git clone --recursive https://github.com/ceph/ceph.git > patch -p0 < > ./autogen.sh > ./conf

Re: [ceph-users] MDS "unable to authenticate as client.admin" when deploying Ceph with Puppet

2013-08-21 Thread Gregory Farnum
Sounds like the puppet scripts haven't put the client.admin keyring on that node, or it's in the wrong place. Alternatively, there's a different keyring they're supposed to be using but it's not saying so in the command. -Greg On Wednesday, August 21, 2013, Stroppa Daniele (strp) wrote: > Hi Al
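
A quick way to test that hypothesis, assuming the default keyring location (the path here is the usual default, not confirmed in the thread):

    # is the admin keyring present on this node?
    ls -l /etc/ceph/ceph.client.admin.keyring
    # can we authenticate with it when named explicitly?
    ceph --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring health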

Re: [ceph-users] Ceph-fuse timeout?

2013-08-21 Thread Gregory Farnum
On Wednesday, August 21, 2013, Petr Soukup wrote: > Thank you Greg. It makes sense that ceph-fuse works this way. > I am experiencing this problem, for example, when I run rsync on a folder with > a lot of small files in Ceph (~300GB in 10kB files). Ceph is very slow during that, and everything cr

Re: [ceph-users] Poor write/random read/random write performance

2013-08-21 Thread Mark Nelson
On 08/21/2013 07:58 AM, Da Chun Ng wrote: > Mark, > > I tried with journal aio = true and op threads = 4, but it made little > difference. > Then I tried to enlarge the read-ahead value both on the OSD block devices > and the CephFS client. That did improve overall performance some, especially > the sequent

[ceph-users] GPS tracker with multi discrete has shared an album with you.

2013-08-21 Thread GPS tracker with multi discrete
Tips: GPS tracker with multi discrete input and output / Attn: purchase manager. Dear Sir, This is Anna, the sales manager of Redview GPS in China. The VT310 is a GPS tracker with 5 discrete inputs, 5 discrete outputs and 2 analog ports. With the VT310, you can get vehicle windows status

[ceph-users] Multiple CephFS filesystems per cluster

2013-08-21 Thread Guido Winkelmann
Hi, Is it possible to have more than one CephFS filesystem per Ceph cluster? In the default configuration, a Ceph cluster has only one filesystem, and you can mount that or nothing. Is it possible somehow to have several distinct filesystems per cluster, preferably with access controls that

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Oliver Daudey
Hey Samuel, I had a good run on the production cluster with it, and unfortunately it still doesn't seem to have solved the problem. It seemed OK for a while, and individual OSD CPU usage seemed quite low, but as the cluster's load increased during the day, things got slower again. Write performan

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Samuel Just
There haven't been any significant OSD-side changes that I can think of. Is CPU usage still high? If so, can you post the profiler results again? -Sam On Wed, Aug 21, 2013 at 12:02 PM, Oliver Daudey wrote: > Hey Samuel, > > I had a good run on the production cluster with it, and unfortunately i

Re: [ceph-users] Ceph-fuse timeout?

2013-08-21 Thread Petr Soukup
Files are subdivided into folders of 1000 files each (/img/10/21/10213456.jpg etc.) to increase performance. I just discovered that it would make much more sense to read images in the webserver using radosgw (S3 API?) - if there is a problem on the Ceph server, it shouldn't affect connected clien

Re: [ceph-users] Multiple CephFS filesystems per cluster

2013-08-21 Thread Gregory Farnum
On Wed, Aug 21, 2013 at 11:33 AM, Guido Winkelmann wrote: > Hi, > > Is it possible to have more than one CephFS filesystem per Ceph cluster? > > In the default configuration, a ceph cluster has got only one filesystem, and > you can mount that or nothing. Is it possible somehow to have several dis
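
At the time of this thread a cluster could serve only one CephFS filesystem. A commonly suggested approximation, sketched below with hypothetical names (and with option syntax that should be checked against your version of the cephfs tool), was to pin separate directory trees to separate data pools:

    # create a second data pool and let the MDS use it
    ceph osd pool create fsdata2 64
    ceph mds add_data_pool 4              # numeric id of fsdata2, from 'ceph osd dump'
    # direct new files under this directory to the new pool
    cephfs /mnt/ceph/tenant2 set_layout --pool 4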

Re: [ceph-users] Ceph-fuse timeout?

2013-08-21 Thread Gregory Farnum
On Wed, Aug 21, 2013 at 12:20 PM, Petr Soukup wrote: > Files are subdivided to folders by 1000 in each folder > (/img/10/21/10213456.jpg etc.) to increase performance. In that case the stability issues are probably with your OSDs being overloaded by write requests that aren't being appropriately
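
One blunt way to keep a bulk rsync from overwhelming the OSDs, as a rough sketch with illustrative paths and limits:

    # throttle rsync to roughly 20 MB/s (--bwlimit is in KB/s)
    rsync -a --bwlimit=20000 /data/img/ /mnt/ceph/img/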

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Oliver Daudey
Hey Samuel, CPU usage still seems a bit higher, but not always equally on every OSD. I profiled the node with the most CPU usage on the OSD. Note the libleveldb-related entries right at the top; the Cuttlefish OSD doesn't show those at all. Could those be related to the problem? OSD version 0.6
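
A profile like the one described can be gathered with the Linux perf tool; a sketch, assuming debug symbols for ceph-osd and leveldb are installed:

    # sample one busy ceph-osd process for 30 seconds, with call graphs
    perf record -g -p $(pidof ceph-osd | awk '{print $1}') -- sleep 30
    perf report --sort symbol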

[ceph-users] CORS not working

2013-08-21 Thread Jeppesen, Nelson
Hello, I'm having issues with setting CORS on dumpling. It seems like it's not doing anything. I have the following CORS rule on the test1 bucket: GET POST http://a.a.a * When I test with the following I'm missing t
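
The rule quoted above looks like S3 CORS XML with its tags eaten by the archive. Reconstructed against the standard S3 schema (only the origin, the methods and the wildcard header come from the message), a rule of that shape would be:

    <CORSConfiguration>
      <CORSRule>
        <AllowedOrigin>http://a.a.a</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
      </CORSRule>
    </CORSConfiguration>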

[ceph-users] Storage, File Systems and Data Scrubbing

2013-08-21 Thread Johannes Klarenbeek
…fail-safe. Did anyone experiment with file systems that disable journaling, and how did it perform? Regards, Johannes

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Oliver Daudey
Hey Samuel, Finally got it reproduced on my test-cluster, which was otherwise unloaded at the time. First, with Dumpling: # rbd create --size 102400 test # ceph-osd --version ceph version 0.67.1-6-g0c4f2f3 (0c4f2f34b78b634efe7f4d56694e2edeeda5a130) # rbd bench-write test bench-write io_size 409

Re: [ceph-users] Storage, File Systems and Data Scrubbing

2013-08-21 Thread Mike Lowe
…disabled journaling, and how did > it perform? > > Regards, > Johannes

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Samuel Just
Try it again in the reverse order; I strongly suspect caching effects. -Sam On Wed, Aug 21, 2013 at 1:34 PM, Oliver Daudey wrote: > Hey Samuel, > > Finally got it reproduced on my test-cluster, which was otherwise > unloaded at the time. First, with Dumpling: > > # rbd create --size 102400 test

Re: [ceph-users] CORS not working

2013-08-21 Thread Yehuda Sadeh
On Wed, Aug 21, 2013 at 1:19 PM, Jeppesen, Nelson wrote: > Hello, > > I'm having issues with setting CORS on dumpling. It seems like it's not doing > anything. > > I have the following CORS rule on the test1 bucket: > > GET POST > http://a.a.a >

Re: [ceph-users] CORS not working

2013-08-21 Thread Jeppesen, Nelson
Fantastic! Thank you for the quick response. I'm available if you need any testing. Nelson Jeppesen Disney Technology Solutions and Services Phone 206-588-5001 -Original Message- From: Yehuda Sadeh [mailto:yeh...@inktank.com] Sent: Wednesday, August 21, 2013 2:22 PM To: Jeppesen,

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Oliver Daudey
Hey Samuel, I repeated the same test several times before my post and just now 2 more times. It holds up and is also repeatable in reverse order, with the same results. Remember, I restart all OSDs between tests, so any caches should get destroyed and besides, I'm writing. That shouldn't involv

Re: [ceph-users] Multiple CephFS filesystems per cluster

2013-08-21 Thread James Harper
> Hi, > > Is it possible to have more than one CephFS filesystem per Ceph cluster? > > In the default configuration, a ceph cluster has got only one filesystem, and > you can mount that or nothing. Is it possible somehow to have several > distinct > filesystems per cluster, preferably with access

Re: [ceph-users] Storage, File Systems and Data Scrubbing

2013-08-21 Thread Johannes Klarenbeek
…more like a paranoid super fail-safe. Did anyone experiment with file systems that disable journaling, and how did it perform? Regards, Johannes

[ceph-users] lvm for a quick ceph lab cluster test

2013-08-21 Thread Liu, Larry
Hi guys, I'm a newbie with Ceph. I wonder if I can use 2-3 LVM disks on each server, 2 servers in total, to run some quick Ceph clustering tests. Thanks!
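
LVM-backed OSDs are fine for a throwaway lab. A minimal sketch for preparing one OSD data directory (the volume group, sizes and paths are illustrative):

    lvcreate -L 20G -n osd0 vg0
    mkfs.xfs /dev/vg0/osd0
    mkdir -p /var/lib/ceph/osd/ceph-0
    mount /dev/vg0/osd0 /var/lib/ceph/osd/ceph-0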

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Samuel Just
You'd need to unmount the fs to actually clear the cache. Did you see a significant difference in load between the runs? To confirm, the rbd client is on dumpling the entire time? -Sam On Wed, Aug 21, 2013 at 2:28 PM, Oliver Daudey wrote: > Hey Samuel, > > I repeated the same test several times be
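
Between runs, page-cache effects can be ruled out on each OSD host with something like the following (a sketch; as noted above, unmounting the OSD filesystems is the surer way to start cold):

    sync                                # flush dirty data first
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes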

Re: [ceph-users] Storage, File Systems and Data Scrubbing

2013-08-21 Thread Mike Lowe
> In a normal/single file system I truly see the value of journaling and the > potential for btrfs (although it's still very slow). However, in a system like > ceph, journaling seems to me more like a paranoid super fail-safe. > > Did anyone experiment with file

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Samuel Just
I am dumb. There *has* been a change in the osd which can account for this: the wbthrottle limits. We added some logic to force the kernel to start flushing writes out earlier, normally a good thing. In this case, it's probably doing an fsync every 500 writes. Can you run 3 tests? 1) rerun with
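
The limits referred to here are the filestore wbthrottle options that appeared in dumpling; loosening them for a test would look roughly like this in ceph.conf (the values are illustrative, the exact option names should be checked against your build, and there are matching btrfs variants):

    [osd]
        filestore wbthrottle xfs ios start flusher = 10000
        filestore wbthrottle xfs ios hard limit = 100000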

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Oliver Daudey
Hey Samuel, On wo, 2013-08-21 at 18:33 -0700, Samuel Just wrote: > I am dumb. There *has* been a change in the osd which can account for > this: the wbthrottle limits. We added some logic to force the kernel > to start flushing writes out earlier, normally a good thing. In this > case, it's pro

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-21 Thread Samuel Just
I think you'd need to run the rbd cache one for a few minutes to get meaningful results. It should stabilize somewhere around the actual throughput of your hardware. Hmm, 10k IOs I guess is only 10 rbd chunks. What replication level are you using? Try setting them to 1000 (you only need to

[ceph-users] One rados account, more S3 API keyes

2013-08-21 Thread Mihály Árva-Tóth
Hello, Is there any way for one radosgw user to have more than one access/secret key? Thank you, Mihaly
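
For what it's worth, radosgw-admin can attach an additional S3 key pair to an existing user; a sketch, with a hypothetical uid:

    radosgw-admin key create --uid=mihaly --key-type=s3 --gen-access-key --gen-secret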