Re: [ceph-users] recommendations for file sharing

2015-12-17 Thread Alex Leake
Lin, Thanks for this! I did not see the ownCloud RADOS implementation. I maintain a local ownCloud environment anyway, so this is a really good idea. Have you used it? Regards, Alex. From: lin zhou 周林 Sent: 17 December 2015 02:10 To: Alex Leake; ceph-us

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-17 Thread Wido den Hollander
On 12/17/2015 06:29 AM, Ben Hines wrote: > > > On Wed, Dec 16, 2015 at 11:05 AM, Florian Haas > wrote: > > Hi Ben & everyone, > > > Ben, you wrote elsewhere > > (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html) > that yo

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
And 95-ceph-osd.rules contains the following? # Check gpt partition for ceph tags and activate ACTION=="add", SUBSYSTEM=="block", \ ENV{DEVTYPE}=="partition", \ ENV{ID_PART_TABLE_TYPE}=="gpt", \ RUN+="/usr/sbin/ceph-disk-udev $number $name $parent" On 17/12/2015 08:29, Jesper Thorhauge wro
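For readability, here is the rule quoted above as it would appear in /lib/udev/rules.d/95-ceph-osd.rules, with its line continuations restored:

```
# Check gpt partition for ceph tags and activate
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_TABLE_TYPE}=="gpt", \
  RUN+="/usr/sbin/ceph-disk-udev $number $name $parent"
```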

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
The non-symlink files in /dev/disk/by-partuuid come into existence because of: * system boots * udev rule calls ceph-disk-udev via 95-ceph-osd.rules on /dev/sda1 * ceph-disk-udev creates the symlink /dev/disk/by-partuuid/c83b5aa5-fe77-42f6-9415-25ca0266fb7f -> ../../sdb1 * ceph-disk activate /d
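A quick, hedged way to verify that chain by hand (device names below are examples taken from this thread, not a prescription):

```
# Do the by-partuuid symlinks exist, and do the OSD journal links resolve?
ls -l /dev/disk/by-partuuid/
ls -l /var/lib/ceph/osd/ceph-*/journal

# Replay the udev helper manually for a partition udev missed
# (arguments, per the rule above: partition number, name, parent device)
/usr/sbin/ceph-disk-udev 1 sdb1 sdb
```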

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Jesper Thorhauge
Hi Loic, Yep, 95-ceph-osd.rules contains exactly that... *** And 95-ceph-osd.rules contains the following? # Check gpt partition for ceph tags and activate ACTION=="add", SUBSYSTEM=="block", \ ENV{DEVTYPE}=="partition", \ ENV{ID_PART_TABLE_TYPE}=="gpt", \ RUN+="/usr/sbin/ceph-dis

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Jesper Thorhauge
Hi Loic, Sounds like something goes wrong when /dev/sdc3 shows up. Is there any way I can debug this further? Log files? Modify the .rules file...? /Jesper The non-symlink files in /dev/disk/by-partuuid come into existence because of: * system boots * udev rule calls ce

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
On 17/12/2015 11:33, Jesper Thorhauge wrote: > Hi Loic, > > Sounds like something goes wrong when /dev/sdc3 shows up. Is there any way > I can debug this further? Log files? Modify the .rules file...? Do you see traces of what happens when /dev/sdc3 shows up in boot.log? > > /Jesper > >

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Jesper Thorhauge
Nope, the previous post contained all that was in the boot.log :-( /Jesper ** - On 17 Dec 2015, at 11:53, Loic Dachary wrote: On 17/12/2015 11:33, Jesper Thorhauge wrote: > Hi Loic, > > Sounds like something goes wrong when /dev/sdc3 shows up. Is there any way > I can

[ceph-users] data partition and journal on same disk

2015-12-17 Thread Dan Nica
Hi, Can I have data and journal on the same disk? If yes, how? Thanks -- Dan

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Mart van Santen
Hello Dan, Yes, this is the default. They are just two partitions. The ceph-deploy or ceph-disk prepare commands will create two partitions on the disk, one used as journal and one as data partition, normally formatted as xfs. In the data partition there is a file 'journal', which is a symlink to
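A sketch of what that layout looks like on a prepared disk (assuming /dev/sdb and OSD id 0; adjust for your cluster):

```
# Two partitions on one disk: sdb1 = data (xfs), sdb2 = journal
sgdisk --print /dev/sdb

# Inside the mounted data partition, 'journal' is a symlink
# pointing at the journal partition
ls -l /var/lib/ceph/osd/ceph-0/journal
```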

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Swapnil Jain
Yes, you can have it on a different partition on the same disk, but it is not recommended. ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}] ceph-deploy osd prepare osdserver1:sdc1:sdc2 — Swapnil Jain | swap...@linux.com RHC{A,DS,E,I,SA,SA-RHOS,VA}, CE{H,I}, CC{DA,NA}

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
I guess that's the problem you need to solve: why /dev/sdc does not generate udev events (different driver than /dev/sda maybe?). Once it does, Ceph should work. A workaround could be to add something like: ceph-disk-udev 3 sdc3 sdc ceph-disk-udev 4 sdc4 sdc in /etc/rc.local. On 17/12/2015
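Spelled out, the suggested workaround would look like this (partition numbers and names are the poster's /dev/sdc layout):

```
# /etc/rc.local -- recreate the journal symlinks at boot, since udev
# fires no events for this disk; args: partition number, name, parent
/usr/sbin/ceph-disk-udev 3 sdc3 sdc
/usr/sbin/ceph-disk-udev 4 sdc4 sdc
```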

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Dan Nica
Well I get an error when I try to create data and journal on the same disk [rimu][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb1 /dev/sdb2 [rimu][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph [rimu][W

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Loic Dachary
Hi, You can try ceph-deploy osd prepare osdserver:/dev/sdb; it will create the /dev/sdb1 and /dev/sdb2 partitions for you. Cheers On 17/12/2015 12:41, Dan Nica wrote: > Well I get an error when I try to create data and journal on the same disk > > > > [rimu][INFO ] Running command: sudo ceph-di

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Michał Chybowski
Or, if you have already set up partitions, you can do it with this command: ceph-deploy osd prepare machine:/dev/sdb1:/dev/sdb2 where /dev/sdb1 is your data partition and /dev/sdb2 is your journal one. Regards Michał Chybowski Tiktalik.com On 17.12.2015 at 12:46, Loic Dachary wrote: Hi, You c

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Dan Nica
Well after upgrading the system to the latest packages it worked with “prepare osdserver:sdb”. Thank you, Dan From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Michal Chybowski Sent: Thursday, December 17, 2015 2:28 PM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] data partit

[ceph-users] active+undersized+degraded

2015-12-17 Thread Dan Nica
Hi, After managing to configure the osd server I created a pool "data" and removed pool "rbd" and now the cluster is stuck in active+undersized+degraded $ ceph status cluster 046b0180-dc3f-4846-924f-41d9729d48c8 health HEALTH_WARN 64 pgs degraded 64 pgs stuck unc

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Dan Nica
And the osd tree: $ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 21.81180 root default -2 21.81180 host rimu 0 7.27060 osd.0 up 1.0 1.0 1 7.27060 osd.1 up 1.0 1.0 2 7.27060 osd.2 up 1

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Burkhard Linke
Hi, On 12/17/2015 01:41 PM, Dan Nica wrote: And the osd tree: $ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 21.81180 root default -2 21.81180 host rimu 0 7.27060 osd.0 up 1.0 1.0 1 7.27060 osd.1 up 1.0

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Loris Cuoghi
On 17/12/2015 13:52, Burkhard Linke wrote: Hi, On 12/17/2015 01:41 PM, Dan Nica wrote: And the osd tree: $ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 21.81180 root default -2 21.81180 host rimu 0 7.27060 osd.0 up 1.0 1.0 1

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Loris Cuoghi
On 17/12/2015 13:57, Loris Cuoghi wrote: On 17/12/2015 13:52, Burkhard Linke wrote: Hi, On 12/17/2015 01:41 PM, Dan Nica wrote: And the osd tree: $ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 21.81180 root default -2 21.81180 host rimu 0 7.27060
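For context: the tree above shows all three OSDs on a single host, while the default CRUSH rule wants each replica on a distinct host, so a size-3 pool can never go clean there. A common single-node remedy, sketched as an assumption about this kind of setup rather than what the posters actually did:

```
# ceph.conf, for a one-host test cluster: let replicas land on
# distinct OSDs instead of distinct hosts
[global]
osd crush chooseleaf type = 0
```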

Re: [ceph-users] active+undersized+degraded

2015-12-17 Thread Dan Nica
Great, I also increased pg_num/pgp_num to 128 for that pool, "HEALTH_OK" now :) Thank you Dan From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Burkhard Linke Sent: Thursday, December 17, 2015 2:53 PM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] active+undersized+degrad
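For reference, the commands behind that fix:

```
ceph osd pool set data pg_num 128
ceph osd pool set data pgp_num 128
```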

[ceph-users] Metadata Server (MDS) Hardware Suggestions

2015-12-17 Thread Simon Hallam
Hi all, I'm looking at sizing up some new MDS nodes, but I'm not sure if my thought process is correct or not: CPU: Limited to a maximum 2 cores. The higher the GHz, the more IOPS available. So something like a single E5-2637v3 should fulfil this. Memory: The more the better, as the metadata ca
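On the memory point, the knob that bounds the MDS cache in this era of Ceph is mds cache size, counted in inodes rather than bytes; a hedged ceph.conf sketch:

```
[mds]
# default is 100000 inodes; raise if the node has RAM to spare
mds cache size = 1000000
```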

[ceph-users] radosgw problem - 411 http status

2015-12-17 Thread Jacek Jarosiewicz
Hi, I have a strange problem with the rados gateway. I'm getting an HTTP 411 status code (Missing Content Length) whenever I upload any file to ceph. The setup is: ceph 0.94.5, ubuntu 14.04, tengine (patched nginx). The strange thing is - everything worked like a charm until today, when I wante

Re: [ceph-users] [SOLVED] radosgw problem - 411 http status

2015-12-17 Thread Jacek Jarosiewicz
setting the "rgw content length compat" to true solved this... J On 12/17/2015 03:30 PM, Jacek Jarosiewicz wrote: Hi, I have a strange problem with the rados gateway. I'm getting Http 411 status code (Missing Content Length) whenever I upload any file to ceph. The setup is: ceph 0.94.5, ubunt

Re: [ceph-users] Migrate Block Volumes and VMs

2015-12-17 Thread Sebastien Han
What you can do is flatten all the images so you break the relationship between the parent image and the child. Then you can export/import. > On 15 Dec 2015, at 12:10, Sam Huracan wrote: > > Hi everybody, > > My OpenStack System use Ceph as backend for Glance, Cinder, Nova. In the > future, w
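A minimal sketch of that flatten-then-migrate flow (pool and image names are placeholders):

```
# Break the clone's dependency on its parent snapshot
rbd flatten images/vm-disk-1

# Then move the now self-contained image between clusters
rbd export images/vm-disk-1 vm-disk-1.raw
rbd import vm-disk-1.raw images/vm-disk-1
```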

[ceph-users] SSD only pool without journal

2015-12-17 Thread Misa
Hello everyone, does it make sense to create an SSD-only pool from OSDs without journals? From my point of view, the SSDs are so fast that an OSD journal on the SSD will not make much of a difference. Cheers Misa

[ceph-users] Problems with git.ceph.com release.asc keys

2015-12-17 Thread Tim Gipson
Is anyone else experiencing issues when they try to run a “ceph-deploy install” command when it gets to the rpm import of https://git.ceph.com/?p=ceph.git;a=blob_plain;f=keys/release.asc ? I also tried to curl the url with no luck. I get a 504 Gateway time-out error in ceph-deploy. Tim G. S

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-17 Thread Florian Haas
Hey Wido, On Dec 17, 2015 09:52, "Wido den Hollander" wrote: > > On 12/17/2015 06:29 AM, Ben Hines wrote: > > > > > > On Wed, Dec 16, 2015 at 11:05 AM, Florian Haas > > wrote: > > > > Hi Ben & everyone, > > > > > > Ben, you wrote elsewhere > > ( http://lis

Re: [ceph-users] SSD only pool without journal

2015-12-17 Thread Lionel Bouton
Hi, On 17/12/2015 16:47, Misa wrote: > Hello everyone, > > does it make sense to create an SSD-only pool from OSDs without journals? No, because AFAIK you can't have OSDs without journals yet. IIRC there is work done for alternate stores where you wouldn't need journals anymore but it's not yet pr

Re: [ceph-users] SSD only pool without journal

2015-12-17 Thread Loris Cuoghi
On 17/12/2015 16:47, Misa wrote: Hello everyone, does it make sense to create an SSD-only pool from OSDs without journals? From my point of view, the SSDs are so fast that an OSD journal on the SSD will not make much of a difference. Cheers Misa

[ceph-users] Dealing with radosgw and large OSD LevelDBs: compact, start over, something else?

2015-12-17 Thread Florian Haas
Hey everyone, I recently got my hands on a cluster that has been underperforming in terms of radosgw throughput, averaging about 60 PUTs/s with 70K objects where a freshly-installed cluster with near-identical configuration would do about 250 PUTs/s. (Neither of these values is what I'd consider

Re: [ceph-users] Kernel RBD hang on OSD Failure

2015-12-17 Thread Tom Christensen
I've just checked 1072 and 872, they both look the same, a single op for the object in question, in retry+read state, appears to be retrying forever. On Thu, Dec 17, 2015 at 10:05 AM, Tom Christensen wrote: > I had already nuked the previous hang, but we have another one: > > osdc output: > > 7
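For anyone reproducing this, the osdc output being discussed comes from the kernel client's debugfs interface:

```
# one directory per kernel ceph client instance; lists in-flight ops
cat /sys/kernel/debug/ceph/*/osdc
```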

[ceph-users] Fwd: Enable RBD Cache

2015-12-17 Thread Sam Huracan
-- Forwarded message -- From: Sam Huracan Date: 2015-12-18 1:03 GMT+07:00 Subject: Enable RBD Cache To: ceph-us...@ceph.com Hi, I'm testing OpenStack Kilo with Ceph 0.94.5, installed on Ubuntu 14.04. To enable RBD cache, I follow this tutorial: http://docs.ceph.com/docs/master/rbd

[ceph-users] Enable RBD Cache

2015-12-17 Thread Sam Huracan
Hi, I'm testing OpenStack Kilo with Ceph 0.94.5, installed on Ubuntu 14.04. To enable RBD cache, I follow this tutorial: http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova But when I check /var/run/ceph/guests on the compute nodes, there aren't any asok files. How can I enable RBD
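The compute-node ceph.conf section from that tutorial looks roughly like this (the admin socket line is what produces the asok files being looked for; the qemu process also needs write access to that directory, e.g. create it and chown it to the qemu/libvirt user):

```
[client]
rbd cache = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
```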

[ceph-users] [Ceph] Not able to use erasure code profile

2015-12-17 Thread quentin.dore
Hello, I try to use the default erasure code profile in Ceph (v0.80.9-0ubuntu0.14.04.2). I have created the following pool: $ ceph osd pool create defaultpool 12 12 erasure and tried to put a file in it like this: $ rados --pool=defaultpool put test test.tar But I am blocked in the process and ne

[ceph-users] Deploying a Ceph storage cluster using Warewulf on Centos-7

2015-12-17 Thread Chu Ruilin
Hi all, I don't know which automation tool is best for deploying Ceph and I'd like to hear opinions. I'm comfortable with Warewulf since I've been using it for HPC clusters. I find it quite convenient for Ceph too. I wrote a set of scripts that can deploy a Ceph cluster quickly. Here is how I did it

[ceph-users] Ceph read errors

2015-12-17 Thread Arseniy Seroka
Hello! I have the following error. Doing `scp` from local storage to ceph I'm getting errors in the files' contents. For example: ``` me@kraken:/ceph_storage/lib$ java -jar file.jar Error: Invalid or corrupt jarfile file.jar ``` If I check the md5, everything is ok. Then I go to another server m

Re: [ceph-users] v10.0.0 released

2015-12-17 Thread piotr.da...@ts.fujitsu.com
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Sage Weil > Sent: Monday, November 23, 2015 5:08 PM > > This is the first development release for the Jewel cycle. We are off to a > good start, with lots of performance

Re: [ceph-users] rados bench object not correct errors on v9.0.3

2015-12-17 Thread Dałek , Piotr
> -Original Message- > From: Deneau, Tom [mailto:tom.den...@amd.com] > Sent: Wednesday, August 26, 2015 5:23 PM > To: Dałek, Piotr; Sage Weil > > > There have been some recent changes to rados bench... Piotr, does > > > this seem like it might be caused by your changes? > > > > Yes. My PR

Re: [ceph-users] Initial performance cluster SimpleMessenger vs AsyncMessenger results

2015-12-17 Thread Dałek , Piotr
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Somnath Roy > Sent: Tuesday, October 13, 2015 8:46 AM > > Thanks Haomai.. > Since Async messenger is always using a constant number of threads , there > could be a potent

[ceph-users] problem on ceph installation on centos 7

2015-12-17 Thread Leung, Alex (398C)
Hi, I am trying to install ceph on a CentOS 7 system and I get the following error. Thanks in advance for the help. [ceph@hdmaster ~]$ ceph-deploy install mon0 osd0 osd1 osd2 [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked

[ceph-users] rbd du

2015-12-17 Thread Allen Liao
Hi all, The online manual (http://ceph.com/docs/master/man/8/rbd/) for rbd has documentation for the 'du' command. I'm running ceph 0.94.2 and that command isn't recognized, nor is it in the man page. Is there another command that will "calculate the provisioned and actual disk usage of all imag
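Until a release that ships rbd du, the usual workaround is to sum the extents reported by rbd diff; a sketch, assuming an image named rbd/myimage (provisioned size comes from 'rbd info'):

```
rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'
```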

Re: [ceph-users] rados bench object not correct errors on v9.0.3

2015-12-17 Thread Dałek , Piotr
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Sage Weil > Sent: Tuesday, August 25, 2015 7:43 PM > > I have built rpms from the tarball http://ceph.com/download/ceph- > 9.0.3.tar.bz2. > > Have done this for fedora 21

[ceph-users] Moderation queue

2015-12-17 Thread Patrick McGarry
Hey cephers, Looks like there was a list migration a while back and a bunch of things were getting stuck in an admin moderation state and never getting cleared. I have salvaged what I could, but if your message never made it to the list it now never will. Sorry for the inconvenience if there are a

[ceph-users] CoprHD Integrating Ceph

2015-12-17 Thread Patrick McGarry
Hey cephers, In the pursuit of openness I wanted to share a ceph-related bit of work that is happening beyond our immediate sphere of influence and see who is already contributing, or might be interested in the results. https://groups.google.com/forum/?hl=en#!topic/coprhddevsupport/llZeiTWxddM E

Re: [ceph-users] all three mons segfault at same time

2015-12-17 Thread Arnulf Heimsbakk
That's good to hear. My experience was pretty much the same, but depending on the load on the cluster I got a couple of crashes an hour to one a day after I upgraded everything. I'm interested to hear if your cluster stays stable over time. -Arnulf On 11/10/2015 07:09 PM, Logan V. wrote: > I am

Re: [ceph-users] all three mons segfault at same time

2015-12-17 Thread Arnulf Heimsbakk
Hi Logan! It seems that I've solved the segfaults on my monitors. Maybe not in the best way, but they seem to be gone. Originally my monitor servers ran Ubuntu Trusty on ext4, but they've now been converted to CentOS 7 with XFS as the root file system. They've run stable for 24h now. I'm still running

Re: [ceph-users] problem on ceph installation on centos 7

2015-12-17 Thread Bob R
Alex, It looks like you might have an old repo in there with priority=1 so it's not trying to install hammer. Try mv /etc/yum.repos.d/ceph.repo /etc/yum.repos.d/ceph.repo.old && mv /etc/yum.repos.d/ceph.repo.rpmnew /etc/yum.repos.d/ceph.repo then re-run ceph-deploy. Bob On Thu, Dec 10, 2015 at 1

Re: [ceph-users] [Ceph] Not able to use erasure code profile

2015-12-17 Thread ghislain.chevalier
Hi Quentin, Did you check that the pool was correctly created (PG allocation)? Sent from my Galaxy Ace4 Orange Original message From: quentin.d...@orange.com Date: 17/12/2015 19:45 (GMT+01:00) To: ceph-users@lists.ceph.com Cc: Subject: [ceph-users] [Ceph] Not able to use erasure cod
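Some hedged checks for that situation (pool name taken from the original post; on firefly the default profile is typically k=2, m=1 with a per-host failure domain, so it needs at least three hosts before PGs can go active):

```
ceph osd erasure-code-profile get default
ceph pg dump_stuck inactive
ceph osd pool get defaultpool pg_num
```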

[ceph-users] Cephfs: large files hang

2015-12-17 Thread Bryan Wright
Hi folks, This is driving me crazy. I have a ceph filesystem that behaves normally when I "ls" files, and behaves normally when I copy smallish files on or off of the filesystem, but large files (~ GB size) hang after copying a few megabytes. This is ceph 0.94.5 under CentOS 6.7 under kernel 4.3

Re: [ceph-users] Deploying a Ceph storage cluster using Warewulf on Centos-7

2015-12-17 Thread Chris Jones
Hi Chu, If you can use Chef then: https://github.com/ceph/ceph-chef An example of an actual project can be found at: https://github.com/bloomberg/chef-bcs Chris On Wed, Sep 23, 2015 at 4:11 PM, Chu Ruilin wrote: > Hi, all > > I don't know which automation tool is best for deploying Ceph and I

Re: [ceph-users] Metadata Server (MDS) Hardware Suggestions

2015-12-17 Thread John Spray
On Thu, Dec 17, 2015 at 2:31 PM, Simon Hallam wrote: > Hi all, > > > > I’m looking at sizing up some new MDS nodes, but I’m not sure if my thought > process is correct or not: > > > > CPU: Limited to a maximum 2 cores. The higher the GHz, the more IOPS > available. So something like a single E5-2

Re: [ceph-users] Ceph read errors

2015-12-17 Thread John Spray
On Wed, Aug 12, 2015 at 1:14 PM, Arseniy Seroka wrote: > Hello! I have a following error. > Doing `scp` from local storage to ceph I'm getting errors in file's > contents. > For example: > ``` > me@kraken:/ceph_storage/lib$ java -jar file.jar > Error: Invalid or corrupt jarfile file.jar > ``` > If

Re: [ceph-users] Deploying a Ceph storage cluster using Warewulf on Centos-7

2015-12-17 Thread Shinobu Kinjo
I prefer Puppet. Anyhow, are you going to use the Ceph cluster for /home, for some kind of computation area like scratch, or as a replacement for Lustre? Just asking. Thank you, Shinobu - Original Message - From: "Chris Jones" To: "Chu Ruilin" Cc: ceph-us...@ceph.com Sent: Friday, Decembe

Re: [ceph-users] problem on ceph installation on centos 7

2015-12-17 Thread Leung, Alex (398C)
Yup, thanks for the email. I got through the software installation part, but I ran into this problem when I try to create the initial monitors. [root@hdmaster ~]# su - ceph Last login: Thu Dec 17 14:32:22 PST 2015 on pts/2 [ceph@hdmaster ~]$ ceph-deploy -v mon create-initial [ceph_deploy.conf][DEBUG ] found configurat

Re: [ceph-users] mount.ceph not accepting options, please help

2015-12-17 Thread Gregory Farnum
On Wed, Dec 16, 2015 at 10:54 AM, Mike Miller wrote: > Hi, > > sorry, the question might seem very easy, probably my bad, but can you > please help me understand why I am unable to change the read-ahead size and other > options when mounting cephfs? > > mount.ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rs
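For reference, the shape of such a mount invocation (paths and sizes are illustrative; note that on newer kernels read-ahead is governed by rasize= rather than rsize=):

```
mount -t ceph m2:6789:/ /foo2 -v \
  -o name=cephfs,secretfile=/etc/ceph/cephfs.secret,rasize=16777216
```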

Re: [ceph-users] v10.0.0 released

2015-12-17 Thread Loic Dachary
The script handles UTF-8 fine, the copy/paste is at fault here ;-) On 24/11/2015 07:59, piotr.da...@ts.fujitsu.com wrote: >> -Original Message- >> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- >> ow...@vger.kernel.org] On Behalf Of Sage Weil >> Sent: Monday, November 23, 2015

[ceph-users] rgw deletes object data when multipart completion request timed out and retried

2015-12-17 Thread Gleb Borisov
Hi everyone, We are facing a strange issue on RadosGW (0.94.5-1precise, civetweb frontend behind nginx). nginx's access.log: 17/Dec/2015:20:34:55 "PUT /ZZZ?uploadId=XXX&partNumber=37 HTTP/1.1" 200 17/Dec/2015:20:34:57 "PUT /ZZZ?uploadId=XXX&partNumber=39 HTTP/1.1" 200 17/Dec/2015:20:35:47 "POST /ZZZ?u

Re: [ceph-users] Problems with git.ceph.com release.asc keys

2015-12-17 Thread Gregory Farnum
Apparently the keys are now at https://download.ceph.com/keys/release.asc and you need to upgrade your ceph-deploy (or maybe just change a config setting? I'm not really sure). -Greg On Thu, Dec 17, 2015 at 7:51 AM, Tim Gipson wrote: > Is anyone else experiencing issues when they try to run a “ce
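A manual workaround while waiting on an upgrade, using the key URL from Greg's message:

```
sudo rpm --import https://download.ceph.com/keys/release.asc
```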

Re: [ceph-users] Cephfs: large files hang

2015-12-17 Thread Gregory Farnum
On Thu, Dec 17, 2015 at 11:43 AM, Bryan Wright wrote: > Hi folks, > > This is driving me crazy. I have a ceph filesystem that behaves normally > when I "ls" files, and behaves normally when I copy smallish files on or off > of the filesystem, but large files (~ GB size) hang after copying a few >

Re: [ceph-users] Metadata Server (MDS) Hardware Suggestions

2015-12-17 Thread Gregory Farnum
On Thu, Dec 17, 2015 at 2:06 PM, John Spray wrote: > On Thu, Dec 17, 2015 at 2:31 PM, Simon Hallam wrote: >> Hi all, >> >> >> >> I’m looking at sizing up some new MDS nodes, but I’m not sure if my thought >> process is correct or not: >> >> >> >> CPU: Limited to a maximum 2 cores. The higher the

[ceph-users] cephfs, low performances

2015-12-17 Thread Francois Lafont
Hi, I have a ceph cluster, currently unused, and I have (to my mind) very low performance. I'm not an expert in benchmarks; here is an example of a quick bench: --- # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=readwrite
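The fio command line is truncated above; a typical invocation matching the visible flags would be something like this (everything after --gtod_reduce is an assumption):

```
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=readwrite --filename=/mnt/cephfs/test --bs=4k --iodepth=64 \
    --size=512M --readwrite=randrw --rwmixread=75
```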

Re: [ceph-users] cephfs, low performances

2015-12-17 Thread Christian Balzer
Hello, On Fri, 18 Dec 2015 03:36:12 +0100 Francois Lafont wrote: > Hi, > > I have a ceph cluster, currently unused, and I have (to my mind) very low > performance. I'm not an expert in benchmarks; here is an example of a quick > bench: > > --- >

Re: [ceph-users] Cephfs: large files hang

2015-12-17 Thread Chris Dunlop
Hi Bryan, Have you checked your MTUs? I was recently bitten by large packets not getting through where small packets would. (This list, Dec 14, "All pgs stuck peering".) Small files working but big files not working smells like it could be a similar problem. Cheers, Chris On Thu, Dec 17, 2015
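A quick way to test the MTU theory between two Ceph hosts (sizes assume a 9000-byte MTU; 28 bytes go to IP and ICMP headers):

```
# -M do forbids fragmentation; if this fails while plain pings succeed,
# large Ceph messages are likely being dropped along the path
ping -M do -s 8972 other-osd-host
```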