Re: [ceph-users] Cache tier weirdness

2016-03-01 Thread Christian Balzer
Talking to myself again ^o^, see below: On Sat, 27 Feb 2016 01:48:49 +0900 Christian Balzer wrote: > > Hello Nick, > > On Fri, 26 Feb 2016 09:46:03 - Nick Fisk wrote: > > > Hi Christian, > > > > > -Original Message- > > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.co

Re: [ceph-users] Cannot mount cephfs after some disaster recovery

2016-03-01 Thread Yan, Zheng
On Tue, Mar 1, 2016 at 11:51 AM, 1 <10...@candesoft.com> wrote: > Hi, > I ran into trouble mounting the cephfs after doing some disaster recovery > following the official > document (http://docs.ceph.com/docs/master/cephfs/disaster-recovery). > Now when I try to mount the cephfs, I get "m

Re: [ceph-users] Cannot mount cephfs after some disaster recovery

2016-03-01 Thread John Spray
On Tue, Mar 1, 2016 at 3:51 AM, 1 <10...@candesoft.com> wrote: > Hi, > I ran into trouble mounting the cephfs after doing some disaster recovery > following the official > document (http://docs.ceph.com/docs/master/cephfs/disaster-recovery). > > Now when I try to mount the cephfs, I get "

Re: [ceph-users] Cache tier weirdness

2016-03-01 Thread Nick Fisk
Interesting... see below > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Christian Balzer > Sent: 01 March 2016 08:20 > To: ceph-users@lists.ceph.com > Cc: Nick Fisk > Subject: Re: [ceph-users] Cache tier weirdness > > > > Talking to my

Re: [ceph-users] s3 bucket creation time

2016-03-01 Thread Luis Periquito
On Mon, Feb 29, 2016 at 11:20 PM, Robin H. Johnson wrote: > On Mon, Feb 29, 2016 at 04:58:07PM +, Luis Periquito wrote: >> Hi all, >> >> I have a biggish ceph environment and currently creating a bucket in >> radosgw can take as long as 20s. >> >> What affects the time a bucket takes to be cre

Re: [ceph-users] s3 bucket creation time

2016-03-01 Thread Abhishek Varshney
I once faced a similar issue. Did you try increasing the rgw log level to see what's happening? In my case, a lot of gc was happening on the rgw cache, which was causing latent operations. Thanks Abhishek On Tue, Mar 1, 2016 at 3:35 PM, Luis Periquito wrote: > On Mon, Feb 29, 2016 at 11:20 P
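
To raise the rgw log level on a running gateway without a restart, the admin socket works; a minimal sketch, assuming a hypothetical socket name of ceph-client.rgw.gw1.asok (the actual client name varies per deployment):

  ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gw1.asok config set debug_rgw 20
  ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gw1.asok config set debug_ms 1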

Re: [ceph-users] Cannot mount cephfs after some disaster recovery

2016-03-01 Thread Shinobu Kinjo
Thanks, John. Your additional explanation will be of much help to the community. Cheers, S - Original Message - From: "John Spray" To: "1" <10...@candesoft.com> Cc: "ceph-users" Sent: Tuesday, March 1, 2016 6:32:34 PM Subject: Re: [ceph-users] Cannot mount cephfs after some disaster

[ceph-users] omap support with erasure coded pools

2016-03-01 Thread Puerta Treceno, Jesus Ernesto (Nokia - ES)
Hi cephers, It seems that explicit omap insertions are not supported by EC pools (errno EOPNOTSUPP): $ rados -p cdvr_ec setomapval 'dummy_obj' 'test_key' 'test_value' error setting omap value cdvr_ec/dummy/test: (95) Operation not supported When trying the same with replicated pools, the above comm
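
For anyone reproducing this: omap data lives in the OSD's key/value store, which erasure-coded pools could not service at the time, so the same call is expected to succeed against a replicated pool. A minimal sketch, assuming a replicated pool named 'rbd':

  rados -p rbd setomapval dummy_obj test_key test_value
  rados -p rbd getomapval dummy_obj test_key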

[ceph-users] MDS memory sizing

2016-03-01 Thread Dietmar Rieder
Dear ceph users, I'm in the very initial phase of planning a ceph cluster and have a question regarding the RAM recommendation for an MDS. According to the ceph docs the minimum amount of RAM should be "1 GB minimum per daemon". Is this per OSD in the cluster or per MDS in the cluster? I plan to
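
For context, the MDS cache in this era is bounded by inode count rather than bytes, so the real RAM footprint scales with the cache setting. A rough sizing sketch, assuming the pre-Luminous 'mds cache size' knob and the commonly cited ballpark of a few KB of RAM per cached inode:

  [mds]
  ; default is 100000 inodes; at roughly 2-4 KB each,
  ; 1 million cached inodes implies a few GB of RAM for the daemon
  mds cache size = 1000000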

Re: [ceph-users] Cannot mount cephfs after some disaster recovery

2016-03-01 Thread Francois Lafont
Hi, On 01/03/2016 10:32, John Spray wrote: > As Zheng has said, that last number is the "max_mds" setting. And what is the meaning of the first and the second numbers in "1/1/0" below? mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active} -- François Lafont

Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread min fang
I can use the following command to change a parameter, for example as follows, but I am not sure whether it will work. ceph --admin-daemon /var/run/ceph/ceph-mon.openpower-0.asok config set rbd_readahead_disable_after_bytes 0 2016-03-01 15:07 GMT+08:00 Tom Christensen : > If you are mapping the
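
One caveat: rbd_readahead_* are librbd client options, so injecting them through a monitor's admin socket only changes that monitor's in-memory config and never reaches the client, and the kernel rbd client ignores them entirely. For a librbd consumer they would normally go into the client's ceph.conf instead; a sketch:

  [client]
  rbd readahead disable after bytes = 0   ; 0 = never switch readahead off
  rbd readahead max bytes = 4194304       ; allow up to 4 MB per readahead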

Re: [ceph-users] MDS memory sizing

2016-03-01 Thread Simon Hallam
Hi Dietmar, I asked the same question not long ago, so this may be relevant to you: http://www.spinics.net/lists/ceph-users/msg24359.html Cheers, Si > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Dietmar Rieder > Sent: 01 March 2016 1

Re: [ceph-users] MDS memory sizing

2016-03-01 Thread Yan, Zheng
On Tue, Mar 1, 2016 at 7:28 PM, Dietmar Rieder wrote: > Dear ceph users, > > > I'm in the very initial phase of planning a ceph cluster and have a > question regarding the RAM recommendation for an MDS. > > According to the ceph docs the minimum amount of RAM should be "1 GB > minimum per daemon".

Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread Adrien Gillard
As Tom stated, RBD cache only works if your client is using librbd (KVM clients for instance). Using the kernel RBD client, one of the parameters you can tune to optimize sequential reads is increasing /sys/class/block/rbd4/queue/read_ahead_kb Adrien On Tue, Mar 1, 2016 at 12:48 PM, min fang wro
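
The knob is per mapped device and takes kilobytes; a sketch (device name and value are examples, and the setting is lost when the device is unmapped):

  cat /sys/class/block/rbd4/queue/read_ahead_kb                    # current value
  echo 4096 | sudo tee /sys/class/block/rbd4/queue/read_ahead_kb   # try 4 MB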

Re: [ceph-users] ceph RGW NFS

2016-03-01 Thread Daniel Gryniewicz
On 02/28/2016 08:36 PM, David Wang wrote: Hi All, How is NFS on RGW progressing? Was it released in Infernalis? The contents of NFS on RGW are at http://tracker.ceph.com/projects/ceph/wiki/RGW_-_NFS The FSAL has been integrated into upstream Ganesha (https://github.com/nfs-ganesha/nfs-gan

Re: [ceph-users] ceph RGW NFS

2016-03-01 Thread Yehuda Sadeh-Weinraub
On Tue, Mar 1, 2016 at 7:23 AM, Daniel Gryniewicz wrote: > On 02/28/2016 08:36 PM, David Wang wrote: >> >> Hi All, >> How is NFS on RGW progressing? Was it released in Infernalis? The >> contents of NFS on RGW are at >> http://tracker.ceph.com/projects/ceph/wiki/RGW_-_NFS >> >> > > The FSAL has

Re: [ceph-users] Replacing OSD drive without rempaping pg's

2016-03-01 Thread Robert LeBlanc
With a fresh disk, you will need to remove the old key in ceph (ceph auth del osd.X) and the old osd (ceph osd rm X), but I think you can leave the CRUSH map alone (don't do ceph osd crush rm osd.X) so that there isn't any additional data movement (i
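
A sketch of that sequence, with X standing for the failed OSD's id; the CRUSH entry is deliberately kept so the replacement disk can come back under the same id without triggering a second rebalance:

  ceph auth del osd.X       # drop the old daemon's key
  ceph osd rm X             # remove the OSD from the osdmap
  # intentionally NOT running: ceph osd crush rm osd.X
  # then prepare the replacement disk (e.g. ceph-disk prepare); it
  # should be assigned the now-free id X and reuse the old CRUSH entry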

Re: [ceph-users] Cannot mount cephfs after some disaster recovery

2016-03-01 Thread John Spray
On Tue, Mar 1, 2016 at 11:41 AM, Francois Lafont wrote: > Hi, > > On 01/03/2016 10:32, John Spray wrote: > >> As Zheng has said, that last number is the "max_mds" setting. > > And what is the meaning of the first and the second number below? > > mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=u

[ceph-users] babeltrace and lttng-ust headed to EPEL 7

2016-03-01 Thread Ken Dreyer
lttng is destined for EPEL 7, so we will finally have lttng tracepoints in librbd for our EL7 Ceph builds, as we've done with the EL6 builds. https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-200bd827c6 https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-8c74b0b27f If you are running

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-03-01 Thread Ken Dreyer
In theory the RPM should contain either the init script, or the systemd .service files, but not both. If that's not the case, you can file a bug @ http://tracker.ceph.com/ . Patches are even better! - Ken On Tue, Mar 1, 2016 at 2:36 AM, Florent B wrote: > By the way, why /etc/init.d/ceph script

[ceph-users] Manual or fstab mount on Ceph FS

2016-03-01 Thread Jose M
Hi guys, easy question. If I need to mount a ceph FS on a client (manual mount or fstab), but this client won't be part of the ceph cluster (neither osd nor monitor node), do I still have to run the "ceph-deploy install ceph-client" command from the ceph admin node or is there another way? I
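
For a kernel-client mount, typically only ceph-common (for mount.ceph and key handling) plus the cluster's ceph.conf and a secret file are needed on such a box, not a full ceph-deploy install. A hedged fstab sketch, with example monitor addresses and secret path:

  # /etc/fstab
  192.168.0.1:6789,192.168.0.2:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2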

[ceph-users] blocked i/o on rbd device

2016-03-01 Thread Randy Orr
Hello, I am running the following: ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299) on ubuntu 14.04 with kernel 3.19.0-49-generic #55~14.04.1-Ubuntu SMP. For this use case I am mapping and mounting an rbd using the kernel client and exporting the ext4 filesystem via NFS to a number of c
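
For context, the setup described amounts to something like the following (pool, image, and export names are hypothetical, and the image is assumed to already carry the ext4 filesystem):

  sudo rbd map mypool/myimage            # kernel client, appears as /dev/rbdN
  sudo mount /dev/rbd/mypool/myimage /srv/export
  echo '/srv/export 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
  sudo exportfs -ra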

[ceph-users] Upgrade to INFERNALIS

2016-03-01 Thread Garg, Pankaj
Hi, I have upgraded my cluster from 0.94.4, as recommended, directly to the just released Infernalis (9.2.1) update (skipped 9.2.0). I installed the packages on each system manually (.deb files that I built). After that I followed the steps : Stop ceph-all chown -R ceph:ceph /var/lib/ceph start
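
Spelled out, those steps look like this; the chown is needed because the Infernalis daemons drop root privileges and run as user ceph:

  sudo stop ceph-all                     # upstart on this Ubuntu-era setup
  sudo chown -R ceph:ceph /var/lib/ceph
  sudo start ceph-all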

Re: [ceph-users] Upgrade to INFERNALIS

2016-03-01 Thread Francois Lafont
Hi, On 02/03/2016 00:12, Garg, Pankaj wrote: > I have upgraded my cluster from 0.94.4, as recommended, directly to the just > released Infernalis (9.2.1) update (skipped 9.2.0). > I installed the packages on each system manually (.deb files that I built). > > After that I followed the steps : >

Re: [ceph-users] User Interface

2016-03-01 Thread Vlad Blando
Any ideas guys? /Vlad On Tue, Mar 1, 2016 at 10:42 AM, Vlad Blando wrote: > Hi, > > We already have a user interface that is admin facing (ex. calamari, > kraken, ceph-dash), how about a client-facing interface that can cater for > both block and object store. For object store I can use Swif

Re: [ceph-users] ceph RGW NFS

2016-03-01 Thread David Wang
Thanks for the reply. I will wait for Jewel. 2016-03-02 0:29 GMT+08:00 Yehuda Sadeh-Weinraub : > On Tue, Mar 1, 2016 at 7:23 AM, Daniel Gryniewicz wrote: > > On 02/28/2016 08:36 PM, David Wang wrote: > >> > >> Hi All, > >> How is NFS on RGW progressing? Was it released in Infernalis? The > >>

Re: [ceph-users] v0.94.6 Hammer released

2016-03-01 Thread Chris Dunlop
Hi, The "old list of supported platforms" includes debian wheezy. Will v0.94.6 be built for this? Chris On Mon, Feb 29, 2016 at 10:57:53AM -0500, Sage Weil wrote: > The intention was to continue building stable releases (0.94.x) on the old > list of supported platforms (which inclues 12.04 and

Re: [ceph-users] Cannot mount cephfs after some disaster recovery

2016-03-01 Thread Francois Lafont
On 01/03/2016 18:14, John Spray wrote: >> And what is the meaning of the first and the second number below? >> >> mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active} > > Your whitespace got lost here I think, but I guess you're talking > about the 1/1 part. Ye

Re: [ceph-users] Replacing OSD drive without rempaping pg's

2016-03-01 Thread Lindsay Mathieson
On 02/03/16 02:41, Robert LeBlanc wrote: With a fresh disk, you will need to remove the old key in ceph (ceph auth del osd.X) and the old osd (ceph osd rm X), but I think you can leave the CRUSH map alone (don't do ceph osd crush rm osd.X) so that there isn't any additional data movement (if ther

[ceph-users] Restrict cephx commands

2016-03-01 Thread chris holcombe
Hey Ceph Users! I'm wondering if it's possible to restrict the ceph keyring to only being able to run certain commands. I think the answer to this is no but I just wanted to ask. I haven't seen any documentation indicating whether or not this is possible. Anyone know? Thanks, Chris
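
For what it's worth, mon caps can whitelist individual commands (OSD and MDS caps cannot); a hedged sketch of a key limited to two read-only commands:

  ceph auth get-or-create client.limited mon 'allow command "status", allow command "health"'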

Re: [ceph-users] Manual or fstab mount on Ceph FS

2016-03-01 Thread Yan, Zheng
On Wed, Mar 2, 2016 at 4:57 AM, Jose M wrote: > Hi guys, easy question. > > If I need to mount a ceph FS on a client (manual mount or fstab), but this > client won't be part of the ceph cluster (neither osd nor monitor node), do > I still have to run the "ceph-deploy install ceph-client" command

[ceph-users] INFARNALIS with 64K Kernel PAGES

2016-03-01 Thread Garg, Pankaj
Hi, Is there a known issue with using a 64K kernel PAGE_SIZE? I am using ARM64 systems, and I upgraded from 0.94.4 to 9.2.1 today. The system which was on a 4K page size came up OK and OSDs are all online. Systems with a 64K page size are all seeing the OSDs crash with the following stack: Begin dump of r

Re: [ceph-users] Upgrade to INFERNALIS

2016-03-01 Thread Garg, Pankaj
Thanks François. That was the issue. After changing the journal partition permissions, things look better now. -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Francois Lafont Sent: Tuesday, March 01, 2016 4:06 PM To: ceph-users@lists.ceph.com Subje
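
For the archive: a recursive chown of /var/lib/ceph does not follow the journal symlink out to a raw partition, so the device node needs its own ownership change. A sketch with a hypothetical journal partition; note /dev is recreated at boot, so ceph's packaged udev rules (which key on the journal partition type GUID) are what make this stick across reboots:

  sudo chown ceph:ceph /dev/sdb2   # hypothetical journal partition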

Re: [ceph-users] INFARNALIS with 64K Kernel PAGES

2016-03-01 Thread Somnath Roy
Did you recreate the OSDs on this setup, meaning did you do mkfs with a 64K page size? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg, Pankaj Sent: Tuesday, March 01, 2016 9:07 PM To: ceph-users@lists.ceph.com Subject: [ceph-users] INFARNALIS with 64K Kernel PAGES Hi,

Re: [ceph-users] INFARNALIS with 64K Kernel PAGES

2016-03-01 Thread Garg, Pankaj
The OSDs were created with a 64K page size, and mkfs was done with the same size. After the upgrade, I have not changed anything on the machine (except applying the ownership fix for files for user ceph:ceph) From: Somnath Roy [mailto:somnath@sandisk.com] Sent: Tuesday, March 01, 2016 9:32 PM To: Ga

Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread min fang
Thanks, with your help I set the read-ahead parameter. What are the cache parameters for the kernel module rbd? Such as: 1) what is the cache size? 2) Does it support writeback? 3) Will read-ahead be disabled once max bytes have been read into the cache? (similar in concept to "rbd_readahead_disable_after_by

Re: [ceph-users] rbd cache did not help improve performance

2016-03-01 Thread Josh Durgin
On 03/01/2016 10:03 PM, min fang wrote: Thanks, with your help I set the read-ahead parameter. What are the cache parameters for the kernel module rbd? Such as: 1) what is the cache size? 2) Does it support writeback? 3) Will read-ahead be disabled once max bytes have been read into the cache? (similar in
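
Short version for the archive: the kernel rbd client does not implement librbd's cache, so the rbd_cache* options simply don't apply to it. A mapped rbd behaves like any other block device: caching and writeback come from the Linux page cache and the filesystem on top, and readahead is the generic block-queue knob:

  cat /sys/class/block/rbd0/queue/read_ahead_kb   # krbd readahead, in KB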

Re: [ceph-users] INFARNALIS with 64K Kernel PAGES

2016-03-01 Thread Somnath Roy
Sorry, I missed that you are upgrading from Hammer... I think it is probably a bug introduced post-Hammer. Here is why it is happening, IMO: In hammer: - https://github.com/ceph/ceph/blob/hammer/src/os/FileJournal.cc#L158 In Master/Infernalis/Jewel: ---