Re: [ceph-users] Modification Time of RBD Images

2017-03-24 Thread Dongsheng Yang
Hi Jason, do you think this would be a good feature for rbd? Maybe we can implement an "rbd stat" command to show the atime, mtime and ctime of an image. Yang On 03/23/2017 08:36 PM, Christoph Adomeit wrote: Hi, no I did not enable the journalling feature since we do not use mirroring. On Thu, Mar
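A rough workaround in Jewel/Kraken, sketched here with a hypothetical image rbd/myimage: enable journaling (which requires the exclusive-lock feature) and inspect the journal that then records every write:

  $ rbd feature enable rbd/myimage journaling   # adds some write overhead
  $ rbd journal info --pool rbd --image myimage # shows the journal backing the image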

Re: [ceph-users] CentOS7 Mounting Problem

2017-03-24 Thread Georgios Dimitrakakis
Hi Tom and thanks a lot for the feedback. Indeed my root filesystem is on an LVM volume and I am currently running CentOS 7.3.1611 with kernel 3.10.0-514.10.2.el7.x86_64, and the Ceph version is 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f). The 60-ceph-by-parttypeuuid.rules on the system i
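For reference, that rule keys on the partition type GUID that blkid reports; a quick check, assuming a hypothetical OSD partition /dev/sda1:

  $ blkid -o udev -p /dev/sda1 | grep ID_PART_ENTRY
  # 60-ceph-by-parttypeuuid.rules builds the symlink
  # /dev/disk/by-parttypeuuid/$ID_PART_ENTRY_TYPE.$ID_PART_ENTRY_UUID from these values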

Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-24 Thread Piotr Dałek
On 03/23/2017 06:10 PM, nokia ceph wrote: Hello Piotr, I didn't understand; could you please elaborate on the procedure you mentioned in your last update? It would be really helpful if you shared any useful link/doc explaining what you actually meant. Yeah, correct, normally we do this proced
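For context, a minimal build-from-source iteration on Kraken looks roughly like this (a sketch from a git checkout, not necessarily the exact procedure Piotr has in mind):

  $ ./do_cmake.sh && cd build
  $ make ceph-osd   # rebuild only the binary you changed
  # copy the resulting binary over the installed one on a test node, then restart the daemon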

Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Brad Hubbard
On Fri, Mar 24, 2017 at 4:06 PM, Mika c wrote: > Hi all, > Same question with Ceph 10.2.3 and 11.2.0. > Is this command only for client.admin? > > client.symphony >key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg== >caps: [mon] allow * >caps: [osd] allow * > > Traceb
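For reference, ceph-rest-api is not restricted to client.admin; it can be started under another name, assuming that key exists and its keyring is readable (name taken from the poster's example):

  $ ceph-rest-api -n client.symphony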

Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread 邱宏瑋
Hi Deepak. Thanks for your reply. I tried to use gperf to profile the ceph-osd in basic mode (without RDMA) and you can see the result at the following link: http://imgur.com/a/SJgEL In the gperf result, we can see the whole CPU usage can be divided into three significant parts (Network: 36%, FileStore
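For anyone wanting to reproduce this, one way to CPU-profile an OSD with gperftools (library path is distro-dependent; osd id 0 is an example):

  $ LD_PRELOAD=/usr/lib64/libprofiler.so CPUPROFILE=/tmp/osd.prof ceph-osd -f -i 0
  $ pprof --text /usr/bin/ceph-osd /tmp/osd.prof | head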

Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-24 Thread nokia ceph
Brad, cool, now we are on the same track :) So whatever change we make under src/* maps to the respective RPM, correct? For eg:- src/osd/* -- ceph-osd src/common - ceph-common src/mon - ceph-mon src/mgr - ceph-mgr Since we are using bluestore with kraken, I thought to disable t
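To confirm the mapping for any installed file, rpm can answer directly (output shown is illustrative):

  $ rpm -qf /usr/bin/ceph-osd
  ceph-osd-11.2.0-0.el7.x86_64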

[ceph-users] ceph 'tech' question

2017-03-24 Thread mj
Hi all, Something that I am curious about: Suppose I have a three-server cluster, all with identical OSD configurations, and also a replication factor of three. That would mean (I guess) that all 3 servers have a copy of everything in the ceph pool. My question: given that every machine ha

Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-24 Thread nokia ceph
Piotr, thanks for the info. Yeah, this method is time-saving, but we have not yet started testing with the build-from-source method. We will consider this for our next round of testing :) On Fri, Mar 24, 2017 at 1:17 PM, Piotr Dałek wrote: > On 03/23/2017 06:10 PM, nokia ceph wrote: > >> Hello Piotr, >> >>

Re: [ceph-users] ceph 'tech' question

2017-03-24 Thread ulembke
Hi, no, Ceph reads from the primary PG, so approximately 33% of your reads are local. And why? Better distribution of read access. Udo On 2017-03-24 09:49, mj wrote: Hi all, Something that I am curious about: Suppose I have a three-server cluster, all with identical OSD configurations, and also a replic
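You can see which OSD is primary (and therefore serves the reads) for any given object; pool and object names below are hypothetical, and the output is abridged:

  $ ceph osd map rbd myobject
  osdmap e61092 pool 'rbd' (0) object 'myobject' -> ... acting ([2,0,1], p2)
  # p2 marks osd.2 as the primary serving reads for this object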

Re: [ceph-users] ceph 'tech' question

2017-03-24 Thread mj
On 03/24/2017 10:33 AM, ulem...@polarzone.de wrote: And why? Better distribution of read access. Udo Ah yes. On the other hand... in the case of specific often-requested data in your pool, the primary PG will have to handle all those requests, and in that case using a local copy would have
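If one node's OSDs run hot, how often they are chosen as primary can be tuned down (osd id and weight are examples; older clusters need mon osd allow primary affinity = true):

  $ ceph osd primary-affinity osd.2 0.5   # 1.0 = default, 0.0 = avoid primary where possible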

Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Mika c
Hi Brad, Thanks for your reply. The keyring file was already created and placed in /etc/ceph, but it is not working. I have to write the config into ceph.conf like below. ---ceph.conf start--- [client.symphony] log_file = /var/log/ceph/rest-api.log keyring = /etc/ceph/ceph.client.symp
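For reference, a complete section of that shape would look something like this, assuming the usual $cluster.$name.keyring naming (exact values are guesses):

  [client.symphony]
  log file = /var/log/ceph/rest-api.log
  keyring = /etc/ceph/ceph.client.symphony.keyring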

Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread Haomai Wang
The content of ceph.conf? On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋) wrote: > Hi Deepak. > > Thanks for your reply. > > I tried to use gperf to profile the ceph-osd in basic mode (without RDMA) > and you can see the result at the following link: > http://imgur.com/a/SJgEL > > In the gper

Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread 邱宏瑋
Hi. Basic:
[global]
fsid = 0612cc7e-6239-456c-978b-b4df781fe831
mon initial members = ceph-1,ceph-2,ceph-3
mon host = 10.0.0.15,10.0.0.16,10.0.0.17
osd pool default size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024

RDMA:
[global]
fsid = 0612cc7e-6239-456c-978b-b4df781fe831
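The RDMA variant presumably also carries the async+rdma messenger switches; the usual Kraken-era settings look like this (the device name is hypothetical and hardware-specific):

  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0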

Re: [ceph-users] Setting a different number of minimum replicas for reading and writing operations

2017-03-24 Thread Sergio A. de Carvalho Jr.
Ok, thanks for confirming. On Thu, Mar 23, 2017 at 7:32 PM, Gregory Farnum wrote: > Nope. This is a theoretical possibility but would take a lot of code > change that nobody has embarked upon yet. > -Greg > On Wed, Mar 22, 2017 at 2:16 PM Sergio A. de Carvalho Jr. < > scarvalh...@gmail.com> wrot

Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread Haomai Wang
Oh, you can refer to performance-related threads on the ceph/ceph-devel mailing lists to get an SSD-optimized ceph.conf; the default conf lacks good support for SSDs. On Fri, Mar 24, 2017 at 6:33 AM, Hung-Wei Chiu (邱宏瑋) wrote: > Hi. > > Basic > [global] > > fsid = 0612cc7e-6239-456c-978b-b4df781fe831 > mon
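The settings commonly cited in those threads are along these lines (values illustrative, not a recommendation):

  [osd]
  osd_op_num_shards = 8
  osd_op_num_threads_per_shard = 2
  debug_ms = 0/0
  debug_osd = 0/0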

Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread 邱宏瑋
Hi, thanks for your help! I will try it and let you know the results after I have finished. Thanks!! Haomai Wang wrote on Fri, Mar 24, 2017 at 6:44 PM: > Oh, you can refer to performance-related threads on ceph/ceph-devel > mailing lists to get an SSD-optimized ceph.conf; the default conf lacks good > support for SSDs. >

[ceph-users] Questions on rbd-mirror

2017-03-24 Thread Fulvio Galeazzi
Hallo, apologies for my (silly) questions. I did try to find some doc on rbd-mirror but was unable to, apart from a number of pages explaining how to install it. My environment is CentOS 7 and Ceph 10.2.5. Can anyone help me understand a few minor things: - is there a cleaner way to configure
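For reference, the basic pool-mode setup is only a couple of commands per side (pool, client, and cluster names are hypothetical):

  $ rbd mirror pool enable mypool pool
  $ rbd mirror pool peer add mypool client.remote@remote-cluster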

Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Brad Hubbard
On Fri, Mar 24, 2017 at 8:20 PM, Mika c wrote: > Hi Brad, > Thanks for your reply. The keyring file was already created and > placed in /etc/ceph, but it is not working. What was it called? > I have to write the config into ceph.conf like below. > > ---ceph.conf start--- > [client.symp

[ceph-users] ceph pg dump - last_scrub last_deep_scrub

2017-03-24 Thread Laszlo Budai
Hello, can someone tell me the meaning of the last_scrub and last_deep_scrub values in the ceph pg dump output? I could not find it with Google or in the documentation. For example, I can see here the last_scrub being 61092'4385, and the last_deep_scrub=61086'4379 pg_stat objects mip
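Those values appear to be eversion_t stamps printed as epoch'version, so they read apart directly:

  last_scrub      = 61092'4385   -> epoch 61092, version 4385
  last_deep_scrub = 61086'4379   -> epoch 61086, version 4379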

Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-24 Thread Alejandro Comisario
Thanks for the recommendations so far. Anyone with more experience and thoughts? Best On Mar 23, 2017 16:36, "Maxime Guyot" wrote: > Hi Alexandro, > > As I understand it, you are planning NVMe journals for the SATA HDDs and > collocated journals for the SATA SSDs? > > Option 1: > - 24x SATA SSDs per serv

[ceph-users] memory usage ceph jewel OSDs

2017-03-24 Thread Manuel Lausch
Hello, over the last few days I have been trying to figure out why my OSDs need a huge amount of RAM (1.2 - 4 GB). With this, my system memory is at its limit. At the beginning I thought it was because of the huge amount of backfilling (some disks died). But now, since a few days, all is good, yet the memory stays at its level. Res
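If the OSDs are built with tcmalloc, the heap commands can show whether that RAM is really in use or simply not returned to the kernel (osd id is an example):

  $ ceph tell osd.0 heap stats
  $ ceph tell osd.0 heap release   # ask tcmalloc to give free pages back to the OS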

Re: [ceph-users] Object Map Costs (Was: Snapshot Costs (Was: Re: Pool Sizes))

2017-03-24 Thread Kjetil Jørgensen
Hi, Depending on how you plan to use the omap, you might also want to avoid a large number of key/value pairs as well. CephFS got its directory fragment size capped due to large omaps being painful to deal with (see: http://tracker.ceph.com/issues/16164 and http://tracker.ceph.com/issues/16177).
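A quick way to gauge how big an object's omap has grown, with hypothetical pool/object names:

  $ rados -p mypool listomapkeys myobject | wc -l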

Re: [ceph-users] Modification Time of RBD Images

2017-03-24 Thread Kjetil Jørgensen
Hi, YMMV; this is riddled with assumptions (image is image-format=2, has one ext4 filesystem, no partition table, the ext4 superblock starts at 0x400, and probably a whole boatload of other stuff; I don't know when ext4 updates the s_wtime of its superblock, nor whether it is actually the superblock's last write or last
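A less fragile variant of the same idea, under the same assumptions (single ext4 filesystem, no partition table; image and device names hypothetical): map the image and let dumpe2fs decode the superblock instead of reading raw offsets:

  $ rbd map rbd/myimage
  $ dumpe2fs -h /dev/rbd0 | grep -i 'write time'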

[ceph-users] default pools gone. problem?

2017-03-24 Thread mj
Hi, In the docs on pools http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/ it says: The default pools are: *data *metadata *rbd My ceph install has only ONE pool called "ceph-storage"; the others are gone (probably deleted?). Is not having those default pools a prob

Re: [ceph-users] default pools gone. problem?

2017-03-24 Thread Bob R
You can operate without the default pools without issue. On Fri, Mar 24, 2017 at 1:23 PM, mj wrote: > Hi, > > In the docs on pools http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/ it says: > > The default pools are: > > *data > *metadata > *rbd > > My ceph install has
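The one caveat is that the rbd CLI defaults to a pool named 'rbd'; listing what exists and recreating it if wanted is straightforward (PG counts are examples):

  $ ceph osd lspools
  $ ceph osd pool create rbd 64 64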

Re: [ceph-users] default pools gone. problem?

2017-03-24 Thread mj
On 03/24/2017 10:13 PM, Bob R wrote: You can operate without the default pools without issue. Thanks!

[ceph-users] cephFS mounted on client shows space used -- when there is nothing used on the FS

2017-03-24 Thread Deepak Naidu
I have a CephFS cluster. Below is the df output from a client node. The question is why the df command, with the filesystem mounted via ceph-fuse or the kernel client, shows "used space" when nothing is used (empty: no files or directories). [root@storage ~]# df -h Filesystem
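This is expected behaviour as far as I know: the CephFS client reports cluster-wide statistics through statfs, so "used" reflects raw usage across the whole cluster (replication overhead, other pools), not the bytes in your files. Comparing the two views makes it visible (mount point hypothetical):

  $ ceph df            # cluster-wide raw and per-pool usage
  $ df -h /mnt/cephfs  # same raw numbers surfaced through the mount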

Re: [ceph-users] ceph pg dump - last_scrub last_deep_scrub

2017-03-24 Thread Brad Hubbard
On Fri, Mar 24, 2017 at 10:12 PM, Laszlo Budai wrote: > Hello, > > can someone tell me the meaning of the last_scrub and last_deep_scrub values > from the ceph pg dump output? > I could not find it with google nor in the documentation. > > for example I can see here the last_scrub being 61092

Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-24 Thread Alex Gorbachev
On Fri, Mar 24, 2017 at 10:04 AM Alejandro Comisario wrote: > thanks for the recommendations so far. > any one with more experiences and thoughts? > > best > On the network side, 25, 40, 56 and maybe soon 100 Gbps can now be fairly affordable, and simplify the architecture for the high throughpu

Re: [ceph-users] Preconditioning an RBD image

2017-03-24 Thread Alex Gorbachev
On Wed, Mar 22, 2017 at 6:05 AM Peter Maloney < peter.malo...@brockmann-consult.de> wrote: > Does iostat (eg. iostat -xmy 1 /dev/sd[a-z]) show high util% or await > during these problems? > It does, from watching atop. > > Ceph filestore requires lots of metadata writing (directory splitting f
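The directory splitting mentioned here is governed by a pair of filestore options; raising them (values illustrative) postpones splits at the cost of larger directories:

  [osd]
  filestore_merge_threshold = 40
  filestore_split_multiple = 8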