Re: [ceph-users] Modification Time of RBD Images

2017-03-24 Thread Dongsheng Yang

Hi Jason,

do you think this is a good feature for rbd?
maybe we can implement a "rbd stat" command
to show atime, mtime and ctime of an image.
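Until something like that exists, the usual workaround is the slow object
scan mentioned further down in this thread. A minimal sketch, assuming an
image-format 2 image called "myimage" in a pool called "rbd" (both names are
placeholders) and that the "rados stat" output carries the mtime in fields 3
and 4 (worth checking on your version):

  prefix=$(rbd info rbd/myimage | awk '/block_name_prefix/ {print $2}')
  # stat every data object of the image and keep the newest mtime;
  # this walks the whole pool listing, so it is slow and IO-heavy
  rados -p rbd ls | grep "$prefix" | \
    while read -r obj; do rados -p rbd stat "$obj"; done | \
    sort -k3,4 | tail -1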

Yang

On 03/23/2017 08:36 PM, Christoph Adomeit wrote:

Hi,

No, I did not enable the journaling feature since we do not use mirroring.


On Thu, Mar 23, 2017 at 08:10:05PM +0800, Dongsheng Yang wrote:

Did you enable the journaling feature?

On 03/23/2017 07:44 PM, Christoph Adomeit wrote:

Hi Yang,

I mean "any write" to this image.

I am sure we have a lot of not-used-anymore rbd images in our pool and I am 
trying to identify them.

The mtime would be a good hint to show which images might be unused.

Christoph

On Thu, Mar 23, 2017 at 07:32:49PM +0800, Dongsheng Yang wrote:

Hi Christoph,

On 03/23/2017 07:16 PM, Christoph Adomeit wrote:

Hello List,

I am wondering whether there is by now an easy method in Ceph to find more
information about RBD images.

For example I am interested in the modification time of an rbd image.

Do you mean some metadata change, such as a resize?

Or any write to this image?

Thanx
Yang

I found some posts from 2015 that say we have to go over all the objects of an
rbd image and find the newest mtime, but this is not a preferred solution for
me. It takes too much time and too many system resources.

Any Ideas ?

Thanks
   Christoph


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS7 Mounting Problem

2017-03-24 Thread Georgios Dimitrakakis

Hi Tom and thanks a lot for the feedback.

Indeed my root filesystem is on an LVM volume and I am currently 
running CentOS 7.3.1611 with kernel 3.10.0-514.10.2.el7.x86_64 and the 
ceph version is 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)


The 60-ceph-by-parttypeuuid.rules file on the system is the same as the one
in the bug you've mentioned, but unfortunately it still doesn't work.


So, are there any more ideas on how to further debug it?
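Some things that can help narrow this down (sdX/sdX1 below are placeholders
for one of the OSD data disks/partitions):

  # check the partition type GUID that the udev rule is supposed to match on
  sgdisk -i 1 /dev/sdX
  # replay the udev events for block devices and see whether the rule fires
  udevadm trigger --action=add --subsystem-match=block
  udevadm test $(udevadm info -q path -n /dev/sdX1) 2>&1 | grep -i ceph
  # or bypass udev for now and let ceph-disk mount and start the OSDs
  ceph-disk activate /dev/sdX1        # or: ceph-disk activate-all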

Best,

G.



Are you running the CentOS filesystem as LVM? This
(http://tracker.ceph.com/issues/16351 [1]) still seems to be an issue
on CentOS 7 that I've seen myself too. After migrating to a standard
filesystem layout (i.e. no LVM) the issue disappeared.

Regards,

 Tom

-

FROM: ceph-users  on behalf of Georgios Dimitrakakis
 SENT: Thursday, March 23, 2017 10:21:34 PM
 TO: ceph-us...@ceph.com
 SUBJECT: [ceph-users] CentOS7 Mounting Problem

Hello Ceph community!

 I would like some help with a new CEPH installation.

 I have installed Jewel on CentOS 7 and after the reboot my OSDs are not
 mounted automatically and, as a consequence, Ceph is not operating
 normally...

 What can I do?

 Could you please help me solve the problem?

 Regards,

 G.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [2]


Links:
--
[1] http://tracker.ceph.com/issues/16351
[2] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-24 Thread Piotr Dałek

On 03/23/2017 06:10 PM, nokia ceph wrote:

Hello Piotr,

I didn't understand, could you please elaborate about this procedure as
mentioned in the last update.  It would be really helpful if you share any
useful link/doc to understand what you actually meant. Yea correct, normally
we do this procedure but it takes more time. But here my intention is to how
to find out the rpm which caused the change. I think we are in opposite
direction.



Here's described how to build Ceph from source ("Build Ceph" paragraph): 
http://docs.ceph.com/docs/master/install/build-ceph/
And here's how to install the built binaries: 
http://docs.ceph.com/docs/master/install/install-storage-cluster/#installing-a-build
That's enough to build and install Ceph binaries on a specific host without
building RPMs. After a code change, "make install" is enough to update the
binaries; a restart of the Ceph daemons is still required.
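Roughly, following those docs (a sketch for a recent master/kraken checkout;
exact paths and steps can differ per release):

  git clone --recursive https://github.com/ceph/ceph.git && cd ceph
  ./install-deps.sh
  ./do_cmake.sh && cd build
  make -j$(nproc) && sudo make install
  # after a later code change, rebuild only the affected target, e.g.:
  make -j$(nproc) ceph-osd && sudo make install
  systemctl restart ceph-osd@<id>   # restart whichever daemons use the changed binary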


--
Piotr Dałek
piotr.da...@corp.ovh.com
https://www.ovh.com/us/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Brad Hubbard
On Fri, Mar 24, 2017 at 4:06 PM, Mika c  wrote:
> Hi all,
>  Same question with CEPH 10.2.3 and 11.2.0.
>   Is this command only for client.admin ?
>
> client.symphony
>key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg==
>caps: [mon] allow *
>caps: [osd] allow *
>
> Traceback (most recent call last):
>  File "/usr/bin/ceph-rest-api", line 43, in 
>rest,
>  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 504, in
> generate_a
> pp
>addr, port = api_setup(app, conf, cluster, clientname, clientid, args)
>  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 106, in
> api_setup
>app.ceph_cluster.connect()
>  File "rados.pyx", line 811, in rados.Rados.connect
> (/tmp/buildd/ceph-11.2.0/obj-x
> 86_64-linux-gnu/src/pybind/rados/pyrex/rados.c:10178)
> rados.ObjectNotFound: error connecting to the cluster

# strace -eopen /bin/ceph-rest-api |& grep keyring
open("/etc/ceph/ceph.client.restapi.keyring", O_RDONLY) = -1 ENOENT
(No such file or directory)
open("/etc/ceph/ceph.keyring", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/ceph/keyring", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/ceph/keyring.bin", O_RDONLY) = -1 ENOENT (No such file or directory)

# ceph auth get-or-create client.restapi mon 'allow *' mds 'allow *'
osd 'allow *' >/etc/ceph/ceph.client.restapi.keyring

# /bin/ceph-rest-api
 * Running on http://0.0.0.0:5000/
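A quick way to confirm it is answering (assuming the default base URL of
/api/v0.1; add an Accept header if you want JSON instead of plain text):

# curl http://localhost:5000/api/v0.1/health
# curl -H 'Accept: application/json' http://localhost:5000/api/v0.1/df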

>
>
>
> Best wishes,
> Mika
>
>
> 2016-03-03 12:25 GMT+08:00 Shinobu Kinjo :
>>
>> Yes.
>>
>> On Wed, Jan 27, 2016 at 1:10 PM, Dan Mick  wrote:
>> > Is the client.test-admin key in the keyring read by ceph-rest-api?
>> >
>> > On 01/22/2016 04:05 PM, Shinobu Kinjo wrote:
>> >> Does anyone have any idea about that?
>> >>
>> >> Rgds,
>> >> Shinobu
>> >>
>> >> - Original Message -
>> >> From: "Shinobu Kinjo" 
>> >> To: "ceph-users" 
>> >> Sent: Friday, January 22, 2016 7:15:36 AM
>> >> Subject: ceph-rest-api's behavior
>> >>
>> >> Hello,
>> >>
>> >> "ceph-rest-api" works greatly with client.admin.
>> >> But with client.test-admin which I created just after building the Ceph
>> >> cluster , it does not work.
>> >>
>> >>  ~$ ceph auth get-or-create client.test-admin mon 'allow *' mds 'allow
>> >> *' osd 'allow *'
>> >>
>> >>  ~$ sudo ceph auth list
>> >>  installed auth entries:
>> >>...
>> >>  client.test-admin
>> >>   key: AQCOVaFWTYr2ORAAKwruANTLXqdHOchkVvRApg==
>> >>   caps: [mds] allow *
>> >>   caps: [mon] allow *
>> >>   caps: [osd] allow *
>> >>
>> >>  ~$ ceph-rest-api -n client.test-admin
>> >>  Traceback (most recent call last):
>> >>File "/bin/ceph-rest-api", line 59, in 
>> >>  rest,
>> >>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line 504,
>> >> in generate_app
>> >>  addr, port = api_setup(app, conf, cluster, clientname, clientid,
>> >> args)
>> >>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line 106,
>> >> in api_setup
>> >>  app.ceph_cluster.connect()
>> >>File "/usr/lib/python2.7/site-packages/rados.py", line 485, in
>> >> connect
>> >>  raise make_ex(ret, "error connecting to the cluster")
>> >>  rados.ObjectNotFound: error connecting to the cluster
>> >>
>> >> # ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
>> >>
>> >> Is that expected behavior?
>> >> Or if I've missed anything, please point it out to me.
>> >>
>> >> Rgds,
>> >> Shinobu
>> >> ___
>> >> ceph-users mailing list
>> >> ceph-users@lists.ceph.com
>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>
>> >
>> >
>> > --
>> > Dan Mick
>> > Red Hat, Inc.
>> > Ceph docs: http://ceph.com/docs
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> --
>> Email:
>> shin...@linux.com
>> GitHub:
>> shinobu-x
>> Blog:
>> Life with Distributed Computational System based on OpenSource
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Cheers,
Brad
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread 邱宏瑋
Hi Deepak.

Thanks for your reply,

I tried to use gperf to profile ceph-osd in basic mode (without RDMA),
and you can see the result at the following link.
http://imgur.com/a/SJgEL

In the gperf result, we can see that the whole CPU usage can be divided into
three significant parts (Network: 36%, FileStore: 17%, PG related: 29%).
Do you have any idea about this?
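For anyone who wants to capture a similar CPU profile, one way is the
gperftools CPU profiler; a sketch, assuming gperftools is installed and the
OSD can be run in the foreground for the test (library path and soname vary
per distro):

  LD_PRELOAD=/usr/lib64/libprofiler.so.0 CPUPROFILE=/tmp/osd.0.prof \
      ceph-osd -f --cluster ceph --id 0
  # run the workload, stop the OSD, then:
  pprof --text /usr/bin/ceph-osd /tmp/osd.0.prof | head -20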

Thanks.


Best Regards,

Hung-Wei Chiu(邱宏瑋)
--
Computer Center, Department of Computer Science
National Chiao Tung University

2017-03-24 14:31 GMT+08:00 Deepak Naidu :

> >>Now, the kernel can decrease the CPU cycle usages for network I/O
> processing because of RDMA enabled (right?).
>
> Yes, kernel should have comparatively free cycles when using RDMA over TCP
>
> >>does it means host can provide more CPU for other processing, such as
> Disk I/O ?
>
> This can be subjective. Let's say you have an IO process which takes 10
> minutes over TCP; when using RDMA it might take 3 minutes, as your CPU
> cycles are less used for network processing. But the application process
> which provides the RDMA functionality may have processing cycles of its
> own, which use CPU as well (for example, client or FUSE plugins on the
> host). Getting it?
>
>
> --
> Deepak
>
> > On Mar 23, 2017, at 11:10 PM, Hung-Wei Chiu (邱宏瑋) 
> wrote:
> >
> > Now, the kernel can decrease the CPU cycle usages for network I/O
> processing because of RDMA enabled (right?).
> > does it means host can provide more CPU for other processing, such as
> Disk I/O ?
>
> 
> ---
> This email message is for the sole use of the intended recipient(s) and
> may contain
> confidential information.  Any unauthorized review, use, disclosure or
> distribution
> is prohibited.  If you are not the intended recipient, please contact the
> sender by
> reply email and destroy all copies of the original message.
> 
> ---
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-24 Thread nokia ceph
Brad, cool now we are on the same track :)

So whatever change we make under src/* is mapped to the respective RPM,
correct?

For eg:-
src/osd/* -- ceph-osd
src/common - ceph-common
src/mon  - ceph-mon
src/mgr   - ceph-mgr

Since we are using bluestore with kraken, I thought to disable the below
warning while triggering `ceph -s`:

~~~
WARNING: the following dangerous and experimental features are enabled:
~~~

Here I commented out the following lines in this file:

>vim src/common/ceph_context.cc
307 //  if (!cct->_experimental_features.empty())
308 //  lderr(cct) << "WARNING: the following dangerous and
experimental features are enabled: "
309 // << cct->_experimental_features << dendl;

As per my assumption, the change should be reflected in the binary shipped by
"ceph-common".

But when I looked closely at the librados library, this warning shows up
there as well:
#strings -a /usr/lib64/librados.so.2 | grep dangerous
WARNING: the following dangerous and experimental features are enabled: -->

Then I concluded that for this change both ceph-common and librados are required.

Please correct me if I'm wrong.
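One way to double-check which RPM ships a given binary or library, and which
binaries pull in the rebuilt librados (paths are the usual EL7 ones):

  rpm -qf /usr/bin/ceph-osd /usr/lib64/librados.so.2
  rpm -ql ceph-common | grep /usr/bin/
  ldd /usr/bin/ceph | grep librados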

On Fri, Mar 24, 2017 at 5:41 AM, Brad Hubbard  wrote:

> Oh wow, I completely misunderstood your question.
>
> Yes, src/osd/PG.cc and src/osd/PG.h are compiled into the ceph-osd binary
> which
> is included in the ceph-osd rpm as you said in your OP.
>
> On Fri, Mar 24, 2017 at 3:10 AM, nokia ceph 
> wrote:
> > Hello Piotr,
> >
> > I didn't understand, could you please elaborate about this procedure as
> > mentioned in the last update.  It would be really helpful if you share
> any
> > useful link/doc to understand what you actually meant. Yea correct,
> normally
> > we do this procedure but it takes more time. But here my intention is to
> how
> > to find out the rpm which caused the change. I think we are in opposite
> > direction.
> >
> >>> But wouldn't be faster and/or more convenient if you would just
> recompile
> >>> binaries in-place (or use network symlinks)
> >
> > Thanks
> >
> >
> >
> > On Thu, Mar 23, 2017 at 6:47 PM, Piotr Dałek 
> > wrote:
> >>
> >> On 03/23/2017 02:02 PM, nokia ceph wrote:
> >>
> >>> Hello Piotr,
> >>>
> >>> We do customizing ceph code for our testing purpose. It's a part of our
> >>> R&D :)
> >>>
> >>> Recompiling source code will create 38 rpm's out of these I need to
> find
> >>> which one is the correct rpm which I made change in the source code.
> >>> That's
> >>> what I'm try to figure out.
> >>
> >>
> >> Yes, I understand that. But wouldn't be faster and/or more convenient if
> >> you would just recompile binaries in-place (or use network symlinks)
> instead
> >> of packaging entire Ceph and (re)installing its packages each time you
> do
> >> the change? Generating RPMs takes a while.
> >>
> >>
> >> --
> >> Piotr Dałek
> >> piotr.da...@corp.ovh.com
> >> https://www.ovh.com/us/
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
>
> --
> Cheers,
> Brad
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph 'tech' question

2017-03-24 Thread mj

Hi all,

Something that I am curious about:

Suppose I have a three-server cluster, all with identical OSD 
configuration, and also a replication factor of three.


That would mean (I guess) that all 3 servers have a copy of everything 
in the ceph pool.


My question: given that every machine has all the data, does that also 
imply that reads will be LOCAL on each machine?


I'm asking because I understand that each PG has one primary copy 
optionally with extra secondary copies. (depending on the replication 
factor)


I have the feeling that local reads will usually be faster than reads 
over the network.


And if this is not the case, then why not? :-)

Thanks for any insights or pointers!

MJ
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recompiling source code - to find exact RPM

2017-03-24 Thread nokia ceph
Piotr, thanks for the info.

Yes, this method is time saving, but we have not started testing with the
build-from-source method. We will consider it for our next round of testing :)

On Fri, Mar 24, 2017 at 1:17 PM, Piotr Dałek 
wrote:

> On 03/23/2017 06:10 PM, nokia ceph wrote:
>
>> Hello Piotr,
>>
>> I didn't understand, could you please elaborate about this procedure as
>> mentioned in the last update.  It would be really helpful if you share any
>> useful link/doc to understand what you actually meant. Yea correct,
>> normally
>> we do this procedure but it takes more time. But here my intention is to
>> how
>> to find out the rpm which caused the change. I think we are in opposite
>> direction.
>>
>
>
> Here's described how to build Ceph from source ("Build Ceph" paragraph):
> http://docs.ceph.com/docs/master/install/build-ceph/
> And here's how to install the built binaries:
> http://docs.ceph.com/docs/master/install/install-storage-
> cluster/#installing-a-build
> That's enough to build and install Ceph binaries on a specific host
> without building RPMs. After doing a code change, "make install" is enough
> to update binaries, restart of Ceph daemons is still required.
>
>
> --
> Piotr Dałek
> piotr.da...@corp.ovh.com
> https://www.ovh.com/us/
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 'tech' question

2017-03-24 Thread ulembke

Hi,
No, Ceph reads from the primary PG (the primary OSD of each PG), so your
reads are approx. 33% local.

And why? Better distribution of read access.
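If you want to see which OSD is the primary for a particular object, Ceph can
tell you (pool and object names below are placeholders):

  ceph osd map rbd some-object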

Udo

Am 2017-03-24 09:49, schrieb mj:

Hi all,

Something that I am curious about:

Suppose I have a three-server cluster, all with identical OSDs
configuration, and also a replication factor of three.

That would mean (I guess) that all 3 servers have a copy of everything
in the ceph pool.

My question: given that every machine has all the data, does that also
imply that reads will be LOCAL on each machine?

I'm asking because I understand that each PG has one primary copy
optionally with extra secondary copies. (depending on the replication
factor)

I have the feeling that local reads will usually be faster than reads
over the network.

And if this is not the case, then why not? :-)

Thanks for any insights or pointers!

MJ
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 'tech' question

2017-03-24 Thread mj


On 03/24/2017 10:33 AM, ulem...@polarzone.de wrote:

And why? better distibution of read-access.

Udo


Ah yes.

On the other hand... In the case of specific often-requested data in 
your pool, the primary PG will have to handle all those requests, and in 
that case using a local copy would have benefits.


Anyway, thanks for your reply. :-)


MJ
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Mika c
Hi Brad,
 Thanks for your reply. The keyring file was already created and put in
/etc/ceph, but it is not working.
I had to write the config into ceph.conf like below.

---ceph.conf start---
[client.symphony]
log_file = /var/log/ceph/rest-api.log
keyring = /etc/ceph/ceph.client.symphony
public addr = 0.0.0.0:5000
restapi base url = /api/v0.1
---ceph.conf end---


Another question: must I set capabilities for this client like admin?
I just want to get some information like health or df.

If this client is set with particular capabilities like:
------
client.symphony
   key: AQBP8NRYGehDKRAAzyChAvAivydLqRBsHeTPjg==
   caps: [mon] allow r
   caps: [osd] allow rx
------
Error list:
Traceback (most recent call last):
 File "/usr/bin/ceph-rest-api", line 59, in 
   rest,
 File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 495, in
generate_a
pp
   addr, port = api_setup(app, conf, cluster, clientname, clientid, args)
 File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 146, in
api_setup
   target=('osd', int(osdid)))
 File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 84, in
get_command
_descriptions
   raise EnvironmentError(ret, err)
EnvironmentError: [Errno -1] Can't get command descriptions:




Best wishes,
Mika


2017-03-24 16:21 GMT+08:00 Brad Hubbard :

> On Fri, Mar 24, 2017 at 4:06 PM, Mika c  wrote:
> > Hi all,
> >  Same question with CEPH 10.2.3 and 11.2.0.
> >   Is this command only for client.admin ?
> >
> > client.symphony
> >key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg==
> >caps: [mon] allow *
> >caps: [osd] allow *
> >
> > Traceback (most recent call last):
> >  File "/usr/bin/ceph-rest-api", line 43, in 
> >rest,
> >  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 504, in
> > generate_a
> > pp
> >addr, port = api_setup(app, conf, cluster, clientname, clientid, args)
> >  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 106, in
> > api_setup
> >app.ceph_cluster.connect()
> >  File "rados.pyx", line 811, in rados.Rados.connect
> > (/tmp/buildd/ceph-11.2.0/obj-x
> > 86_64-linux-gnu/src/pybind/rados/pyrex/rados.c:10178)
> > rados.ObjectNotFound: error connecting to the cluster
>
> # strace -eopen /bin/ceph-rest-api |& grep keyring
> open("/etc/ceph/ceph.client.restapi.keyring", O_RDONLY) = -1 ENOENT
> (No such file or directory)
> open("/etc/ceph/ceph.keyring", O_RDONLY) = -1 ENOENT (No such file or
> directory)
> open("/etc/ceph/keyring", O_RDONLY) = -1 ENOENT (No such file or
> directory)
> open("/etc/ceph/keyring.bin", O_RDONLY) = -1 ENOENT (No such file or
> directory)
>
> # ceph auth get-or-create client.restapi mon 'allow *' mds 'allow *'
> osd 'allow *' >/etc/ceph/ceph.client.restapi.keyring
>
> # /bin/ceph-rest-api
>  * Running on http://0.0.0.0:5000/
>
> >
> >
> >
> > Best wishes,
> > Mika
> >
> >
> > 2016-03-03 12:25 GMT+08:00 Shinobu Kinjo :
> >>
> >> Yes.
> >>
> >> On Wed, Jan 27, 2016 at 1:10 PM, Dan Mick  wrote:
> >> > Is the client.test-admin key in the keyring read by ceph-rest-api?
> >> >
> >> > On 01/22/2016 04:05 PM, Shinobu Kinjo wrote:
> >> >> Does anyone have any idea about that?
> >> >>
> >> >> Rgds,
> >> >> Shinobu
> >> >>
> >> >> - Original Message -
> >> >> From: "Shinobu Kinjo" 
> >> >> To: "ceph-users" 
> >> >> Sent: Friday, January 22, 2016 7:15:36 AM
> >> >> Subject: ceph-rest-api's behavior
> >> >>
> >> >> Hello,
> >> >>
> >> >> "ceph-rest-api" works greatly with client.admin.
> >> >> But with client.test-admin which I created just after building the
> Ceph
> >> >> cluster , it does not work.
> >> >>
> >> >>  ~$ ceph auth get-or-create client.test-admin mon 'allow *' mds
> 'allow
> >> >> *' osd 'allow *'
> >> >>
> >> >>  ~$ sudo ceph auth list
> >> >>  installed auth entries:
> >> >>...
> >> >>  client.test-admin
> >> >>   key: AQCOVaFWTYr2ORAAKwruANTLXqdHOchkVvRApg==
> >> >>   caps: [mds] allow *
> >> >>   caps: [mon] allow *
> >> >>   caps: [osd] allow *
> >> >>
> >> >>  ~$ ceph-rest-api -n client.test-admin
> >> >>  Traceback (most recent call last):
> >> >>File "/bin/ceph-rest-api", line 59, in 
> >> >>  rest,
> >> >>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line
> 504,
> >> >> in generate_app
> >> >>  addr, port = api_setup(app, conf, cluster, clientname, clientid,
> >> >> args)
> >> >>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line
> 106,
> >> >> in api_setup
> >> >>  app.ceph_cluster.connect()
> >> >>File "/usr/lib/python2.7/site-packages/rados.py", line 485, in
> >> >> connect
> >> >>  raise make_ex(ret, "error connecting to the cluster")
> >> >>  rados.ObjectNotFound: error connecting to the cluster
> >> >>
> >> >> # ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
> >> >>
> >> >> I

Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread Haomai Wang
the content of  ceph.conf ?

On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋) 
wrote:

> Hi Deepak.
>
> Thansk your reply,
>
> I try to use gperf to profile the ceph-osd with basic mode (without RDMA)
> and you can see the result in the following link.
> http://imgur.com/a/SJgEL
>
> In the gperf result, we can see the whole CPU usage can divided into three
> significant part, (Network: 36%, FileStore: 17% PG related:29%).
> Do you have any idea about this?
>
> Thanks.
>
>
> Best Regards,
>
> Hung-Wei Chiu(邱宏瑋)
> --
> Computer Center, Department of Computer Science
> National Chiao Tung University
>
> 2017-03-24 14:31 GMT+08:00 Deepak Naidu :
>
>> >>Now, the kernel can decrease the CPU cycle usages for network I/O
>> processing because of RDMA enabled (right?).
>>
>> Yes, kernel should have comparatively free cycles when using RDMA over TCP
>>
>> >>does it means host can provide more CPU for other processing, such as
>> Disk I/O ?
>>
>> This can be subjective, bcos let's say u have an IO process which takes
>> 10 mins over TCP then when using RDMA it might be 3 mins as your CPU cycles
>> are less used by RDMA but if ur application process which provides the RDMA
>> functionality might have processing cycles as well, which might use CPU
>> cycles as well, example client or fuse plugins on the host getting it ?
>>
>>
>> --
>> Deepak
>>
>> > On Mar 23, 2017, at 11:10 PM, Hung-Wei Chiu (邱宏瑋) <
>> hwc...@cs.nctu.edu.tw> wrote:
>> >
>> > Now, the kernel can decrease the CPU cycle usages for network I/O
>> processing because of RDMA enabled (right?).
>> > does it means host can provide more CPU for other processing, such as
>> Disk I/O ?
>>
>> 
>> ---
>> This email message is for the sole use of the intended recipient(s) and
>> may contain
>> confidential information.  Any unauthorized review, use, disclosure or
>> distribution
>> is prohibited.  If you are not the intended recipient, please contact the
>> sender by
>> reply email and destroy all copies of the original message.
>> 
>> ---
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread 邱宏瑋
Hi.

Basic
[global]

fsid = 0612cc7e-6239-456c-978b-b4df781fe831
mon initial members = ceph-1,ceph-2,ceph-3
mon host = 10.0.0.15,10.0.0.16,10.0.0.17
osd pool default size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024


RDMA
[global]

fsid = 0612cc7e-6239-456c-978b-b4df781fe831
mon initial members = ceph-1,ceph-2,ceph-3
mon host = 10.0.0.15,10.0.0.16,10.0.0.17
osd pool default size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024
ms_type=async+rdma
ms_async_rdma_device_name = mlx4_0


Thanks.

Best Regards,

Hung-Wei Chiu(邱宏瑋)
--
Computer Center, Department of Computer Science
National Chiao Tung University

2017-03-24 18:28 GMT+08:00 Haomai Wang :

> the content of  ceph.conf ?
>
> On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋) <
> hwc...@cs.nctu.edu.tw> wrote:
>
>> Hi Deepak.
>>
>> Thansk your reply,
>>
>> I try to use gperf to profile the ceph-osd with basic mode (without RDMA)
>> and you can see the result in the following link.
>> http://imgur.com/a/SJgEL
>>
>> In the gperf result, we can see the whole CPU usage can divided into
>> three significant part, (Network: 36%, FileStore: 17% PG related:29%).
>> Do you have any idea about this?
>>
>> Thanks.
>>
>>
>> Best Regards,
>>
>> Hung-Wei Chiu(邱宏瑋)
>> --
>> Computer Center, Department of Computer Science
>> National Chiao Tung University
>>
>> 2017-03-24 14:31 GMT+08:00 Deepak Naidu :
>>
>>> >>Now, the kernel can decrease the CPU cycle usages for network I/O
>>> processing because of RDMA enabled (right?).
>>>
>>> Yes, kernel should have comparatively free cycles when using RDMA over
>>> TCP
>>>
>>> >>does it means host can provide more CPU for other processing, such as
>>> Disk I/O ?
>>>
>>> This can be subjective, bcos let's say u have an IO process which takes
>>> 10 mins over TCP then when using RDMA it might be 3 mins as your CPU cycles
>>> are less used by RDMA but if ur application process which provides the RDMA
>>> functionality might have processing cycles as well, which might use CPU
>>> cycles as well, example client or fuse plugins on the host getting it ?
>>>
>>>
>>> --
>>> Deepak
>>>
>>> > On Mar 23, 2017, at 11:10 PM, Hung-Wei Chiu (邱宏瑋) <
>>> hwc...@cs.nctu.edu.tw> wrote:
>>> >
>>> > Now, the kernel can decrease the CPU cycle usages for network I/O
>>> processing because of RDMA enabled (right?).
>>> > does it means host can provide more CPU for other processing, such as
>>> Disk I/O ?
>>>
>>> 
>>> ---
>>> This email message is for the sole use of the intended recipient(s) and
>>> may contain
>>> confidential information.  Any unauthorized review, use, disclosure or
>>> distribution
>>> is prohibited.  If you are not the intended recipient, please contact
>>> the sender by
>>> reply email and destroy all copies of the original message.
>>> 
>>> ---
>>>
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Setting a different number of minimum replicas for reading and writing operations

2017-03-24 Thread Sergio A. de Carvalho Jr.
Ok, thanks for confirming.

On Thu, Mar 23, 2017 at 7:32 PM, Gregory Farnum  wrote:

> Nope. This is a theoretical possibility but would take a lot of code
> change that nobody has embarked upon yet.
> -Greg
> On Wed, Mar 22, 2017 at 2:16 PM Sergio A. de Carvalho Jr. <
> scarvalh...@gmail.com> wrote:
>
>> Hi all,
>>
>> Is it possible to create a pool where the minimum number of replicas for
>> the write operation to be confirmed is 2 but the minimum number of replicas
>> to allow the object to be read is 1?
>>
>> This would be useful when a pool consists of immutable objects, so we'd
>> have:
>> * size 3 (we always keep 3 replicas of all objects)
>> * min size for write 2 (write is complete once 2 replicas are created)
>> * min size for read 1 (read is allowed even if only 1 copy of the object
>> is available)
>>
>> Thanks,
>>
>> Sergio
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread Haomai Wang
OH.. you can refer to performance related threads on ceph/ceph-devel
maillist to get ssd-optimized ceph.conf. the default conf lack of good
support on ssd.

On Fri, Mar 24, 2017 at 6:33 AM, Hung-Wei Chiu (邱宏瑋) 
wrote:

> Hi.
>
> Basic
> [global]
>
> fsid = 0612cc7e-6239-456c-978b-b4df781fe831
> mon initial members = ceph-1,ceph-2,ceph-3
> mon host = 10.0.0.15,10.0.0.16,10.0.0.17
> osd pool default size = 2
> osd pool default pg num = 1024
> osd pool default pgp num = 1024
>
>
> RDMA
> [global]
>
> fsid = 0612cc7e-6239-456c-978b-b4df781fe831
> mon initial members = ceph-1,ceph-2,ceph-3
> mon host = 10.0.0.15,10.0.0.16,10.0.0.17
> osd pool default size = 2
> osd pool default pg num = 1024
> osd pool default pgp num = 1024
> ms_type=async+rdma
> ms_async_rdma_device_name = mlx4_0
>
>
> Thanks.
>
> Best Regards,
>
> Hung-Wei Chiu(邱宏瑋)
> --
> Computer Center, Department of Computer Science
> National Chiao Tung University
>
> 2017-03-24 18:28 GMT+08:00 Haomai Wang :
>
>> the content of  ceph.conf ?
>>
>> On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋) <
>> hwc...@cs.nctu.edu.tw> wrote:
>>
>>> Hi Deepak.
>>>
>>> Thansk your reply,
>>>
>>> I try to use gperf to profile the ceph-osd with basic mode (without
>>> RDMA) and you can see the result in the following link.
>>> http://imgur.com/a/SJgEL
>>>
>>> In the gperf result, we can see the whole CPU usage can divided into
>>> three significant part, (Network: 36%, FileStore: 17% PG related:29%).
>>> Do you have any idea about this?
>>>
>>> Thanks.
>>>
>>>
>>> Best Regards,
>>>
>>> Hung-Wei Chiu(邱宏瑋)
>>> --
>>> Computer Center, Department of Computer Science
>>> National Chiao Tung University
>>>
>>> 2017-03-24 14:31 GMT+08:00 Deepak Naidu :
>>>
 >>Now, the kernel can decrease the CPU cycle usages for network I/O
 processing because of RDMA enabled (right?).

 Yes, kernel should have comparatively free cycles when using RDMA over
 TCP

 >>does it means host can provide more CPU for other processing, such as
 Disk I/O ?

 This can be subjective, bcos let's say u have an IO process which takes
 10 mins over TCP then when using RDMA it might be 3 mins as your CPU cycles
 are less used by RDMA but if ur application process which provides the RDMA
 functionality might have processing cycles as well, which might use CPU
 cycles as well, example client or fuse plugins on the host getting it ?


 --
 Deepak

 > On Mar 23, 2017, at 11:10 PM, Hung-Wei Chiu (邱宏瑋) <
 hwc...@cs.nctu.edu.tw> wrote:
 >
 > Now, the kernel can decrease the CPU cycle usages for network I/O
 processing because of RDMA enabled (right?).
 > does it means host can provide more CPU for other processing, such as
 Disk I/O ?

 
 ---
 This email message is for the sole use of the intended recipient(s) and
 may contain
 confidential information.  Any unauthorized review, use, disclosure or
 distribution
 is prohibited.  If you are not the intended recipient, please contact
 the sender by
 reply email and destroy all copies of the original message.
 
 ---

>>>
>>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] The performance of ceph with RDMA

2017-03-24 Thread 邱宏瑋
Hi

Thanks your help!

I will try it and let u know results after I have finished it.

Thanks!!
Haomai Wang 於 2017年3月24日 週五,下午6:44寫道:

> OH.. you can refer to performance related threads on ceph/ceph-devel
> maillist to get ssd-optimized ceph.conf. the default conf lack of good
> support on ssd.
>
> On Fri, Mar 24, 2017 at 6:33 AM, Hung-Wei Chiu (邱宏瑋) <
> hwc...@cs.nctu.edu.tw> wrote:
>
> Hi.
>
> Basic
> [global]
>
> fsid = 0612cc7e-6239-456c-978b-b4df781fe831
> mon initial members = ceph-1,ceph-2,ceph-3
> mon host = 10.0.0.15,10.0.0.16,10.0.0.17
> osd pool default size = 2
> osd pool default pg num = 1024
> osd pool default pgp num = 1024
>
>
> RDMA
> [global]
>
> fsid = 0612cc7e-6239-456c-978b-b4df781fe831
> mon initial members = ceph-1,ceph-2,ceph-3
> mon host = 10.0.0.15,10.0.0.16,10.0.0.17
> osd pool default size = 2
> osd pool default pg num = 1024
> osd pool default pgp num = 1024
> ms_type=async+rdma
> ms_async_rdma_device_name = mlx4_0
>
>
> Thanks.
>
> Best Regards,
>
> Hung-Wei Chiu(邱宏瑋)
> --
> Computer Center, Department of Computer Science
> National Chiao Tung University
>
> 2017-03-24 18:28 GMT+08:00 Haomai Wang :
>
> the content of  ceph.conf ?
>
> On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋) <
> hwc...@cs.nctu.edu.tw> wrote:
>
> Hi Deepak.
>
> Thansk your reply,
>
> I try to use gperf to profile the ceph-osd with basic mode (without RDMA)
> and you can see the result in the following link.
> http://imgur.com/a/SJgEL
>
> In the gperf result, we can see the whole CPU usage can divided into three
> significant part, (Network: 36%, FileStore: 17% PG related:29%).
> Do you have any idea about this?
>
> Thanks.
>
>
> Best Regards,
>
> Hung-Wei Chiu(邱宏瑋)
> --
> Computer Center, Department of Computer Science
> National Chiao Tung University
>
> 2017-03-24 14:31 GMT+08:00 Deepak Naidu :
>
> >>Now, the kernel can decrease the CPU cycle usages for network I/O
> processing because of RDMA enabled (right?).
>
> Yes, kernel should have comparatively free cycles when using RDMA over TCP
>
> >>does it means host can provide more CPU for other processing, such as
> Disk I/O ?
>
> This can be subjective, bcos let's say u have an IO process which takes 10
> mins over TCP then when using RDMA it might be 3 mins as your CPU cycles
> are less used by RDMA but if ur application process which provides the RDMA
> functionality might have processing cycles as well, which might use CPU
> cycles as well, example client or fuse plugins on the host getting it ?
>
>
> --
> Deepak
>
> > On Mar 23, 2017, at 11:10 PM, Hung-Wei Chiu (邱宏瑋) 
> wrote:
> >
> > Now, the kernel can decrease the CPU cycle usages for network I/O
> processing because of RDMA enabled (right?).
> > does it means host can provide more CPU for other processing, such as
> Disk I/O ?
>
>
> ---
> This email message is for the sole use of the intended recipient(s) and
> may contain
> confidential information.  Any unauthorized review, use, disclosure or
> distribution
> is prohibited.  If you are not the intended recipient, please contact the
> sender by
> reply email and destroy all copies of the original message.
>
> ---
>
>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Questions on rbd-mirror

2017-03-24 Thread Fulvio Galeazzi
Hello, apologies for my (silly) questions. I did try to find some documentation on
rbd-mirror but was unable to, apart from a number of pages explaining
how to install it.


My environment is CentOS 7 and Ceph 10.2.5.

Can anyone help me understand a few minor things:

 - is there a cleaner way to configure the user which will be used for
   rbd-mirror, other than editing the ExecStart in file 
/usr/lib/systemd/system/ceph-rbd-mirror@.service ?

   For example some line in ceph.conf... looks like the username
   defaults to the cluster name, am I right? (a sketch follows after these questions)

 - is it possible to throttle mirroring? Sure, it's a crazy thing to do
   for "cinder" pools, but may make sense for slowly changing ones, like
   a "glance" pool.

 - is it possible to set per-pool default features? I read about
"rbd default features = ###"
   but this is a global setting. (Ok, I can still restrict pools to be
   mirrored with "ceph auth" for the user doing mirroring)
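Regarding the first question: a possible sketch, assuming the packaged unit
file passes "--id %i" to rbd-mirror (worth verifying first with
"systemctl cat ceph-rbd-mirror@.service"); the client name and caps below are
only an illustration:

  ceph auth get-or-create client.rbd-mirror mon 'allow r' osd 'allow rwx' \
      -o /etc/ceph/ceph.client.rbd-mirror.keyring
  # the instance name after '@' becomes the client id, so no ExecStart editing
  systemctl enable ceph-rbd-mirror@rbd-mirror
  systemctl start ceph-rbd-mirror@rbd-mirror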


  Thanks!

Fulvio



smime.p7s
Description: S/MIME Cryptographic Signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-rest-api's behavior

2017-03-24 Thread Brad Hubbard
On Fri, Mar 24, 2017 at 8:20 PM, Mika c  wrote:
> Hi Brad,
>  Thanks for your reply. The environment already created keyring file and
> put it in /etc/ceph but not working.

What was it called?

> I have to write config into ceph.conf like below.
>
> ---ceph.conf start---
> [client.symphony]
> log_file = /var/log/ceph/rest-api.log
> keyring = /etc/ceph/ceph.client.symphony
> public addr = 0.0.0.0:5000
> restapi base url = /api/v0.1
> ---ceph.conf end---
>
>
> Another question: must I set capabilities for this client like admin?
> I just want to get some information like health or df.
>
> If this client is set with particular capabilities like:
> ------
> client.symphony
>    key: AQBP8NRYGehDKRAAzyChAvAivydLqRBsHeTPjg==
>    caps: [mon] allow r
>    caps: [osd] allow rx
> ------
> Error list:
> Traceback (most recent call last):
>  File "/usr/bin/ceph-rest-api", line 59, in 
>rest,
>  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 495, in
> generate_a
> pp
>addr, port = api_setup(app, conf, cluster, clientname, clientid, args)
>  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 146, in
> api_setup
>target=('osd', int(osdid)))
>  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 84, in
> get_command
> _descriptions
>raise EnvironmentError(ret, err)
> EnvironmentError: [Errno -1] Can't get command descriptions:
>
>
>
>
> Best wishes,
> Mika
>
>
> 2017-03-24 16:21 GMT+08:00 Brad Hubbard :
>>
>> On Fri, Mar 24, 2017 at 4:06 PM, Mika c  wrote:
>> > Hi all,
>> >  Same question with CEPH 10.2.3 and 11.2.0.
>> >   Is this command only for client.admin ?
>> >
>> > client.symphony
>> >key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg==
>> >caps: [mon] allow *
>> >caps: [osd] allow *
>> >
>> > Traceback (most recent call last):
>> >  File "/usr/bin/ceph-rest-api", line 43, in 
>> >rest,
>> >  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 504, in
>> > generate_a
>> > pp
>> >addr, port = api_setup(app, conf, cluster, clientname, clientid,
>> > args)
>> >  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 106, in
>> > api_setup
>> >app.ceph_cluster.connect()
>> >  File "rados.pyx", line 811, in rados.Rados.connect
>> > (/tmp/buildd/ceph-11.2.0/obj-x
>> > 86_64-linux-gnu/src/pybind/rados/pyrex/rados.c:10178)
>> > rados.ObjectNotFound: error connecting to the cluster
>>
>> # strace -eopen /bin/ceph-rest-api |& grep keyring
>> open("/etc/ceph/ceph.client.restapi.keyring", O_RDONLY) = -1 ENOENT
>> (No such file or directory)
>> open("/etc/ceph/ceph.keyring", O_RDONLY) = -1 ENOENT (No such file or
>> directory)
>> open("/etc/ceph/keyring", O_RDONLY) = -1 ENOENT (No such file or
>> directory)
>> open("/etc/ceph/keyring.bin", O_RDONLY) = -1 ENOENT (No such file or
>> directory)
>>
>> # ceph auth get-or-create client.restapi mon 'allow *' mds 'allow *'
>> osd 'allow *' >/etc/ceph/ceph.client.restapi.keyring
>>
>> # /bin/ceph-rest-api
>>  * Running on http://0.0.0.0:5000/
>>
>> >
>> >
>> >
>> > Best wishes,
>> > Mika
>> >
>> >
>> > 2016-03-03 12:25 GMT+08:00 Shinobu Kinjo :
>> >>
>> >> Yes.
>> >>
>> >> On Wed, Jan 27, 2016 at 1:10 PM, Dan Mick  wrote:
>> >> > Is the client.test-admin key in the keyring read by ceph-rest-api?
>> >> >
>> >> > On 01/22/2016 04:05 PM, Shinobu Kinjo wrote:
>> >> >> Does anyone have any idea about that?
>> >> >>
>> >> >> Rgds,
>> >> >> Shinobu
>> >> >>
>> >> >> - Original Message -
>> >> >> From: "Shinobu Kinjo" 
>> >> >> To: "ceph-users" 
>> >> >> Sent: Friday, January 22, 2016 7:15:36 AM
>> >> >> Subject: ceph-rest-api's behavior
>> >> >>
>> >> >> Hello,
>> >> >>
>> >> >> "ceph-rest-api" works greatly with client.admin.
>> >> >> But with client.test-admin which I created just after building the
>> >> >> Ceph
>> >> >> cluster , it does not work.
>> >> >>
>> >> >>  ~$ ceph auth get-or-create client.test-admin mon 'allow *' mds
>> >> >> 'allow
>> >> >> *' osd 'allow *'
>> >> >>
>> >> >>  ~$ sudo ceph auth list
>> >> >>  installed auth entries:
>> >> >>...
>> >> >>  client.test-admin
>> >> >>   key: AQCOVaFWTYr2ORAAKwruANTLXqdHOchkVvRApg==
>> >> >>   caps: [mds] allow *
>> >> >>   caps: [mon] allow *
>> >> >>   caps: [osd] allow *
>> >> >>
>> >> >>  ~$ ceph-rest-api -n client.test-admin
>> >> >>  Traceback (most recent call last):
>> >> >>File "/bin/ceph-rest-api", line 59, in 
>> >> >>  rest,
>> >> >>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line
>> >> >> 504,
>> >> >> in generate_app
>> >> >>  addr, port = api_setup(app, conf, cluster, clientname,
>> >> >> clientid,
>> >> >> args)
>> >> >>File "/usr/lib/python2.7/site-packages/ceph_rest_api.py", line
>> >> >> 106,
>> >> >> in api_setup
>> >> >>  app.ceph_cluster.connect()
>> >> >>File "

[ceph-users] ceph pg dump - last_scrub last_deep_scrub

2017-03-24 Thread Laszlo Budai

Hello,

Can someone tell me the meaning of the last_scrub and last_deep_scrub values
from the ceph pg dump output?
I could not find it with Google nor in the documentation.

for example I can see here the last_scrub being 61092'4385, and the 
last_deep_scrub=61086'4379

pg_stat objects mip degrmispunf bytes   log disklog state   
state_stamp v   reportedup  up_primary  acting  
acting_primary  last_scrub  scrub_stamp last_deep_scrub deep_scrub_stamp
3.617   6   0   0   0   0   2215116830143014
active+clean2017-03-24 10:17:53.904393  61092'4385  61390:21953 
[9,58,35]   9   [9,58,35]   9   61092'4385  2017-03-24 
09:51:12.798365  61086'4379  2017-03-17 21:23:01.695528

Thank you,
Laszlo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-24 Thread Alejandro Comisario
Thanks for the recommendations so far.
Anyone with more experience and thoughts?

best

On Mar 23, 2017 16:36, "Maxime Guyot"  wrote:

> Hi Alexandro,
>
> As I understand you are planning NVMe for Journal for SATA HDD and
> collocated journal for SATA SSD?
>
> Option 1:
> - 24x SATA SSDs per server, will have a bottleneck with the storage
> bus/controller.  Also, I would consider the network capacity 24xSSDs will
> deliver more performance than 24xHDD with journal, but you have the same
> network capacity on both types of nodes.
> - This option is a little easier to implement: just move nodes in
> different CRUSHmap root
> - Failure of a server (assuming size = 3) will impact all PGs
> Option 2:
> - You may have noisy neighbors effect between HDDs and SSDs, if HDDs are
> able to saturate your NICs or storage controller. So be mindful of this
> with the hardware design
> - To configure the CRUSHmap for this you need to split each server in 2; I
> usually use “server1-hdd” and “server1-ssd” and map the right OSDs into the
> right bucket, so a little extra work here, but you can easily fix a “crush
> location hook” script for it (see example
> http://www.root314.com/2017/01/15/Ceph-storage-tiers/ and the sketch after
> this list)
> - In case of a server failure recovery will be faster than option 1 and
> will impact less PGs
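A minimal sketch of what that per-server hdd/ssd split can look like from the
CLI (bucket, root, pool and rule names are just examples; a crush location
hook automates the same placement when OSDs start):

  ceph osd crush add-bucket ssd root
  ceph osd crush add-bucket server1-ssd host
  ceph osd crush move server1-ssd root=ssd
  # re-home an SSD OSD under the new host bucket (weight is a placeholder)
  ceph osd crush create-or-move osd.12 0.5 host=server1-ssd root=ssd
  # create a rule that only selects the ssd root, then attach it to the SSD pool
  ceph osd crush rule create-simple ssd_rule ssd host
  ceph osd pool set poolSSD crush_ruleset <ruleset id from 'ceph osd crush rule dump ssd_rule'>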
>
> Some general notes:
> - SSD pools perform better with higher frequency CPUs
> - the 1GB of RAM per TB is a little outdated, the current consensus for
> HDD OSDs is around 2GB/OSD (see https://www.redhat.com/cms/
> managed-files/st-rhcs-config-guide-technology-detail-
> inc0387897-201604-en.pdf)
> - Network wise, if the SSD OSDs are rated for 500MB/s and use collocated
> journal you could generate up to 250MB/s of traffic per SSD OSD (24Gbps for
> 12x or 48Gbps for 24x) therefore I would consider doing 4x10G and
> consolidate both client and cluster network on that
>
> Cheers,
> Maxime
>
> On 23/03/17 18:55, "ceph-users on behalf of Alejandro Comisario" <
> ceph-users-boun...@lists.ceph.com on behalf of alejan...@nubeliu.com>
> wrote:
>
> Hi everyone!
> I have to install a ceph cluster (6 nodes) with two "flavors" of
> disks, 3 servers with SSD and 3 servers with SATA.
>
> I will purchase 24-disk servers (the SATA ones with an NVMe SSD for
> the SATA journals).
> Processors will be 2 x E5-2620v4 with HT, and RAM will be 20 GB for the
> OS plus 1.3 GB of RAM per TB of storage.
>
> The servers will have 2 x 10Gb bonding for public network and 2 x 10Gb
> for cluster network.
> My doubts reside here, and I want to ask the community about experiences and
> the pains and gains of choosing between:
>
> Option 1
> 3 x servers just for SSD
> 3 x servers jsut for SATA
>
> Option 2
> 6 x servers with 12 SSD and 12 SATA each
>
> Regarding crushmap configuration and rules everything is clear to make
> sure that two pools (poolSSD and poolSATA) uses the right disks.
>
> But, what about performance, maintenance, architecture scalability,
> etc ?
>
> thank you very much !
>
> --
> Alejandrito
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] memory usage ceph jewel OSDs

2017-03-24 Thread Manuel Lausch
Hello,

In the last days I have been trying to figure out why my OSDs need a huge
amount of RAM (1.2 - 4 GB). With this, my system memory is at its limit. At
the beginning I thought it was because of the huge amount of backfilling (some
disks died). But now, since a few days, all is good, yet the memory stays at
that level. Restarting the OSDs did not change this behaviour.

I am running Ceph Jewel (10.2.6) on RedHat 7. The cluster has 8 hosts with
36 4 TB OSDs each and 4 hosts with 15 4 TB OSDs each.

I tried to profile the used memory like documented here: 
http://docs.ceph.com/docs/jewel/rados/troubleshooting/memory-profiling/

But the output of these commands didn't help me, and I am confused about
the reported memory usage.

from ceph tell osd.98 heap dump I get the following output:
# ceph tell osd.98 heap dump
osd.98 dumping heap profile now.

MALLOC:     1290458456 ( 1230.7 MiB) Bytes in use by application
MALLOC: +            0 (    0.0 MiB) Bytes in page heap freelist
MALLOC: +     63583000 (   60.6 MiB) Bytes in central cache freelist
MALLOC: +      5896704 (    5.6 MiB) Bytes in transfer cache freelist
MALLOC: +    102784400 (   98.0 MiB) Bytes in thread cache freelists
MALLOC: +     11350176 (   10.8 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =   1474072736 ( 1405.8 MiB) Actual memory used (physical + swap)
MALLOC: +    129064960 (  123.1 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =   1603137696 ( 1528.9 MiB) Virtual address space used
MALLOC:
MALLOC:          88305              Spans in use
MALLOC:           1627              Thread heaps in use
MALLOC:           8192              Tcmalloc page size

Call ReleaseFreeMemory() to release freelist memory to the OS (via
madvise()). Bytes released to the OS take up virtual address space but
no physical memory.
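As the note above says, tcmalloc holds freed memory in its freelists; you can
ask it to return that part to the OS (this only reclaims the freelists, not
what the OSD actually has in use):

  ceph tell osd.98 heap release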


I would say the application needs 1230.7 MB of RAM. But if I analyse
the corresponding dump with pprof, there are only a few megabytes
mentioned. Following are the first few lines of the pprof output:

# pprof --text /usr/bin/ceph-osd osd.98.profile.0002.heap 
Using local file /usr/bin/ceph-osd.
Using local file osd.98.profile.0002.heap.
Total: 8.9 MB
 3.3  36.7%  36.7%  3.3  36.7% ceph::log::Log::create_entry
 2.3  25.5%  62.2%  2.3  25.5% ceph::buffer::list::append@a1f280
 1.1  12.1%  74.3%  2.0  23.1% SimpleMessenger::add_accept_pipe
 0.9  10.4%  84.7%  0.9  10.5% Pipe::Pipe
 0.2   2.8%  87.5%  0.2   2.8% std::map::operator[]
 0.2   2.2%  89.7%  0.2   2.2% std::vector::_M_default_append
 0.2   1.8%  91.5%  0.2   1.8% std::_Rb_tree::_M_copy
 0.1   0.8%  92.4%  0.1   0.8% ceph::buffer::create_aligned
 0.1   0.8%  93.2%  0.1   0.8% std::string::_Rep::_S_create


Is this normal? Am I doing something wrong? Is there a bug? Why do my
OSDs need so much RAM?

Thanks for your help

Regards,
Manuel

-- 
Manuel Lausch

Systemadministrator
Cloud Services

1&1 Mail & Media Development & Technology GmbH | Brauerstraße 48 |
76135 Karlsruhe | Germany Phone: +49 721 91374-1847
E-Mail: manuel.lau...@1und1.de | Web: www.1und1.de

Amtsgericht Montabaur, HRB 5452

Geschäftsführer: Frank Einhellinger, Thomas Ludwig, Jan Oetjen


Member of United Internet

Diese E-Mail kann vertrauliche und/oder gesetzlich geschützte
Informationen enthalten. Wenn Sie nicht der bestimmungsgemäße Adressat
sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie
bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem
bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern,
weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu
verwenden.

This e-mail may contain confidential and/or privileged information. If
you are not the intended recipient of this e-mail, you are hereby
notified that saving, distribution or use of the content of this e-mail
in any way is prohibited. If you have received this e-mail in error,
please notify the sender and delete the e-mail.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Object Map Costs (Was: Snapshot Costs (Was: Re: Pool Sizes))

2017-03-24 Thread Kjetil Jørgensen
Hi,

Depending on how you plan to use the omap - you might also want to avoid a
large number of key/value pairs as well. CephFS got its directory fragment
size capped due to large omaps being painful to deal with (see:
http://tracker.ceph.com/issues/16164 and
http://tracker.ceph.com/issues/16177).
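(For quick experiments with omap from the command line, the rados tool has
key/value subcommands; pool, object and key names below are placeholders:)

  rados -p rbd setomapval myobject mykey myvalue
  rados -p rbd listomapkeys myobject
  rados -p rbd getomapval myobject mykey
  rados -p rbd listomapvals myobject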

Cheers,
KJ

On Thu, Mar 9, 2017 at 3:02 PM, Max Yehorov  wrote:

> re: python library
>
> you can do some mon calls using this:
>
> ##--
> import rados
> from ceph_argparse import json_command
>
> # assumes the usual config path; connect() returns None, so pass the
> # Rados instance itself to json_command as the cluster handle
> rados_inst = rados.Rados(conffile='/etc/ceph/ceph.conf')
> rados_inst.connect()
>
> cmd = {'prefix': 'pg dump', 'dumpcontents': ['summary', ], 'format': 'json'}
> retcode, jsonret, errstr = json_command(rados_inst, argdict=cmd)
> ##--
>
>
> MON commands
> https://github.com/ceph/ceph/blob/a68106934c5ed28d0195d6104bce59
> 81aca9aa9d/src/mon/MonCommands.h
>
> On Wed, Mar 8, 2017 at 2:01 PM, Kent Borg  wrote:
> > I'm slowly working my way through Ceph's features...
> >
> > I recently happened upon object maps. (I had heard of LevelDB being in
> there
> > but never saw how to use it: That's because I have been using Python! And
> > the Python library is missing lots of features! Grrr.)
> >
> > How fast are those omap calls?
> >
> > Which is faster: a single LevelDB query yielding a few bytes vs. a single
> > RADOS object read of that many bytes at a specific offset?
> >
> > How about iterating through a whole set of values vs. reading a RADOS
> object
> > holding the same amount of data?
> >
> > Thanks,
> >
> > -kb, the Kent who is guessing LevelDB will be slower in both cases,
> because
> > he really isn't using the key/value aspect of LevelDB but is still paying
> > for it.
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Kjetil Joergensen 
SRE, Medallia Inc
Phone: +1 (650) 739-6580
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Modification Time of RBD Images

2017-03-24 Thread Kjetil Jørgensen
Hi,

YMMV, riddled with assumptions (image is image-format=2, has one ext4
filesystem, no partition table, ext4 superblock starts at 0x400, and
probably a whole boatload of other stuff; I don't know when ext4 updates
s_wtime of its superblock, nor if it's actually the superblock last write or
the last write to the filesystem, etc.). The pipeline below reads the first
data object of the image and decodes the 4-byte s_wtime field at offset
1072 (0x400 + 0x30):

rados -p rbd get $(rbd info $SOME_IMAGE_NAME | awk '/block_name_prefix/ { print $2 }').0000000000000000 - | \
  dd if=/dev/stdin of=/dev/stdout skip=1072 bs=1 count=4 status=none | \
  perl -lane 'print scalar localtime unpack "I*", $_;'

Cheers,
KJ

On Fri, Mar 24, 2017 at 12:27 AM, Dongsheng Yang <
dongsheng.y...@easystack.cn> wrote:

> Hi jason,
>
> do you think this is a good feature for rbd?
> maybe we can implement a "rbd stat" command
> to show atime, mtime and ctime of an image.
>
> Yang
>
>
> On 03/23/2017 08:36 PM, Christoph Adomeit wrote:
>
>> Hi,
>>
>> no i did not enable the journalling feature since we do not use mirroring.
>>
>>
>> On Thu, Mar 23, 2017 at 08:10:05PM +0800, Dongsheng Yang wrote:
>>
>>> Did you enable the journaling feature?
>>>
>>> On 03/23/2017 07:44 PM, Christoph Adomeit wrote:
>>>
 Hi Yang,

 I mean "any write" to this image.

 I am sure we have a lot of not-used-anymore rbd images in our pool and
 I am trying to identify them.

 The mtime would be a good hint to show which images might be unused.

 Christoph

 On Thu, Mar 23, 2017 at 07:32:49PM +0800, Dongsheng Yang wrote:

> Hi Christoph,
>
> On 03/23/2017 07:16 PM, Christoph Adomeit wrote:
>
>> Hello List,
>>
>> i am wondering if there is meanwhile an easy method in ceph to find
>> more information about rbd-images.
>>
>> For example I am interested in the modification time of an rbd image.
>>
> Do you mean some metadata changing? such as resize?
>
> Or any write to this image?
>
> Thanx
> Yang
>
>> I found some posts from 2015 that say we have to go over all the
>> objects of an rbd image and find the newest mtime, but this is not a
>> preferred solution for me. It takes too much time and too many system
>> resources.
>>
>> Any Ideas ?
>>
>> Thanks
>>Christoph
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Kjetil Joergensen 
SRE, Medallia Inc
Phone: +1 (650) 739-6580
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] default pools gone. problem?

2017-03-24 Thread mj

Hi,

In the docs on pools, 
http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/ it says:


The default pools are:

*data
*metadata
*rbd

My ceph install has only ONE pool called "ceph-storage", the others are 
gone. (probably deleted?)


Is not having those default pools a problem? Do I need to recreate them, 
or can they safely be deleted?


I'm on hammer, but intending to upgrade to jewel, and trying to identify 
potential issues, therefore this question.


MJ
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] default pools gone. problem?

2017-03-24 Thread Bob R
You can operate without the default pools without issue.

On Fri, Mar 24, 2017 at 1:23 PM, mj  wrote:

> Hi,
>
> On the docs on ppols http://docs.ceph.com/docs/cutt
> lefish/rados/operations/pools/ it says:
>
> The default pools are:
>
> *data
> *metadata
> *rbd
>
> My ceph install has only ONE pool called "ceph-storage", the others are
> gone. (probably deleted?)
>
> Is not having those default pools a problem? Do I need to recreate them,
> or can they safely be deleted?
>
> I'm on hammer, but intending to upgrade to jewel, and trying to identify
> potential issues, therefore this question.
>
> MJ
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] default pools gone. problem?

2017-03-24 Thread mj



On 03/24/2017 10:13 PM, Bob R wrote:

You can operate without the default pools without issue.


Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephFS mounted on client shows space used -- when there is nothing used on the FS

2017-03-24 Thread Deepak Naidu
I have cephFS cluster. Below is the df from a client node.

The question is: why does the df command show "used space" on the ceph-fuse or 
kernel-client mount when nothing is in use (the filesystem is empty -- no files 
or directories)?

[root@storage ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  1.4G   44G   4% /
devtmpfs                  28G     0   28G   0% /dev
tmpfs                     28G     0   28G   0% /dev/shm
tmpfs                     28G   17M   28G   1% /run
tmpfs                     28G     0   28G   0% /sys/fs/cgroup
/dev/xvda1               497M  168M  329M  34% /boot
/dev/mapper/centos-home   22G   34M   22G   1% /home
tmpfs                    5.5G     0  5.5G   0% /run/user/0
ceph-fuse                4.7T  1.5G  4.7T   1% /mnt/cephfs
[root@storage ~]#


[root@storage ~]# ls -larth /mnt/cephfs/
total 512
drwxr-xr-x. 3 root root 19 Mar 17 12:36 ..
drwxr-xr-x  1 root root  0 Mar 23 22:20 .
[root@storage ~]#


[root@storage ~]# du -shc /mnt/cephfs/
512 /mnt/cephfs/
512 total
[root@storage ~]#

--
Deepak

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
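A hedged note for anyone hitting the same surprise: as far as I understand it,
the statfs numbers reported through ceph-fuse or the kernel client come from
cluster/pool statistics (raw space, including replication overhead and data in
other pools) rather than from the files under the mount point, so comparing the
mount's df against the cluster-wide view usually accounts for the "used" figure:

ceph df          # raw cluster usage plus per-pool USED and MAX AVAIL
ceph df detail   # per-pool object counts and additional detail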


Re: [ceph-users] ceph pg dump - last_scrub last_deep_scrub

2017-03-24 Thread Brad Hubbard
On Fri, Mar 24, 2017 at 10:12 PM, Laszlo Budai  wrote:
> Hello,
>
> Can someone tell me the meaning of the last_scrub and last_deep_scrub values
> in the ceph pg dump output?
> I could not find it with Google nor in the documentation.
>
> for example I can see here the last_scrub being 61092'4385, and the
> last_deep_scrub=61086'4379

I have no time so will be brief. The value is epoch'version IIUC, so in
61086'4379 the epoch is 61086 and the PG version is 4379 (note that this is
from memory, without looking).

>
> pg_stat: 3.617   objects: 6   mip: 0   degr: 0   misp: 0   unf: 0
> bytes: 22151168   log: 3014   disklog: 3014
> state: active+clean   state_stamp: 2017-03-24 10:17:53.904393
> v: 61092'4385   reported: 61390:21953
> up: [9,58,35]   up_primary: 9   acting: [9,58,35]   acting_primary: 9
> last_scrub: 61092'4385   scrub_stamp: 2017-03-24 09:51:12.798365
> last_deep_scrub: 61086'4379   deep_scrub_stamp: 2017-03-17 21:23:01.695528


>
> Thank you,
> Laszlo
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Cheers,
Brad
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
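For scripting, the JSON form of the dump avoids counting columns in the
plain-text output. A small sketch, assuming the Jewel-era field names
("pg_stats", "pgid", "last_scrub", ...) -- adjust the keys for your release:

ceph pg dump --format json 2>/dev/null | python -c '
import json, sys
dump = json.load(sys.stdin)
# pre-Luminous releases keep the per-PG list under "pg_stats" (an assumption
# here; check your own output if the key differs)
for pg in dump["pg_stats"]:
    print("%-8s scrub=%s (%s)  deep-scrub=%s (%s)" % (
        pg["pgid"], pg["last_scrub"], pg["last_scrub_stamp"],
        pg["last_deep_scrub"], pg["last_deep_scrub_stamp"]))
'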


Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-24 Thread Alex Gorbachev
On Fri, Mar 24, 2017 at 10:04 AM Alejandro Comisario 
wrote:

> thanks for the recommendations so far.
> any one with more experiences and thoughts?
>
> best
>

On the network side, 25, 40, 56 and maybe soon 100 Gbps can now be fairly
affordable, and simplify the architecture for the high throughput nodes.


> On Mar 23, 2017 16:36, "Maxime Guyot"  wrote:
>
> Hi Alexandro,
>
> As I understand it, you are planning NVMe journals for the SATA HDDs and
> collocated journals for the SATA SSDs?
>
> Option 1:
> - 24x SATA SSDs per server will run into a bottleneck at the storage
> bus/controller. Also consider the network capacity: 24x SSDs will deliver
> more performance than 24x HDDs with journals, yet you have the same network
> capacity on both types of nodes.
> - This option is a little easier to implement: just move the nodes into
> different CRUSHmap roots.
> - Failure of a server (assuming size = 3) will impact all PGs.
> Option 2:
> - You may see a noisy-neighbour effect between the HDDs and SSDs if the HDDs
> are able to saturate your NICs or storage controller, so be mindful of this
> in the hardware design.
> - To configure the CRUSHmap for this you need to split each server in two. I
> usually use “server1-hdd” and “server1-ssd” host buckets and map each OSD
> into the right one; it is a little extra work, but you can easily set up a
> “crush location hook” script for it (see the example at
> http://www.root314.com/2017/01/15/Ceph-storage-tiers/ and the command sketch
> appended below).
> - In case of a server failure, recovery will be faster than with option 1 and
> will impact fewer PGs.
>
> Some general notes:
> - SSD pools perform better with higher-frequency CPUs.
> - The 1GB of RAM per TB guideline is a little outdated; the current consensus
> for HDD OSDs is around 2GB per OSD (see
> https://www.redhat.com/cms/managed-files/st-rhcs-config-guide-technology-detail-inc0387897-201604-en.pdf).
> - Network-wise, if the SSD OSDs are rated for 500MB/s and use collocated
> journals, you could generate up to 250MB/s of traffic per SSD OSD (24Gbps for
> 12x or 48Gbps for 24x); therefore I would consider doing 4x10G and
> consolidating both the client and cluster networks on that.
>
> Cheers,
> Maxime
>
> On 23/03/17 18:55, "ceph-users on behalf of Alejandro Comisario" <
> ceph-users-boun...@lists.ceph.com on behalf of alejan...@nubeliu.com>
> wrote:
>
> Hi everyone!
> I have to install a ceph cluster (6 nodes) with two "flavors" of
> disks, 3 servers with SSD and 3 servers with SATA.
>
> I will purchase 24-disk servers (the SATA ones with NVMe SSDs for the
> SATA journals).
> Processors will be 2 x E5-2620v4 with HT, and ram will be 20GB for the
> OS, and 1.3GB of ram per storage TB.
>
> The servers will have 2 x 10Gb bonding for public network and 2 x 10Gb
> for cluster network.
> My doubt resides in, and I want to ask the community about, the experiences,
> pains and gains of choosing between:
>
> Option 1
> 3 x servers just for SSD
> 3 x servers just for SATA
>
> Option 2
> 6 x servers with 12 SSD and 12 SATA each
>
> Regarding crushmap configuration and rules, everything is clear on how to
> make sure that the two pools (poolSSD and poolSATA) use the right disks.
>
> But, what about performance, maintenance, architecture scalability,
> etc ?
>
> thank you very much !
>
> --
> Alejandrito
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
-- 
--
Alex Gorbachev
Storcium
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
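A rough, hand-run sketch of the split-host CRUSH layout Maxime describes above.
Bucket names, OSD ids and weights are placeholders, and a "crush location hook"
script would normally apply the same placement automatically when each OSD
starts:

# one root per media type, and one "virtual host" per server and media type
ceph osd crush add-bucket ssd-root root
ceph osd crush add-bucket hdd-root root
ceph osd crush add-bucket server1-ssd host
ceph osd crush add-bucket server1-hdd host
ceph osd crush move server1-ssd root=ssd-root
ceph osd crush move server1-hdd root=hdd-root
# place each OSD under the matching host bucket (ids/weights are placeholders)
ceph osd crush set osd.12 1.0 root=ssd-root host=server1-ssd
ceph osd crush set osd.13 1.0 root=hdd-root host=server1-hdd
# one rule per root, then point the pools at them (pre-Luminous releases use
# the crush_ruleset pool setting; look up the rule ids with
# "ceph osd crush rule dump")
ceph osd crush rule create-simple ssd-rule ssd-root host
ceph osd crush rule create-simple hdd-rule hdd-root host
ceph osd pool set poolSSD crush_ruleset 1
ceph osd pool set poolSATA crush_ruleset 2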


Re: [ceph-users] Preconditioning an RBD image

2017-03-24 Thread Alex Gorbachev
On Wed, Mar 22, 2017 at 6:05 AM Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:

> Does iostat (e.g. iostat -xmy 1 /dev/sd[a-z]) show high util% or await
> during these problems?
>

It does, from watching atop.


>
> Ceph filestore requires lots of metadata writing (directory splitting for
> example), xattrs, leveldb, etc. which are small sync writes that HDDs are
> bad at (100-300 iops), and SSDs are good at (cheapo would be 6k iops, and
> not so crazy DC/NVMe would be 20-200k iops and more). So in theory, these
> things are mitigated by using an SSD, like bcache on your osd device. You
> could also try something like that, at least to test.
>

That explains our previous performance gains with Areca HBAs in NVRAM /
supercap-backed write-cache mode. We went to an SSD journal design to be more
resilient to sustained write workloads, but this created more latency for
small/random write IO.


>
> I have tested with bcache in writeback mode and found hugely obvious
> differences seen by iostat, for example here's my before and after (heavier
> load due to converting week 49-50 or so, and the highest spikes being the
> scrub infinite loop bug in 10.2.3):
>
>
> http://www.brockmann-consult.de/ganglia/graph.php?cs=10%2F25%2F2016+10%3A27&ce=03%2F09%2F2017+17%3A26&z=xlarge&hreg[]=ceph.*&mreg[]=sd[c-z]_await&glegend=show&aggregate=1&x=100
>
> But when you share a cache device, you get a single point of failure (and
> bcache, like all software, can be assumed to have bugs too). And I
> recommend vanilla kernel 4.9 or later which has many bcache fixes, or
> Ubuntu's 4.4 kernel which has the specific fixes I checked for.
>

Yep, I am scared of that and would therefore prefer either a vendor-based
solid-state design (e.g. Areca), all-SSD OSDs whenever they become affordable,
or starting to experiment with cache pools. It does not seem like SSDs are
getting any cheaper; just new technologies like 3DXP are showing up.


>
> On 03/21/17 23:22, Alex Gorbachev wrote:
>
> I wanted to share a recent experience in which a few RBD volumes,
> formatted as XFS and exported via the Ubuntu NFS kernel server, performed
> poorly and even generated "out of space" warnings on a nearly empty
> filesystem. I tried a variety of hacks and fixes to no effect, until
> things started magically working just after some dd write testing.
>
> The only explanation I can come up with is that preconditioning, or
> thickening, the images with this benchmarking is what caused the
> improvement.
>
> Ceph is Hammer 0.94.7 running on Ubuntu 14.04, kernel 4.10 on OSD nodes
> and 4.4 on NFS nodes.
>
> Regards,
> Alex
> Storcium
> --
> --
> Alex Gorbachev
> Storcium
>
>
> ___
> ceph-users mailing list
> ceph-us...@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
>
> 
> Peter Maloney
> Brockmann Consult
> Max-Planck-Str. 2
> 21502 Geesthacht
> Germany
> Tel: +49 4152 889 300
> Fax: +49 4152 889 333
> E-mail: peter.malo...@brockmann-consult.de
> Internet: http://www.brockmann-consult.de
> 
>
> --
--
Alex Gorbachev
Storcium
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
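For completeness, a minimal sketch of the kind of "preconditioning" write pass
described above: fill the mounted, still-empty XFS filesystem once and remove
the file, so the RADOS objects backing the image get allocated. The path is a
placeholder, and whether thick-provisioning is really what helped here is, as
noted, only a best guess:

# on the NFS head node, against the already-mounted, still-empty filesystem
dd if=/dev/zero of=/srv/nfs/volume1/precondition.tmp bs=4M oflag=direct
# dd stops with "No space left on device" once the filesystem is full; expected
rm /srv/nfs/volume1/precondition.tmp

Writing through the image with rbd bench-write or fio before putting it into
service should have a similar effect.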