Hi Jason,
do you think this would be a good feature for rbd?
Maybe we could implement an "rbd stat" command
to show the atime, mtime and ctime of an image.
Yang
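(For illustration only, a minimal sketch of how this would sit next to today's tooling; the pool/image names are placeholders and the "rbd stat" output below is hypothetical, since the current rbd info output has no timestamp fields:)

  # today: size, features, prefix - but no atime/mtime/ctime
  rbd info rbd/test-image

  # hypothetical "rbd stat" output the proposal is asking for:
  #   rbd stat rbd/test-image
  #   atime: 2017-03-24 10:15:02
  #   mtime: 2017-03-24 09:58:41
  #   ctime: 2017-03-01 08:12:30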
On 03/23/2017 08:36 PM, Christoph Adomeit wrote:
Hi,
No, I did not enable the journaling feature since we do not use mirroring.
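(For reference, a quick way to check or change that; the pool/image name is a placeholder and this assumes a Jewel-or-later rbd CLI:)

  rbd info rbd/myimage | grep features        # journaling listed only if enabled
  rbd feature enable rbd/myimage journaling   # needs exclusive-lock as well
  rbd feature disable rbd/myimage journaling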
On Thu, Mar
Hi Tom and thanks a lot for the feedback.
Indeed, my root filesystem is on an LVM volume, and I am currently
running CentOS 7.3.1611 with kernel 3.10.0-514.10.2.el7.x86_64; the
Ceph version is 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f).
The 60-ceph-by-parttypeuuid.rules on the system i
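(As a side note, a quick way to check whether the partition type GUIDs that rule keys on are actually present; the device name is just an example:)

  # low-level probe; 60-ceph-by-parttypeuuid.rules matches on ID_PART_ENTRY_TYPE
  blkid -p -o udev /dev/sdb1 | grep ID_PART_ENTRY
  # see what udev would actually do with the partition
  udevadm test /sys/class/block/sdb1 2>&1 | grep by-parttypeuuid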
On 03/23/2017 06:10 PM, nokia ceph wrote:
Hello Piotr,
I didn't understand; could you please elaborate on this procedure as
mentioned in the last update? It would be really helpful if you could share any
useful link/doc to understand what you actually meant. Yes, correct, normally
we do this proced
On Fri, Mar 24, 2017 at 4:06 PM, Mika c wrote:
> Hi all,
> Same question with Ceph 10.2.3 and 11.2.0.
> Is this command only for client.admin?
>
> client.symphony
>key: AQD0tdRYjhABEhAAaG49VhVXBTw0MxltAiuvgg==
>caps: [mon] allow *
>caps: [osd] allow *
>
> Traceb
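(For what it's worth, a sketch of how such a keyring is usually generated and where the client looks for it by default; the client name just mirrors the one quoted above:)

  # create (or fetch) the key with the desired caps
  ceph auth get-or-create client.symphony mon 'allow *' osd 'allow *' \
      -o /etc/ceph/ceph.client.symphony.keyring
  # the default keyring search path expects exactly that file name;
  # otherwise point at it explicitly with -k or a ceph.conf entry
  ceph -n client.symphony -k /etc/ceph/ceph.client.symphony.keyring status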
Hi Deepak.
Thanks for your reply.
I tried to use gperf to profile ceph-osd in basic mode (without RDMA);
you can see the result at the following link.
http://imgur.com/a/SJgEL
In the gperf result, we can see that the whole CPU usage can be divided into three
significant parts (Network: 36%, FileStore
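(For anyone wanting to reproduce this, a rough sketch of running ceph-osd under the gperftools CPU profiler; the library path, OSD id and output paths are assumptions, and pprof may be installed as google-pprof:)

  # run one OSD in the foreground with the gperftools CPU profiler attached
  LD_PRELOAD=/usr/lib64/libprofiler.so.0 CPUPROFILE=/tmp/osd.0.prof \
      ceph-osd -f -i 0
  # after stopping it, render the profile
  pprof --text /usr/bin/ceph-osd /tmp/osd.0.prof | head -30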
Brad, cool, now we are on the same track :)
So whatever change we make under src/* is mapped to the
respective RPM, correct?
For example:
src/osd/*    -> ceph-osd
src/common/* -> ceph-common
src/mon/*    -> ceph-mon
src/mgr/*    -> ceph-mgr
Since we are using BlueStore with Kraken, I thought to disable t
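(An easy way to double-check that source-to-package mapping on an installed node, for what it's worth:)

  rpm -qf /usr/bin/ceph-osd       # -> ceph-osd package
  rpm -qf /usr/bin/ceph-mon       # -> ceph-mon package
  rpm -ql ceph-common | grep bin  # binaries shipped by ceph-common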
Hi all,
Something that I am curious about:
Suppose I have a three-server cluster, all with identical OSD
configuration, and also a replication factor of three.
That would mean (I guess) that all 3 servers have a copy of everything
in the ceph pool.
My question: given that every machine ha
Piotr, thanks for the info.
Yes, this method is time-saving, but we have not started testing with the
build-from-source method. We will consider this for our next part of testing :)
On Fri, Mar 24, 2017 at 1:17 PM, Piotr Dałek
wrote:
> On 03/23/2017 06:10 PM, nokia ceph wrote:
>
>> Hello Piotr,
>>
>>
Hi,
no, Ceph reads from the primary PG, so your reads are approx. 33% local.
And why? Better distribution of read access.
Udo
Am 2017-03-24 09:49, schrieb mj:
Hi all,
Something that I am curious about:
Suppose I have a three-server cluster, all with identical OSD
configuration, and also a replic
On 03/24/2017 10:33 AM, ulem...@polarzone.de wrote:
And why? Better distribution of read access.
Udo
Ah yes.
On the other hand... In the case of specific often-requested data in
your pool, the primary PG will have to handle all those requests, and in
that case using a local copy would have
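(A quick way to see which copy serves reads in practice; the pool and object names are placeholders:)

  # shows the PG an object maps to and its acting set; the OSD marked "p"
  # is the primary, which is the one that serves reads
  ceph osd map rbd some-object-name
  # e.g. "... acting ([2,0,5], p2)" means osd.2 is the primary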
Hi Brad,
Thanks for your reply. The environment has already created the keyring file
and put it in /etc/ceph, but it is not working.
I have to write the config into ceph.conf like below.
---ceph.conf start---
[client.symphony]
log_file = /var/log/ceph/rest-api.log
keyring = /etc/ceph/ceph.client.symp
The content of ceph.conf?
On Fri, Mar 24, 2017 at 4:32 AM, Hung-Wei Chiu (邱宏瑋)
wrote:
> Hi Deepak.
>
> Thanks for your reply.
>
> I tried to use gperf to profile ceph-osd in basic mode (without RDMA);
> you can see the result at the following link.
> http://imgur.com/a/SJgEL
>
> In the gper
Hi.
Basic
[global]
fsid = 0612cc7e-6239-456c-978b-b4df781fe831
mon initial members = ceph-1,ceph-2,ceph-3
mon host = 10.0.0.15,10.0.0.16,10.0.0.17
osd pool default size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024
RDMA
[global]
fsid = 0612cc7e-6239-456c-978b-b4df781fe831
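(For context, the RDMA variant typically differs from the basic one only in the messenger settings, roughly along these lines; the option names varied between releases and the device name is an example, so treat this as an assumption:)

  # in ceph.conf [global]:
  #   ms_type = async+rdma
  #   ms_async_rdma_device_name = mlx5_0
  # verify what a running daemon actually picked up:
  ceph daemon osd.0 config show | grep -E 'ms_type|ms_async_rdma'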
Ok, thanks for confirming.
On Thu, Mar 23, 2017 at 7:32 PM, Gregory Farnum wrote:
> Nope. This is a theoretical possibility but would take a lot of code
> change that nobody has embarked upon yet.
> -Greg
> On Wed, Mar 22, 2017 at 2:16 PM Sergio A. de Carvalho Jr. <
> scarvalh...@gmail.com> wrot
Oh, you can refer to performance-related threads on the ceph/ceph-devel
mailing list to get an SSD-optimized ceph.conf. The default conf lacks good
support for SSDs.
On Fri, Mar 24, 2017 at 6:33 AM, Hung-Wei Chiu (邱宏瑋)
wrote:
> Hi.
>
> Basic
> [global]
>
> fsid = 0612cc7e-6239-456c-978b-b4df781fe831
> mon
Hi
Thanks for your help!
I will try it and let you know the results after I have finished.
Thanks!!
On Fri, Mar 24, 2017 at 6:44 PM, Haomai Wang wrote:
> Oh, you can refer to performance-related threads on the ceph/ceph-devel
> mailing list to get an SSD-optimized ceph.conf. The default conf lacks good
> support for SSDs.
>
Hello, apologies for my (silly) questions. I did try to find some docs on
rbd-mirror but was unable to, apart from a number of pages explaining
how to install it.
My environment is CentOS 7 and Ceph 10.2.5.
Can anyone help me understand a few minor things:
- is there a cleaner way to configure
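(In case it helps, a rough sketch of the usual Jewel-era setup; pool, client and cluster names are placeholders, not a definitive recipe:)

  # on both clusters: enable mirroring on the pool (per-pool or per-image mode)
  rbd mirror pool enable mypool pool
  # on each cluster: register the other one as a peer
  rbd mirror pool peer add mypool client.mirror@remote-cluster
  # run the rbd-mirror daemon (package rbd-mirror) on the cluster that
  # should receive the replicas, then check replication state:
  rbd mirror pool status mypool --verbose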
On Fri, Mar 24, 2017 at 8:20 PM, Mika c wrote:
> Hi Brad,
> Thanks for your reply. The environment has already created the keyring file
> and put it in /etc/ceph, but it is not working.
What was it called?
> I have to write the config into ceph.conf like below.
>
> ---ceph.conf start---
> [client.symp
Hello,
Can someone tell me the meaning of the last_scrub and last_deep_scrub values
in the ceph pg dump output?
I could not find it with Google nor in the documentation.
For example, I can see here last_scrub being 61092'4385 and
last_deep_scrub=61086'4379
pg_stat objects mip
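(One way to inspect these fields per PG while waiting for an answer; the pgid below is a placeholder:)

  # the info section repeats last_scrub / last_deep_scrub (epoch'version form)
  # alongside last_scrub_stamp / last_deep_scrub_stamp wall-clock times
  ceph pg 0.1f query | grep -E 'last_(deep_)?scrub'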
Thanks for the recommendations so far.
Anyone with more experience and thoughts?
best
On Mar 23, 2017 16:36, "Maxime Guyot" wrote:
> Hi Alexandro,
>
> As I understand it, you are planning NVMe journals for the SATA HDDs and
> collocated journals for the SATA SSDs?
>
> Option 1:
> - 24x SATA SSDs per serv
Hello,
In the last few days I have been trying to figure out why my OSDs need a huge
amount of RAM (1.2 - 4 GB). With this, my system memory is at its limit. At the
beginning I thought it was because of the huge amount of backfilling (some
disks died). But now, for a few days, everything has been fine, yet the memory
stays at its level. Res
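(A few things often worth checking in this situation; the OSD id is a placeholder and dump_mempools only exists on newer releases:)

  # tcmalloc can hold on to freed memory; see what it has cached and give it back
  ceph tell osd.3 heap stats
  ceph tell osd.3 heap release
  # newer releases can break memory usage down by internal pool
  ceph daemon osd.3 dump_mempools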
Hi,
Depending on how you plan to use the omap, you might also want to avoid a
large number of key/value pairs. CephFS got its directory fragment
size capped due to large omaps being painful to deal with (see:
http://tracker.ceph.com/issues/16164 and
http://tracker.ceph.com/issues/16177).
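(For rough sizing experiments, the rados CLI can read and write omap entries directly; pool, object and key names below are placeholders:)

  rados -p testpool setomapval myobject mykey myvalue
  rados -p testpool listomapkeys myobject | wc -l   # number of keys on the object
  rados -p testpool listomapvals myobject           # keys with their values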
Hi,
YMMV, riddled with assumptions (image is image-format=2, has one ext4
filesystem, no partition table, ext4 superblock starts at 0x400 and
probably a whole boatload of other stuff, I don't know when ext4
updates s_wtime
of its superblock, nor whether it's actually the superblock's last write or last
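(A hedged sketch of the same idea without parsing the superblock by hand, assuming the image can be mapped read-only on some client; pool/image and the /dev/rbd0 device name are assumptions:)

  # map the image read-only so nothing is touched, then let dumpe2fs decode
  # the superblock fields (including the last write / mount times)
  rbd map mypool/myimage --read-only
  dumpe2fs -h /dev/rbd0 | grep -iE 'last (write|mount) time'
  rbd unmap /dev/rbd0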
Hi,
On the docs on pools
http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/ it says:
The default pools are:
*data
*metadata
*rbd
My Ceph install has only ONE pool called "ceph-storage"; the others are
gone (probably deleted?).
Is not having those default pools a prob
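(For completeness, listing what actually exists is just:)

  ceph osd lspools   # or: rados lspools
  ceph df            # also shows per-pool usage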
You can operate without the default pools without issue.
On Fri, Mar 24, 2017 at 1:23 PM, mj wrote:
> Hi,
>
> On the docs on pools http://docs.ceph.com/docs/cuttlefish/rados/operations/pools/
> it says:
>
> The default pools are:
>
> *data
> *metadata
> *rbd
>
> My ceph install has
On 03/24/2017 10:13 PM, Bob R wrote:
You can operate without the default pools without issue.
Thanks!
I have a CephFS cluster. Below is the df output from a client node.
The question is: why does the df command, when the filesystem is mounted using
ceph-fuse or the kernel client, show "used space" when there is nothing used
(empty -- no files or directories)?
[root@storage ~]# df -h
Filesystem
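(One thing worth comparing: the fuse and kernel clients report cluster-wide statfs figures, so the raw usage shown by ceph df, which includes replication overhead and other pools, likely explains the non-zero number. A sketch:)

  # cluster-wide raw usage, including replication overhead and other pools
  ceph df
  # per-pool detail for the CephFS data/metadata pools
  ceph df detail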
On Fri, Mar 24, 2017 at 10:12 PM, Laszlo Budai wrote:
> Hello,
>
> Can someone tell me the meaning of the last_scrub and last_deep_scrub values
> in the ceph pg dump output?
> I could not find it with Google nor in the documentation.
>
> For example, I can see here last_scrub being 61092
On Fri, Mar 24, 2017 at 10:04 AM Alejandro Comisario
wrote:
> Thanks for the recommendations so far.
> Anyone with more experience and thoughts?
>
> best
>
On the network side, 25, 40, 56 and maybe soon 100 Gbps can now be fairly
affordable, and simplify the architecture for the high throughpu
On Wed, Mar 22, 2017 at 6:05 AM Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:
> Does iostat (eg. iostat -xmy 1 /dev/sd[a-z]) show high util% or await
> during these problems?
>
It does, from watching atop.
>
> Ceph filestore requires lots of metadata writing (directory splitting f
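(If it does turn out to be directory splitting, the thresholds involved can be read back from a running OSD; the OSD id is a placeholder:)

  ceph daemon osd.0 config get filestore_split_multiple
  ceph daemon osd.0 config get filestore_merge_threshold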