I get confused there, because the documentation
(http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/) says:
"If there is more, provisioning a DB device makes more sense. The BlueStore
journal will always be placed on the fastest device available, so using a DB
device will provi
Hi,
Yes, I'm using bluestore.
There is no I/O on the ceph cluster; it's totally idle.
All the CPU usage is from OSDs that don't have any workload on them.
Thanks!
On Thu, Nov 9, 2017 at 9:37 AM, Vy Nguyen Tan
wrote:
> Hello,
>
> I think it is not normal behavior in Luminous. I'm testing 3 nodes, each n
2017-11-08 22:05 GMT+01:00 Marc Roos :
>
> Can anyone advise on an erasure pool config to store
>
> - files between 500MB and 8GB, total 8TB
> - just for archiving, not much reading (few files a week)
> - hdd pool
> - now 3 node cluster (4th coming)
> - would like to save on storage space
>
> I was
Hi Greg,
Thanks! This seems to have worked for at least 1 of 2 inconsistent pgs:
The inconsistency disappeared after a new scrub. Still waiting for the
result of the second pg. I tried to force deep-scrub with `ceph pg
deep-scrub ` yesterday, but today the last deep scrub is still from
a week
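In case it helps, one way I know of to check when a deep scrub last actually ran (the pg id is a placeholder):

ceph pg <pgid> query | grep deep_scrub_stamp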
On Fri, Nov 03, 2017 at 12:09:03PM +0100, Alwin Antreich wrote:
> Hi,
>
> I am confused by the %USED calculation in the output of 'ceph df' in luminous.
> In the example below the pools show 2.92% "%USED", but my calculation,
> taken from the source code, gives me 8.28%. On a hammer cluster m
> On Nov 9, 2017, at 5:25 AM, Sam Huracan wrote:
>
> root@radosgw system]# ceph --admin-daemon
> /var/run/ceph/ceph-client.rgw.radosgw.asok config show | grep log_file
> "log_file": "/var/log/ceph/ceph-client.rgw.radosgw.log”,
The .asok filename reflects the name that should be used in your config.
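In other words, a section named after that client (a sketch; the path just mirrors the log_file shown above):

[client.rgw.radosgw]
log file = /var/log/ceph/ceph-client.rgw.radosgw.log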
On 2017-11-08T21:41:41, Sage Weil wrote:
> Who is running nfs-ganesha's FSAL to export CephFS? What has your
> experience been?
>
> (We are working on building proper testing and support for this into
> Mimic, but the ganesha FSAL has been around for years.)
We use it currently, and it works
How/where can I see how, e.g., 'profile rbd' is defined?
As in
[client.rbd.client1]
key = xxx==
caps mon = "profile rbd"
caps osd = "profile rbd pool=rbd"
You're correct: if you were going to put the WAL and DB on the same device, you
should just make one partition and allocate the DB to it; the WAL will
automatically be stored with the DB. It only makes sense to specify them
separately if they are going to go on different devices, and that itself
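For example (a sketch only, device names are placeholders; with just --block.db given, the WAL ends up on the same partition as the DB):

ceph-disk prepare --bluestore --block.db /dev/nvme0n1 /dev/sdb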
On Thu, Nov 9, 2017 at 10:12 AM, Marc Roos wrote:
>
> How/where can I see how eg. 'profile rbd' is defined?
>
> As in
> [client.rbd.client1]
> key = xxx==
> caps mon = "profile rbd"
> caps osd = "profile rbd pool=rbd"
The profiles are defined internally and are subject to
Hi! I would like to export my cephfs using Ganesha NFS as an NFSv3 or NFSv4 export,
but I am a little lost while doing it. I managed to make it work with NFSv4, but
I can't make it work with NFSv3, as the server refuses the connection.
Has anyone managed to do it?
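For what it's worth, a minimal export block that, as far as I understand ganesha, should allow both protocols (paths and IDs are placeholders; NFSv3 also needs rpcbind running on the server so the mount protocol can register):

EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Protocols = 3, 4;
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
    }
}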
Hi Sage,
As Lars mentioned, at SUSE, we use ganesha 2.5.2/luminous. We did a preliminary
performance comparison of cephfs client
and nfs-ganesha client. I have attached the results. The results are aggregate
bandwidth over 10 clients.
1. Test Setup:
We use fio to read/write to a single 5GB file
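For reference, a fio invocation of roughly that shape would be the following (block size, job name and mount path here are assumptions, not the actual parameters used):

fio --name=seqread --filename=/mnt/cephfs/testfile --rw=read --bs=4M --size=5G --direct=1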
Hi, Nick
Thank you for the answer!
It's still unclear to me: do those options have no effect at all?
Or is the disk thread used for some other operations?
09.11.2017, 04:18, "Nick Fisk" :
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>
What would be the correct way to convert the rbd-mapped images in the XML files
to librbd?
I had this:
And for librbd this:
But this will give me a qemu
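For comparison, a typical librbd disk definition in libvirt looks roughly like this (pool/image name, monitor address, auth user and secret UUID are placeholders, not taken from your setup):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='10.0.0.1' port='6789'/>
  </source>
  <auth username='rbd.client1'>
    <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>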
-sorry, wrong address
Hi Richard,
I have seen a few lectures about bluestore, and they made it abundantly
clear that bluestore is superior to filestore in the sense that it
writes data to the disk only once (this is how they could achieve a 2x-3x
speed increase).
So this is true if there
One small point: It's a bit easier to observe distinct WAL and DB
behavior when they are on separate partitions. I often do this for
benchmarking and testing, though I don't know that it would be enough of
a benefit to do it in production.
Mark
On 11/09/2017 04:16 AM, Richard Hesketh wrote:
The email was not delivered to ceph-de...@vger.kernel.org, so I am re-sending it.
A few more things regarding the hardware and clients used in our benchmarking
setup:
- The cephfs benchmark were done using kernel cephfs client.
- NFS-Ganesha was mounted using nfs version 4.
- Single nfs-ganesha serv
They are currently defined to the following (translated to cap syntax):
mon: 'allow service mon r, allow service osd r, allow service pg r,
allow command "osd blacklist" with blacklistop=add addr regex
"^[^/]+/[0-9]+$"'
osd: 'allow class-read object_prefix rbd_children, allow class-read
object_pre
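So a client created with, for example:

ceph auth get-or-create client.rbd.client1 mon 'profile rbd' osd 'profile rbd pool=rbd'

ends up with roughly those caps; `ceph auth get client.rbd.client1` will still only show the profile names, since the expansion happens internally.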
Hi,
Can someone please tell me what the correct procedure is to upgrade a CEPH
journal?
I'm running ceph: 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
For a journal I have a 400GB Intel SSD drive and it seems CEPH created a
1GB journal:
Disk /dev/sdf: 372.6 GiB, 400088457216 bytes, 781422768
Hi all,
I’ve experienced a strange issue with my cluster.
The cluster is composed of 10 HDD nodes with 20 HDDs + 4 journal devices each, plus 4
SSD nodes with 5 SSDs each.
All the nodes are behind 3 monitors and 2 different crush maps.
The whole cluster is on 10.2.7
About 20 days ago I started to notic
Hi Rudi,
On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> Hi,
>
> Can someone please tell me what the correct procedure is to upgrade a CEPH
> journal?
>
> I'm running ceph: 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
>
> For a journal I have a 400GB Intel SSD drive and it seems C
Hi Alwin,
Thanx for the help.
I see now that I used the wrong wording in my email. I want to resize the
journal, not upgrade.
So, following your commands, I still sit with a 1GB journal:
root@virt1:~# ceph-disk prepare --bluestore \
> --block.db /dev/sde --block.wal /dev/sde1 /dev/sda
Setting
Update: I noticed that there was a pg that remained scrubbing from the first
day I found the issue until I rebooted the node and the problem disappeared.
Could this cause the behaviour I described before?
> On 09 Nov 2017, at 15:55, Matteo Dacrema wrote:
>
> Hi all,
>
> I’ve e
Rudi,
You can set the size of block.db and block.wal partitions in the ceph.conf
configuration file using:
bluestore_block_db_size = 16106127360 (which is 15GB, just calculate the
correct number for your needs)
bluestore_block_wal_size = 16106127360
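In ceph.conf that would look like this (either the [global] or the [osd] section should work for OSD options; as far as I know the sizes only apply to OSDs created after the setting is in place, existing OSDs are not resized):

[global]
bluestore_block_db_size = 16106127360
bluestore_block_wal_size = 16106127360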
Kind regards,
Caspar
2017-11-09 17:19 GMT+01
2017-11-09 17:02 GMT+01:00 Alwin Antreich :
> Hi Rudi,
> On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> > Hi,
> >
> > Can someone please tell me what the correct procedure is to upgrade a
> CEPH
> > journal?
> >
> > I'm running ceph: 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
Please bear in mind that unless you've got a very good reason for separating
the WAL/DB into two partitions (i.e. you are testing/debugging and want to
observe their behaviour separately or they're actually going to go on different
devices which have different speeds) you should probably stick t
Hi Caspar,
Is this in the [global] or [osd] section of ceph.conf?
I am new to ceph so this is all still very vague to me.
What is the difference between the WAL and the DB?
And, lastly, if I want to set up the OSD in Proxmox beforehand and add the
journal to it, can I make these changes afterward
I installed rados-objclass-dev and objclass.h was installed successfully.
However, I failed to run the object class following the steps below:
1. copy https://github.com/ceph/ceph/blob/master/src/cls/sdk/cls_sdk.cc
into my machine. (cls_test.cpp)
2. make some changes to cls_test.cpp: 1) rename all
On Thu, Nov 9, 2017 at 10:05 AM, Zheyuan Chen wrote:
> I installed rados-objclass-dev and objclass.h was installed successfully.
> However, I failed to run the objclass following the steps as below:
>
> 1. copy https://github.com/ceph/ceph/blob/master/src/cls/sdk/cls_sdk.cc into
> my machine. (cls
I changed this line to CLS_LOG(0, "loading cls_test");
https://github.com/ceph/ceph/blob/master/src/cls/sdk/cls_sdk.cc#L120
I don't think the test object class is loaded correctly, since I don't see
the loading message in the log.
However, I can see "loading cls_sdk" in the osd log.
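As far as I know (please verify for your exact version), the OSD only loads classes whose shared library is named libcls_<name>.so and sits in the directory given by 'osd class dir' (commonly /usr/lib/rados-classes or /usr/lib64/rados-classes), and on luminous the class name also has to be allowed by 'osd class load list'. A rough sketch, with option names and paths from memory:

[osd]
osd class load list = *
osd class default list = *

cp libcls_test.so /usr/lib64/rados-classes/   # on every OSD host; path is an assumption
systemctl restart ceph-osd.target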
On Thu,
I would like to store objects with
rados -p ec32 put test2G.img test2G.img
error putting ec32/test2G.img: (27) File too large
Changing the pool application from custom to rgw did not help
Marc,
If you're running luminous you may need to increase osd_max_object_size.
This snippet is from the Luminous change log.
"The default maximum size for a single RADOS object has been reduced
from 100GB to 128MB. The 100GB limit was completely impractical in
practice while the 128MB limit
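If you really do need larger single objects, the limit should be adjustable in ceph.conf, e.g. (a sketch; the value is only an example, and see the caveat below):

[osd]
osd max object size = 10737418240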
It should be noted that the general advice is not to use such large
objects, since cluster performance will suffer; see also this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/021051.html
libradosstriper might be an option which will automatically break the
object into
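If your rados CLI was built with striper support, I believe it can also be used directly from the command line, something like:

rados --striper -p ec32 put test2G.img test2G.img

which stores the data as a set of smaller striped objects instead of one huge object.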
In my cluster, rados bench shows about 1GB/s bandwidth. I've done some
tuning:
[osd]
osd op threads = 8
osd disk threads = 4
osd recovery max active = 7
I was hoping to get much better bandwidth. My network can handle it, and
my disks are pretty fast as well. Are there any major tunables I c
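For what it's worth, the numbers also depend heavily on the bench parameters themselves; varying object size and concurrency (values below are just examples) is worth trying before touching OSD options:

rados bench -p testpool 60 write -b 4M -t 32 --no-cleanup
rados bench -p testpool 60 seq -t 32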
Hi everyone,
As a beginner with Ceph, I'm looking for a way to do 3-way replication between 2
datacenters, as mentioned in the ceph docs (but not described).
My goal is to keep access to the data (at least read-only access) even when the
link between the 2 datacenters is cut, and to make sure at least one cop
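Not a full answer, but the usual starting point is a CRUSH rule that draws replicas from both datacenters explicitly. A rough sketch, assuming datacenter buckets named dc1 and dc2 (syntax varies slightly between releases, and min_size still decides whether the surviving side stays writable):

rule replicated_2dc {
        id 1
        type replicated
        min_size 3
        max_size 3
        step take dc1
        step chooseleaf firstn 2 type host
        step emit
        step take dc2
        step chooseleaf firstn 1 type host
        step emit
}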
I added an erasure k=3,m=2 coded pool on a 3 node test cluster and am
getting these errors.
pg 48.0 is stuck undersized for 23867.00, current state
active+undersized+degraded, last acting [9,13,2147483647,7,2147483647]
pg 48.1 is stuck undersized for 27479.944212, current state
ac
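For context, 2147483647 is CRUSH's placeholder for "no OSD found": with k=3,m=2 each PG needs 5 shards on 5 distinct hosts (the default failure domain), which a 3-node cluster cannot provide. Until more nodes arrive, a profile with an OSD-level failure domain works, at the cost of losing several shards if one host dies; roughly (profile and pool names are just examples, and older releases call the option ruleset-failure-domain):

ceph osd erasure-code-profile set ec32osd k=3 m=2 crush-failure-domain=osd
ceph osd pool create ecpool 64 64 erasure ec32osd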
Yes, I actually changed it back to the default after reading a bit
about it (https://github.com/ceph/ceph/pull/15520). I wanted to store
5GB and 12GB files, which makes recovery not so nice. I thought there was
a setting to split them up automatically, like with rbd pools.
-Original M
Do you know of a rados client that uses this? Maybe a simple 'mount' so
I can cp the files onto it?
-Original Message-
From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
Sent: Thursday, 9 November 2017 22:01
To: Kevin Hrpcek
Cc: Marc Roos; ceph-users
Subject: Re: [ceph-use