On Wed, Jun 27, 2018 at 1:21 AM Kamble, Nitin A
wrote:
>
> I tried enabling the RDMA support in Ceph Luminous release following this [1]
> guide.
>
> I used the released Luminous bits, and not the Mellanox branches mentioned in
> the guide.
>
>
>
> I could see some RDMA traffic in the perf count
Hi Richard,
It is an interesting test for me too as I am planning to migrate to Bluestore
storage and was considering repurposing the SSD disks that we currently use for
journals.
I was wondering if you are using Filestore or Bluestore for the OSDs?
Also, when you perform your testing,
Jason Dillaman wrote:
Conceptually, I would assume it should just work if configured
correctly w/ multipath (to properly configure the ALUA settings on the
LUNs). I don't run FreeBSD, but any particular issue you are seeing?
When logged in to both targets, the following message floods the log
On Thu, Jun 28, 2018 at 10:30 AM Yu Haiyang wrote:
>
> Hi Yan,
>
> Thanks for your suggestion.
> No, I didn’t run fio on ceph-fuse. I mounted my Ceph FS in kernel mode.
>
What are the command options of fio?
> Regards,
> Haiyang
>
> > On Jun 27, 2018, at 9:45 PM, Yan, Zheng wrote:
> >
> > On Wed, Jun 27, 20
Here you go. Below are the fio job options and the results.
blocksize=4K
size=500MB
directory=[ceph_fs_mount_directory]
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
group_reporting
numjobs   Ceph FS Erasure Coding (k=2, m=1)   Ceph FS 3 Replica
1 job     577KB/s                             765KB/s
2 job     1.27M
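For reference, the same job as a single fio invocation would look roughly like this (the job name and the rw mode are assumptions, since they are not shown above):
$ fio --name=cephfs-test --rw=randwrite --blocksize=4K --size=500MB \
      --directory=[ceph_fs_mount_directory] --ioengine=libaio --iodepth=64 \
      --direct=1 --runtime=60 --time_based --group_reporting --numjobs=1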
Hi,
I'm trying to setup iSCSI using ceph-ansible (stable-3.1) branch with
one iscsi gateway. The Host is a CentOS 7 host and I use the packages
from the Centos Storage SIG for luminous. Additionally I have installed:
ceph-iscsi-cli Version 2.7 Release 13.gb9e48a7.el7
ceph-iscsi-config Version
To me (in all my untrained FreeBSD knowledge), it just sounds like the
multipath layer isn't configuring the ALUA on the devices. If the multipath
layer doesn't activate one of the paths, I would expect all IO to fail.
Under Linux, the multipath layer has to be configured to enable ALUA for
LIO pat
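For reference, the ceph-iscsi documentation suggests a device entry in the Linux initiator's /etc/multipath.conf roughly like the following (treat it as a sketch and check the docs for your release):
devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}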
You should have "/var/log/ansible-module-igw_config.log" on the target
machine that hopefully includes more information about why the RBD image is
missing from the TCM backend. In the past, I've seen issues w/ image
features and image size mismatch causing the process to abort.
On Thu, Jun 28, 201
Hi Vlad,
Have not thoroughly tested my setup but so far things look good. The only
problem is that I have to manually activate the OSDs using the ceph-deploy
command. Manually mounting the OSD partition doesn't work.
Thanks for replying.
Regards,
Rahul S
On 27 June 2018 at 14:15, Дробышевский, Влад
Hi all,
We started running an EC pool based object store, set up with a 4+2
configuration, and we seem to be getting an almost constant report of
inconsistent PGs during scrub operations. For example:
root@rook-tools:/# ceph pg ls inconsistent
PG_STAT OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACE
Hi Jason,
Am 28.06.2018 um 14:33 schrieb Jason Dillaman:
You should have "/var/log/ansible-module-igw_config.log" on the target
machine that hopefully includes more information about why the RBD image
is missing from the TCM backend.
I have the logfile; however, everything there seems fine. The
Do you have the ansible backtrace from the "ceph-iscsi-gw : igw_lun |
configure luns (create/map rbds and add to lio)]" step? Have you tried
using the stock v4.16 kernel (no need to use the one on shaman)?
On Thu, Jun 28, 2018 at 11:29 AM Bernhard Dick wrote:
> Hi Jason,
>
> Am 28.06.2018 um 14:
On 6/28/18, 12:11 AM, "kefu chai" wrote:
> What is the state of the RDMA code in the Ceph Luminous and later
releases?
In Ceph, RDMA support has been constantly worked on. The xio messenger
support was added 4 years ago, but I don't think it's maintained
anymore. And async
Hey Cephers!
The Ceph Tech Talk of June starts in about 50 minutes, and in this
edition George Mihaiuescu will talk about Ceph used for cancer research
at OIRC.
Please check the URL below for the meeting details:
https://ceph.com/ceph-tech-talks/
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Com
Hi Brad,
This has helped to repair the issue. Many thanks for your help on this!!!
I had so many objects with broken omap checksums that I spent at least a few
hours identifying them and using the commands you've listed to repair them. They
were all related to one pool called .rgw.buckets.index . A
Hello,
Recently we've observed on one of our Ceph clusters that uploading a large
number of small files (~2000 x 2k) fails. The HTTP return code shows 200, but the
file upload fails. Here is an example from the log:
2018-06-27 07:34:40.624103 7f0dc67cc700 1 == starting new request
req=0x7f0dc67c68
Are you running tight on RAM?
You might be running into http://tracker.ceph.com/issues/22464
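To see exactly which objects and shards are flagged, and to trigger a repair afterwards, the usual commands are along these lines (the PG id is a placeholder, taken from the inconsistent listing):
$ rados list-inconsistent-obj <pgid> --format=json-pretty
$ ceph pg repair <pgid>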
Paul
2018-06-28 17:17 GMT+02:00 Bryan Banister :
> Hi all,
>
>
>
> We started running a EC pool based object store, set up with a 4+2
> configuration, and we seem to be getting an almost constant report
You can access an offline OSD using ceph-objectstore-tool, which allows you to
enumerate and access specific objects.
Not sure this makes sense for any purpose other than low-level
debugging though...
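For example, with the OSD stopped, listing its contents looks roughly like this (the OSD id and data path are placeholders for a default deployment; a Filestore OSD may also need --journal-path):
$ systemctl stop ceph-osd@0
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list
# individual objects can then be exported with the get-bytes operation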
Thanks,
Igor
On 6/28/2018 5:42 AM, Yu Haiyang wrote:
Hi All,
Previously I read this article a
I think the second variant is what you need, but I'm not a ceph-deploy guru,
so there might be some nuances there...
Anyway, the general idea is to have just a single NVMe partition (for
both WAL and DB) per OSD.
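With ceph-deploy 2.x that would look roughly like this (host and device names are placeholders; with only --block-db given, the WAL ends up on the same partition):
$ ceph-deploy osd create --data /dev/sdb --block-db /dev/nvme0n1p1 <hostname>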
Thanks,
Igor
On 6/27/2018 11:28 PM, Pardhiv Karri wrote:
Thank you Igor f
I'm also not 100% sure but I think that the first one is the right way
to go. The second command only specifies the db partition but no
dedicated WAL partition. The first one should do the trick.
On 28.06.2018 22:58, Igor Fedotov wrote:
>
> I think the second variant is what you need. But I'm not
I'm going to hope that Igor is correct since I have a PR for DeepSea to change
this exact behavior.
With respect to ceph-deploy, if you specify --block-wal, your OSD will have a
block.wal symlink. Likewise, --block-db will give you a block.db symlink.
If you have both on the command line, you
On 28.06.2018 23:25, Eric Jackson wrote:
> Recently, I learned that this is not necessary when both are on the same
> device. The wal for the Bluestore OSD will use the db device when set to 0.
That's good to know. Thanks for the input on this Eric.
--
SUSE Linux GmbH, GF: Felix Imendörffer, Ja
The idea is to avoid a separate WAL partition - it doesn't make sense for a
single NVMe device and just complicates things.
And if you don't specify the WAL explicitly, it co-exists with the DB.
Hence I vote for the second option :)
On 6/29/2018 12:07 AM, Kai Wagner wrote:
I'm also not 100% sure but I thi
On Fri, Jun 29, 2018 at 2:38 AM, Andrei Mikhailovsky wrote:
> Hi Brad,
>
> This has helped to repair the issue. Many thanks for your help on this!!!
No problem.
>
> I had so many objects with broken omap checksum, that I spent at least a few
> hours identifying those and using the commands you'
Hi Andrei,
These are good questions. We have another cluster with Filestore and
bcache, but for this particular one I was interested in testing out
Bluestore. So I have used Bluestore both with and without bcache.
For my synthetic load on the VMs I'm using this fio command:
fio --randrepeat=1 --ioe
The ceph-objectstore-tool also has an (experimental?) mode to mount the OSD
store as a FUSE filesystem regardless of the backend. But I have to assume
what you're really after here is repairing individual objects, and the way
that works is different enough in BlueStore that I really wouldn't worry abou
On Fri, Jun 29, 2018 at 10:01 AM Yu Haiyang wrote:
>
> Ubuntu 16.04.3 LTS
>
4.4 kernel? AIO on CephFS is not supported by the 4.4 kernel; AIO there is
actually synchronous IO. The 4.5 kernel is the first version that
supports AIO on CephFS.
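A quick way to check which case you are in:
$ uname -r    # 4.5 or newer is needed for real AIO on CephFS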
> On Jun 28, 2018, at 9:00 PM, Yan, Zheng wrote:
>
> kernel version
You may find my talk at OpenStack Boston’s Ceph day last year to be useful:
https://www.youtube.com/watch?v=rY0OWtllkn8
-Greg
On Wed, Jun 27, 2018 at 9:06 AM Marc Schöchlin wrote:
> Hello list,
>
> i currently hold 3 snapshots per rbd image for my virtual systems.
>
> What i miss in the current d
On Wed, Jun 27, 2018 at 2:32 AM Nicolas Dandrimont <
ol...@softwareheritage.org> wrote:
> Hi,
>
> I would like to use ceph to store a lot of small objects. Our current usage
> pattern is 4.5 billion unique objects, ranging from 0 to 100MB, with a
> median
> size of 3-4kB. Overall, that's around 35
For RGW, compression works very well. We use RGW to store crash dumps; in
most cases, the compression ratio is about 2.0 ~ 4.0.
I tried to enable compression for cephfs data pool:
# ceph osd pool get cephfs_data all | grep ^compression
compression_mode: force
compression_algorithm: lz4
compressio
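Those pool properties are normally set with commands like the following (pool name as above):
# ceph osd pool set cephfs_data compression_mode force
# ceph osd pool set cephfs_data compression_algorithm lz4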
It seems there's no plan for that, and the VMware kernel documentation is only
shared with partners. You would be better off using iSCSI. By the way, I found that
the performance is much better with SCST than with ceph-iscsi. I don't think
ceph-iscsi is production-ready.
Regards,
Horace Ng
From: "Steve
You need 1 core per SATA disk; otherwise your load average will skyrocket
when your system is at full load and render the cluster unstable, i.e. ceph-mon
unreachable, slow requests, etc.
Regards,
Horace Ng
- Original Message -
From: "Brian :"
To: "Wladimir Mutel" , "ceph-users"
Hi,
I want to play around with my ceph.file.layout attributes such as stripe_unit
and object_size to see how they affect my Ceph FS performance.
However, I've been unable to set any attribute; I get the error below.
$ setfattr -n ceph.file.layout.stripe_unit -v 41943040 file1
setfattr: file1: Invalid a
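Two common causes of that error are that the file already contains data (layouts can only be changed while a file is still empty) and that the new stripe_unit exceeds the file's current object_size (4 MB by default). A sketch that avoids both, with purely illustrative values:
$ touch newfile    # set the layout before writing any data
$ setfattr -n ceph.file.layout.object_size -v 41943040 newfile
$ setfattr -n ceph.file.layout.stripe_unit -v 41943040 newfile
$ getfattr -n ceph.file.layout newfile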
Rahul,
if you are using whole drives for OSDs, then ceph-deploy is a good
option in most cases.
2018-06-28 18:12 GMT+05:00 Rahul S :
> Hi Vlad,
>
> Have not thoroughly tested my setup but so far things look good. Only
> problem is that I have to manually activate the osd's using the ceph-de
I'm using compression on a cephfs-data pool in luminous. I didn't do
anything special
$ sudo ceph osd pool get cephfs-data all | grep ^compression
compression_mode: aggressive
compression_algorithm: zlib
You can check how much compression you're getting on the OSDs:
$ for osd in `seq 0 11`; do ec
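Spelled out, that loop is roughly the following (the OSD ids and the grep pattern are assumptions; it has to run on the host that owns the OSDs so their admin sockets are reachable):
$ for osd in `seq 0 11`; do echo "osd.$osd"; sudo ceph daemon osd.$osd perf dump | grep bluestore_compressed; done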
Oh, also, because the compression is at the OSD level you don't see it
in ceph df. You just see that your RAW USED is not increasing as much as
you'd expect. E.g.:
$ sudo ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
785T 300T 485T 61.73
POOLS:
NAME