On Tue, Jan 19, 2016 at 12:36 PM, Gregory Farnum wrote:
> > I've found that using knfsd does not preserve cephfs directory and file
> > layouts, but using nfs-ganesha does. I'm currently using nfs-ganesha
> > 2.4dev5 and it seems stable so far.
>
> Can you expand on that? In what manner is it not preserving the layouts?
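For context, directory and file layouts on a native CephFS mount are exposed
as virtual xattrs, so a rough way to compare behaviour (assuming a CephFS
mount at /mnt/cephfs and an NFS re-export of it elsewhere; the paths and the
pool name are made up) is something like:

    # on the native CephFS mount
    getfattr -n ceph.dir.layout /mnt/cephfs/mydir
    setfattr -n ceph.dir.layout.pool -v ssd-pool /mnt/cephfs/mydir

    # repeat the getfattr on the NFS client mount and check whether new
    # files created under mydir actually land in ssd-pool

The ceph.dir.layout* xattrs work over the kernel and FUSE clients; whether
they survive a knfsd or ganesha re-export is exactly the question above.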
But interestingly enough, if you look down to where they run targetcli ls,
it shows an RBD backing store.
Maybe it's using the krbd driver to actually do the Ceph side of the
communication, but LIO plugs into this rather than just talking to a dumb
block device?
This needs further investigation.
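For comparison, the "dumb block device" route would just be krbd plus the
stock iblock backstore, roughly like this (pool and image names are made up):

    rbd map rbd/iscsi-img                     # shows up as e.g. /dev/rbd0
    targetcli /backstores/block create name=disk0 dev=/dev/rbd0

So the interesting detail is whether their targetcli output shows a dedicated
rbd backstore type rather than a plain block one, which would point at the
out-of-tree target_core_rbd work instead of the setup above.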
On Tue, Jan 19, 2016 at 10:34 AM, Nick Fisk wrote:
> But interestingly enough, if you look down to where they run targetcli ls,
> it shows an RBD backing store.
>
> Maybe it's using the krbd driver to actually do the Ceph side of the
> communication, but LIO plugs into this rather than just talking to a dumb
> block device?
Hi,
I would like to know whether CephFS will be marked as stable as soon as
Ceph-1.0 is released.
Regards - Willi
So is this a different approach from the one Mike Christie used here:
http://www.spinics.net/lists/target-devel/msg10330.html ?
It is confusing, because that patch also implements a target_core_rbd
module. Or not?
2016-01-19 18:01 GMT+08:00 Ilya Dryomov :
> On Tue, Jan 19, 2016 at 10:34 AM, Nick Fisk
It sounds like you're just assuming these drives don't perform well...
- Original Message -
From: "Mark Nelson"
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 2:17:19 PM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs
Take Greg's comments to heart, because he's
Not at all! For all we know, the drives may be the fastest ones on the
planet. My comment still stands though. Be skeptical of any one
benchmark that shows something unexpected. 90% of native SSD/NVMe IOPS
performance in a distributed storage system is just such a number. Look
for test rep
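One way to cross-check a number like that is to measure the raw device and
the Ceph path with the same tool and settings; a rough fio sketch (device
path, pool, and image names are made up, and you need fio built with rbd
support for the second job):

    # raw device baseline, 4k random reads
    fio --name=raw --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
        --time_based --group_reporting

    # same profile through librbd
    fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=bench --rw=randread --bs=4k --iodepth=32 --runtime=60 \
        --time_based

If the second number really is ~90% of the first, great, but it should
reproduce across runs and across hosts before anyone draws conclusions.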
I was wondering if there is anything else I could provide outside of the
radosgw logs during the file upload? At this point I am uploading 7 MB
files repeatedly to try to reproduce this error, but so far I do not have
any missing keys in my test bucket. I can't see this happening if we
were to ju
Everyone is right - sort of :)
It is the target_core_rbd module that I made, which was rejected
upstream, along with modifications from SUSE that added persistent
reservations support. I also made some modifications to rbd so
target_core_rbd and krbd could share code. target_core_rbd uses rbd like
On Fri, Jan 15, 2016 at 5:04 PM, seapasu...@uchicago.edu wrote:
> I have looked all over and I do not see any explicit mention of
> "NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959" in the logs nor do I
> see a timestamp from November 4th although I do see log rotations dating
> back to October
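Independent of the logs, it might be worth checking whether the key ever
made it into RADOS at all; a rough sketch (the bucket name is a placeholder,
and the RGW data pool name varies by release and configuration):

    radosgw-admin object stat --bucket=<bucket> \
        --object=NWS_NEXRAD_NXL2DP_PAKC_2015010111_20150101115959
    rados -p .rgw.buckets ls | grep NWS_NEXRAD_NXL2DP_PAKC

If object stat can't find it even though the upload returned success, the
problem is on the RGW/RADOS side rather than something the logs missed.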
Hi,
Someone asked me if he could get access to the BTRFS defragmenter we
used for our Ceph OSDs. I took a few minutes to put together a small
GitHub repository with:
- the defragmenter I've been asked about (tested on 7200 rpm drives and
designed to put low IO load on them),
- the scrub scheduler
Hey guys, I am having an S3 upload to Ceph issue wherein the upload seems to
crawl after the first few chunks of a multipart upload. The test file is 38 MB
in size, and the upload was tried with the S3 default chunk size of 15 MB, then
tried again with the chunk size set to 5 MB, and then was tested again
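If the client is s3cmd (an assumption on my part, since 15 MB is its default
multipart chunk size), the chunk size can be set per run while testing, e.g.:

    s3cmd put bigfile.bin s3://testbucket/ --multipart-chunk-size-mb=5
    s3cmd put bigfile.bin s3://testbucket/ --multipart-chunk-size-mb=15

(the file and bucket names are made up). Timing both runs against the same
bucket should show whether the slowdown tracks the number of parts.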
Hey Kobi,
You stated:
> >> You can add:
> >> access_log_file=/var/log/civetweb/access.log
> >> error_log_file=/var/log/civetweb/error.log
> >>
> >> to rgw frontends in ceph.conf, though these logs are thin on info
> >> (source IP, date, and request)
How is this done exactly in the config file?
Of course, I figured this out. You meant to just append it to the frontends
setting. Very confusing, as it's unlike every other Ceph setting.
rgw frontends = civetweb num_threads=150
error_log_file=/var/log/radosgw/civetweb.error.log
access_log_file=/var/log/radosgw/civetweb.access.log
Any documentat
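For the archives: the whole thing ends up as a single "rgw frontends" line in
the RGW client section of ceph.conf; a rough sketch (the section name and port
are just examples):

    [client.rgw.gateway1]
    rgw frontends = civetweb port=7480 num_threads=150 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log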