ttr("foo", "security.selinux", "system_u:object_r:fusefs_t:s0", 255)
> = 30
>
> -
> But I can assure you it's only a single filesystem, and a single ceph-fuse
> client running.
at 14:57, Ric Wheeler wrote:
> > Is this move between directories on the same file system?
>
> It is; we only have a single CephFS in use. There's also only a single
> ceph-fuse client running.
>
> What does differ, though, is that different ACLs are set for source and target
>
Is this move between directories on the same file system?
Rename as a system call only works within a single file system.
The user-space mv command falls back to a copy (followed by a delete) when the
source and target are not on the same file system.
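(For illustration, a minimal C sketch, untested, with made-up paths under /mnt/cephfs and /tmp, showing both cases: rename(2) succeeding within one file system and failing with EXDEV across file systems, which is when mv falls back to copy plus delete.)

/* Minimal sketch, untested: rename(2) is atomic within one file system but
 * fails with EXDEV across file systems. Paths are examples only. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Same file system: a single atomic rename. */
    if (rename("/mnt/cephfs/dir-a/file", "/mnt/cephfs/dir-b/file") != 0)
        printf("same-fs rename failed: %s\n", strerror(errno));

    /* Different file systems: the kernel returns EXDEV, and mv would then
     * copy the data and delete the source instead. */
    if (rename("/mnt/cephfs/dir-a/file", "/tmp/file") != 0 && errno == EXDEV)
        printf("cross-fs rename rejected: %s\n", strerror(errno));

    return 0;
}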
Regards,
Ric
On Fri, May 25, 2018, 8:51 AM John Spray wrote:
> On Fri, May 25, 2018 at 1:10 PM, Oliver F
On 02/28/2018 10:06 AM, Max Cuttins wrote:
On 28/02/2018 15:19, Jason Dillaman wrote:
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote:
I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:
CentOS 7.5
(which is not available yet, it's still
I might have missed something in the question.
Fstrim does not free up space at the user level that you see with a normal
df.
It is meant to let the block device know about all of the space unused by
the file system.
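(A minimal sketch, untested, of the FITRIM ioctl that the fstrim tool issues; the mount point /mnt/osd0 is a made-up example. Note that df output does not change afterwards; only the device learns which blocks are unused.)

/* Minimal sketch, untested: issue FITRIM over a mounted filesystem so the
 * underlying block device learns which ranges are unused. */
#include <fcntl.h>
#include <linux/fs.h>      /* struct fstrim_range, FITRIM */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct fstrim_range range = {
        .start  = 0,
        .len    = UINT64_MAX,   /* trim the whole filesystem */
        .minlen = 0,
    };

    int fd = open("/mnt/osd0", O_RDONLY);   /* made-up mount point */
    if (fd < 0) { perror("open"); return 1; }

    /* df output is unchanged afterwards -- only the device is informed. */
    if (ioctl(fd, FITRIM, &range) < 0) { perror("FITRIM"); close(fd); return 1; }

    printf("trimmed %llu bytes\n", (unsigned long long)range.len);
    close(fd);
    return 0;
}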
Regards,
Ric
On Jan 29, 2018 11:56 AM, "Wido den Hollander" wrote:
>
>
> O
In any modern distribution, you should be fine.
Regards,
Ric
On Nov 23, 2017 9:55 AM, "Hüseyin ÇOTUK" wrote:
> Hello Everyone,
>
> We are considering buying 4Kn block-size disks to use with Ceph. These
> disks report native 4 kB blocks to the OS rather than using 512-byte emulated
> 4 kB sectors.
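(A minimal sketch, untested, of how to tell a 4Kn drive, which reports 4096-byte logical and physical sectors, from a 512e drive, which reports 512/4096; /dev/sda is a placeholder device.)

/* Minimal sketch, untested: query the logical and physical sector sizes the
 * kernel sees for a block device. /dev/sda is an example path only. */
#include <fcntl.h>
#include <linux/fs.h>      /* BLKSSZGET, BLKPBSZGET */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sda", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;             /* sector size presented to the OS */
    unsigned int physical = 0;   /* sector size used internally by the drive */

    if (ioctl(fd, BLKSSZGET, &logical) < 0)   { perror("BLKSSZGET");  return 1; }
    if (ioctl(fd, BLKPBSZGET, &physical) < 0) { perror("BLKPBSZGET"); return 1; }

    /* 4Kn drives report 4096/4096; 512e drives report 512/4096. */
    printf("logical %d bytes, physical %u bytes\n", logical, physical);
    close(fd);
    return 0;
}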
On 09/14/2017 11:17 AM, Ronny Aasen wrote:
On 14. sep. 2017 00:34, James Okken wrote:
Thanks Ronny! Exactly the info I need, and kind of what I thought the answer
would be as I was typing and thinking more clearly about what I was asking. I just
was hoping Ceph would work like this, since the openst
On 08/02/2016 07:26 PM, Ilya Dryomov wrote:
> This seems to reflect the granularity (4194304), which matches the
> 8192 pages (8192 x 512 = 4194304). However, there is no alignment
> value.
>
> Can discard_alignment be specified with RBD?
It's exported as a read-only sysfs attribute, just like
disca
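(A minimal sketch, untested, of reading those read-only sysfs attributes for a mapped RBD device; the device name rbd0 is a placeholder, and the exact attribute set depends on the kernel version.)

/* Minimal sketch, untested: print the discard attributes the kernel exports
 * read-only in sysfs for a block device such as rbd0. */
#include <stdio.h>

static void print_attr(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");
    if (!f) { printf("%s: not available\n", path); return; }
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", path, buf);
    fclose(f);
}

int main(void)
{
    print_attr("/sys/block/rbd0/queue/discard_granularity");
    print_attr("/sys/block/rbd0/queue/discard_max_bytes");
    print_attr("/sys/block/rbd0/discard_alignment");
    return 0;
}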
On 03/29/2016 04:53 PM, Nick Fisk wrote:
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Ric Wheeler
Sent: 29 March 2016 14:40
To: Nick Fisk ; 'Sage Weil'
Cc: ceph-users@lists.ceph.com; device-mapper development
Subject: Re: [
On 03/29/2016 04:35 PM, Nick Fisk wrote:
One thing I picked up on when looking at dm-cache for doing caching with
RBDs is that it wasn't really designed to be used as a writeback cache for
new writes, in the way you would expect a traditional writeback cache to
work. It seems all the policies are
On 03/29/2016 03:42 PM, Sage Weil wrote:
On Tue, 29 Mar 2016, Ric Wheeler wrote:
However, if the write cache would be "flushed in-order" to Ceph,
you would just lose x seconds of data and, hopefully, not have a
corrupted disk. That could be acceptable for some people. I was just
On 03/29/2016 01:35 PM, Van Leeuwen, Robert wrote:
If you try to look at the rbd device under dm-cache from another host, of course
any data that was cached on the dm-cache layer will be missing since the
dm-cache device itself is local to the host you wrote the data from originally.
And here it
On 03/29/2016 10:06 AM, Van Leeuwen, Robert wrote:
On 3/27/16, 9:59 AM, "Ric Wheeler" wrote:
On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote:
My understanding of how a writeback cache should work is that it should only
take a few seconds for writes to be streamed onto the n
vm as really easier than doing it under kvm, but I am a big
believer in the need for much better tools to help manage things like this so
that users don't see the complexity.
Ric
-----Original Message-----
From: Ric Wheeler [mailto:rwhee...@redhat.com]
Sent: 27 March 2016 09:00
To: V
On 03/25/2016 02:00 PM, Jan Schermer wrote:
V5 is supposedly stable, but that only means it will be just as bad as any
other XFS.
I recommend avoiding XFS whenever possible. Ext4 works perfectly and I never
lost any data with it, even when it got corrupted, while XFS still likes to eat
the da
On 03/16/2016 12:15 PM, Van Leeuwen, Robert wrote:
My understanding of how a writeback cache should work is that it should only
take a few seconds for writes to be streamed onto the network and is focussed
on resolving the speed issue of small sync writes. The writes would be bundled
into larg
On 03/08/2016 08:09 PM, Jason Dillaman wrote:
librbd provides crash-consistent IO. It is still up to your application to
provide its own consistency by adding barriers (flushes) where necessary. If
you flush your IO, once that flush completes you are guaranteed that your
previous IO is safel
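(A minimal sketch, untested and with error handling omitted, of the write-then-flush pattern described above using the librbd C API; the pool name "rbd" and image name "test-img" are made up.)

/* Minimal sketch, untested: write through librbd, then flush so that a
 * completed flush means the preceding writes are stored on the cluster.
 * Build with: cc flush_example.c -lrbd -lrados */
#include <rados/librados.h>
#include <rbd/librbd.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    char buf[4096];

    memset(buf, 0xab, sizeof(buf));

    rados_create(&cluster, NULL);                 /* client.admin by default */
    rados_conf_read_file(cluster, NULL);          /* default ceph.conf search */
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &ioctx);   /* example pool name */
    rbd_open(ioctx, "test-img", &image, NULL);    /* example image name */

    /* The write may still be buffered after it returns... */
    rbd_write(image, 0, sizeof(buf), buf);

    /* ...but once rbd_flush() returns success, the preceding writes are
     * durable -- this is the "barrier" the application has to add. */
    rbd_flush(image);

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}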
you please suggest me such a raid card?
Because we are on the verge of deciding whether to use hardware RAID or software
RAID. Our OpenStack cluster uses full SSD storage (local RAID 10) and
my manager wants to utilize hardware RAID with SSD disks.
On Mon, Mar 7, 2016 at 10:04 AM, Ric Wheeler
sks if the raid configuration is raid 0 or raid 1.
On Mon, Mar 7, 2016 at 9:21 AM, Ric Wheeler wrote:
It is perfectly reasonable and common to use hardware RAID cards in
writeback mode under XFS (and under Ceph) if you configure them properly.
T
It is perfectly reasonable and common to use hardware RAID cards in writeback
mode under XFS (and under Ceph) if you configure them properly.
The key thing is that with the writeback cache enabled, you need to make sure that
the S-ATA drives' own write cache is disabled. Also make sure that yo
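(A minimal sketch, untested, of checking what the drive currently reports; the SCSI address 0:0:0:0 is a placeholder. The kernel's sd driver exposes the drive's own write cache mode in sysfs, and writing "write through" to the same attribute as root, or using hdparm -W0 on an ATA drive, is one way to disable it.)

/* Minimal sketch, untested: read the drive's volatile write cache mode as
 * reported by the sd driver. The SCSI address 0:0:0:0 is a placeholder. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/scsi_disk/0:0:0:0/cache_type";
    char mode[64] = "";

    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    if (fgets(mode, sizeof(mode), f))
        printf("drive cache mode: %s", mode);  /* e.g. "write back" or "write through" */

    fclose(f);
    return 0;
}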
I am not sure why you want to layer a clustered file system (OCFS2) on top of
Ceph RBD. Seems like a huge overhead and a ton of complexity.
Better to use CephFS if you want Ceph at the bottom, or to just use iSCSI LUNs
under OCFS2.
Regards,
Ric
On 01/04/2016 10:28 AM, Srinivasula Maram wr
On 05/05/2015 04:13 AM, Yujian Peng wrote:
Emmanuel Florac writes:
On Mon, 4 May 2015 07:00:32 +0000 (UTC),
Yujian Peng 126.com> wrote:
I'm encountering a data disaster. I have a Ceph cluster with 145 OSDs.
The data center had a power problem yesterday, and all of the Ceph
nodes were down.
On 04/05/2015 11:22 AM, Nick Fisk wrote:
Hi Justin,
I'm doing iSCSI HA. Several others and I have had trouble with LIO and
Ceph, so until the problems are fixed, I wouldn't recommend that approach.
But hopefully it will become the best solution in the future.
If you need iSCSI, currently
On 12/27/2014 02:32 AM, Lindsay Mathieson wrote:
I see a lot of people mount their XFS OSDs with nobarrier for extra
performance; certainly it makes a huge difference to my small system.
However, I don't do it, as my understanding is that this runs a risk of data
corruption in the event of power failu
On 10/15/2014 08:43 AM, Amon Ott wrote:
On 14.10.2014 16:23, Sage Weil wrote:
On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
*