Hi,
Is it identical?
In the places we use sync=disabled (e.g. analysis scratch areas),
we're totally content with losing the last x seconds/minutes of writes,
and our understanding is that on-disk consistency is not impacted.
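For context, that's just a per-dataset property; a minimal sketch,
assuming a hypothetical scratch dataset named tank/scratch:

# Hedged sketch: flip the ZFS sync property on a scratch dataset.
# "tank/scratch" is a made-up dataset name.
import subprocess

subprocess.run(["zfs", "set", "sync=disabled", "tank/scratch"], check=True)
subprocess.run(["zfs", "get", "sync", "tank/scratch"], check=True)  # verify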
Cheers,
Dan
On Mon, Nov 12, 2018 at 3:16 PM Kevin Olbrich wrote:
>
> Hi Dan,
>
> ZFS w
Yes, the access VM layer is there because of multi-tenancy - we need to
expose parts of the storage to different private environments (which can
potentially be on private IP addresses). And we need both NFS and CIFS.
On Mon, Nov 12, 2018 at 3:54 PM Ashley Merrick
wrote:
> Does your use cas
Does your use case mean you need something like nfs/cifs and can’t use
CephFS mount directly?
There have been quite a few advances in that area, with quotas and user
management, in recent versions.
But obviously it all depends on your use case at the client end.
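To make that concrete, the quota and user-management side is roughly the
following these days (filesystem name, client name, paths and the size
are all made up):

# Rough sketch of per-client caps and per-directory quotas on CephFS.
# "cephfs", "client.tenant1", "/tenant1" and the 1 TiB limit are
# hypothetical.
import os
import subprocess

# Key that can only access its own subtree (Luminous and later).
subprocess.run(
    ["ceph", "fs", "authorize", "cephfs", "client.tenant1", "/tenant1", "rw"],
    check=True,
)

# Cap the subtree via the quota xattr, from an admin-capable CephFS mount.
os.setxattr("/mnt/cephfs/tenant1", "ceph.quota.max_bytes", b"1099511627776")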
On Mon, 12 Nov 2018 at 10:51 PM, Premysl Kouril wrote:
Some kind of single point will always be there, I guess, because even if
we go with the distributed filesystem, it will be mounted to the access
VM and this access VM will be providing NFS/CIFS protocol access. So this
machine is a single point of failure (indeed we would be running two of
them for
ac
My 2 cents would be: it depends on how much HA you need.
Going with the monster VM you have a single point of failure and a single
point of network congestion.
If you go the CephFS route you remove that single point of failure if you
mount clients directly. And you can also remove that single point of
network congestion.
Hi Kevin,
I should have also said that we are internally inclined towards the
"monster VM" approach due to its seemingly simpler architecture (data
distribution on the block layer rather than on the file system layer). So
my original question is more about comparing the two approaches
(distribution on the block layer vs. distribution on the file system
layer).
Hi Dan,
ZFS without sync would be very much identical to ext2/ext4 without journals
or XFS with barriers disabled.
The ARC cache in ZFS is awesome, but disabling sync on ZFS is very high
risk (using ext4 with the KVM cache mode "unsafe" would be similar, I
think).
Also, ZFS only works as expected with schedu
We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
It's very stable, and if your use case permits, you can set zfs
sync=disabled to get very fast write performance that's tough to beat.
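For the curious, from inside the VM the whole chain is only a handful of
commands; a sketch with placeholder names, assuming the RBD image shows
up in the guest as /dev/vdb:

# Sketch of the ZFS-on-RBD-in-a-VM setup described above; "tank",
# "tank/share" and /dev/vdb are placeholders.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

run("zpool", "create", "tank", "/dev/vdb")        # pool on the RBD-backed disk
run("zfs", "create", "tank/share")                # dataset to export
run("zfs", "set", "sync=disabled", "tank/share")  # the fast-but-lossy mode
run("zfs", "set", "sharenfs=on", "tank/share")    # let ZFS manage the NFS export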
But if you're building something new today and have *only* the NAS
use-case then it would make b
Hi!
ZFS won't play nice on Ceph. Best would be to mount CephFS directly with
the ceph-fuse driver on the endpoint.
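A direct mount on the endpoint would look something like this (monitor
address, client id and paths are placeholders):

# Sketch of a direct ceph-fuse mount on the client; monitor address,
# client id, subtree and mount point are hypothetical.
import subprocess

subprocess.run(
    ["ceph-fuse", "-m", "10.0.0.1:6789", "--id", "tenant1",
     "-r", "/tenant1", "/mnt/cephfs"],
    check=True,
)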
If you definitely want to put a storage gateway between the data and the
compute nodes, then go with nfs-ganesha, which can export CephFS directly
without a local ("proxy") mount.
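The ganesha side is essentially one EXPORT block using the CEPH FSAL; a
sketch along these lines (export id, paths and the cephx user are
illustrative, check the nfs-ganesha docs for the exact keys):

# Sketch of a minimal FSAL_CEPH export for nfs-ganesha; the Python
# wrapper is only for illustration, and all ids/paths/users are made up.
GANESHA_EXPORT = """
EXPORT {
    Export_Id = 100;
    Path = /tenant1;           # path inside CephFS
    Pseudo = /tenant1;         # NFSv4 pseudo path
    Access_Type = RW;
    FSAL {
        Name = CEPH;           # talks to CephFS directly, no local mount
        User_Id = "tenant1";   # cephx client used by ganesha
    }
}
"""

with open("/etc/ganesha/ganesha.conf", "a") as conf:
    conf.write(GANESHA_EXPORT)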
I had