Hi Robert,
On 1/14/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
I did 'zfs umount -a' in the global zone and all (non-busy) datasets
in the local zone were also unmounted (one dataset was delegated to the
local zone and other datasets were created inside it). Well, I believe it
shouldn't be that way.
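For context, a minimal sketch of the kind of setup being described; the pool,
zone, and dataset names are hypothetical:

    # global zone: delegate a dataset to a local zone
    zfs create tank/delegated
    zonecfg -z myzone "add dataset; set name=tank/delegated; end"

    # local zone: create further datasets under the delegated one
    zfs create tank/delegated/data

    # global zone: unmounts all non-busy ZFS datasets, reportedly
    # including the ones mounted inside the local zone
    zfs umount -a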
On 1/15/07, mike <[EMAIL PROTECTED]> wrote:
1) Is a hardware-based RAID behind the scenes needed? Can ZFS safely
be considered a replacement for that? I assume that anything below the
filesystem level in regards to redundancy could be an added bonus, but
is it necessary at all?
ZFS is more reliable…
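To make that concrete: ZFS can supply the redundancy itself, so a hardware
RAID layer underneath is not required. A sketch with hypothetical device
names:

    # single-parity raidz across five plain disks, no RAID controller
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
    zpool status tank

Giving ZFS the raw disks also lets it detect and repair silent corruption
through its own checksums, which a controller-level RAID cannot do end to end.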
Not to be a ninny, but one-write/many-read is pretty much the design
scenario for using CacheFS in conjunction with NFS on the client side.
That is, assuming that only one client is doing the writing. If all your
clients are doing writing (just maybe not to the same file), then you
DON'T have…
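A rough sketch of the client-side CacheFS arrangement mentioned above; the
server name, cache directory, and mount points are hypothetical:

    # one-time: create the local cache
    cfsadmin -c /var/cache/cachefs

    # mount the NFS export through CacheFS so repeated reads hit
    # the local cache instead of the network
    mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/cachefs \
        server:/export/data /mnt/data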
On Jan 14, 2007, at 21:37, Wee Yeh Tan wrote:
On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Mike Papper wrote:
>
> The alternative I am considering is to have a single filesystem
> available to many clients using a SAN (iSCSI in this case). However
> only one client would mount the ZFS filesystem as read/write while the
> others would mount it read-only.
1) Is a hardware-based RAID behind the scenes needed? Can ZFS safely
be considered a replacement for that? I assume that anything below the
filesystem level in regards to redundancy could be an added bonus, but
is it necessary at all?
2) I am looking into building a 10-drive system using 750GB or…
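For a 10-drive pool, two common layouts would be a single double-parity
raidz2 vdev or two 5-disk raidz vdevs striped together; a sketch with
hypothetical device names:

    # option A: one 10-disk raidz2 (survives any two drive failures)
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

    # option B: two 5-disk raidz vdevs (more IOPS, one failure per vdev)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0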
On Sun, Jan 14, 2007 at 07:06:20PM -0800, Mike Papper wrote:
> Thanks for the feedback, it seems caching is the main concern and if I
> always only write any given file once (then perhaps do a flush and a
> close after the write to empty the cache) and from then only ever read
> the file, will the scheme I had in mind work?
Thanks for the feedback, it seems caching is the main concern and if I
always only write any given file once (then perhaps do a flush and a
close after the write to empty the cache) and from then only ever read
the file, will the scheme I had in mind work?
Also, will ZFS really prevent my mounting…
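A minimal sketch of that write-once-then-flush pattern in shell, with
hypothetical paths; note it only flushes the writer's cache, not whatever
the read-only clients may have cached:

    # write the file exactly once...
    cp /staging/file.dat /tank/shared/file.dat
    # ...then force dirty data to stable storage before readers see it
    sync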
On 1/13/07, Richard Elling <[EMAIL PROTECTED]> wrote:
And a third choice is cutting your 40GByte drives in two such that you
have a total of 6x 20 GByte partitions spread across your 80 and 40 GByte
drives. Then install three 2-way mirrors across the disks. Some people
like such things, and the…
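A sketch of that third layout, with hypothetical device and slice names; the
important detail is that the two sides of each mirror sit on different
physical spindles:

    # 80 GByte drive: two 20 GByte slices (c0d0s0, c0d0s1)
    # 40 GByte drives: two 20 GByte slices each (c0d1s0/s1, c1d0s0/s1)
    zpool create tank \
        mirror c0d0s0 c0d1s0 \
        mirror c0d0s1 c1d0s0 \
        mirror c0d1s1 c1d0s1

Paired this way, losing any single drive still leaves every mirror with one
working side.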
thanks all for the feedback! i definitely learned a lot-- storage isn't
anywhere near my field of expertise, so it's great to get some real examples to
go with all the buzzwords you hear around the watercooler. ;)
i'll probably give one of the suggested raid-z or mirroring setups a try when i…
On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
Mike Papper wrote:
>
> The alternative I am considering is to have a single filesystem
> available to many clients using a SAN (iSCSI in this case). However
> only one client would mount the ZFS filesystem as read/write while the
> others would mount it read-only.
Mike Papper wrote:
The alternative I am considering is to have a single filesystem
available to many clients using a SAN (iSCSI in this case). However
only one client would mount the ZFS filesystem as read/write while the
others would mount it read-only. For my application, all files are
written once…
Hi, I am considering using ZFS so that multiple clients can "share" the
same filesystem. I know that ZFS does not support a distributed filesystem.
The alternative I am considering is to have a single filesystem
available to many clients using a SAN (iSCSI in this case). However only
one client would mount the ZFS filesystem as read/write while the others
would mount it read-only.
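An alternative the thread keeps coming back to is to let exactly one host
import the pool and serve the data over NFS, rather than having several
initiators touch the same pool; a hedged sketch with hypothetical names:

    # on the single read/write host: share the filesystem read-only
    zfs set sharenfs=ro tank/shared

    # on each reading client
    mount -F nfs -o ro server:/tank/shared /mnt/shared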
After having read that, I have to say Bravo to that team. It really sounds like
they are doing a great job.
This raises the question of when the SATA framework will be available
for testing.
-brian
-----Original Message-----
From: "Richard Elling" <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org…