Gilberto Mautner wrote:
> Hello list,
>
> I'm thinking about this topology:
>
> NFS Client <---NFS---> ZFS Host <---iSCSI---> ZFS Node 1, 2, 3, etc.
>
> The idea here is to create a scalable NFS server by plugging in more
> nodes as more space is needed, striping data across them.
I see
Hello list,
I'm thinking about this topology:
NFS Client <---NFS---> ZFS Host <---iSCSI---> ZFS Node 1, 2, 3, etc.
The idea here is to create a scalable NFS server by plugging in more nodes as
more space is needed, striping data across them.
A question is: we know from the docs that ZFS
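On Solaris the plumbing for this would look roughly as follows; the
discovery addresses, device names, and pool name here are hypothetical, and
this is only a sketch of the idea, not a tested recipe:

    # on the ZFS host, discover each node's iSCSI target
    iscsiadm add discovery-address 192.168.1.11:3260
    iscsiadm add discovery-address 192.168.1.12:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi
    # stripe a pool across the imported LUNs and share it over NFS
    zpool create tank c2t1d0 c3t1d0 c4t1d0
    zfs set sharenfs=on tank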
Peter Schuller wrote:
>>> From what I read, one of the main things about ZFS is "Don't trust the
>>> underlying hardware". If this is the case, could I run Solaris under
>>> VirtualBox or under some other emulated environment and still get the
>>> benefits of ZFS such as end to end
You could probably run MythTV in a Linux domU within a Solaris system
(basically your same idea, but virtualizing the Linux instead of the
Solaris). The only hangup would be your TV tuner card(s). I use MythTV
with a separate Solaris file server but I've contemplated the
possibility of consolida
> >> From what I read, one of the main things about ZFS is "Don't trust the
> >> underlying hardware". If this is the case, could I run Solaris under
> >> VirtualBox or under some other emulated environment and still get the
> >> benefits of ZFS such as end to end data integrity?
>
> You could prob
Eric L. Frederich writes:
>> From what I read, one of the main things about ZFS is "Don't trust the
>> underlying hardware". If this is the case, could I run Solaris under
>> VirtualBox or under some other emulated environment and still get the
>> benefits of ZFS such as end to end data integrity
File system journals may support a variety of availability models, ranging from
simple support for fast recovery (return to consistency) with possible data
loss, to those that attempt to support synchronous write semantics with no data
loss on failure, along with fast recovery.
The simpler models
> From what I read, one of the main things about ZFS is "Don't trust the
> underlying hardware". If this is the case, could I run Solaris under
> VirtualBox or under some other emulated environment and still get the benefits
> of ZFS such as end to end data integrity?
The reason I ask is that the
Are you rebooting without syncing the boot archive? Or have you tweaked
your boot archive such that /etc/zfs/zpool.cache isn't in
filelist.ramdisk?
- Eric
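For anyone hitting the same thing, a sketch of what to check, assuming an
x86 boot archive with the usual Nevada paths:

    # the import cache must exist and be listed for inclusion in the archive
    ls -l /etc/zfs/zpool.cache
    grep zpool.cache /boot/solaris/filelist.ramdisk
    # rebuild the boot archive so the current cache travels into the next boot
    bootadm update-archive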
On Mon, Jan 07, 2008 at 11:20:26AM -0800, Andre Lue wrote:
> I have a slimmed-down build on 61 and 72. None of these systems are
> automatic
On Mon, 7 Jan 2008, Andre Lue wrote:
> I usually have to do a zpool import -f pool to get it back.
What do you mean by 'usually'?
After the import, what's the output of 'zpool status'?
During reboot, are there any relevant messages on the console?
Regards,
markm
I have a slimmed-down build on 61 and 72. None of these systems are
automatically remounting the zpool on a reboot.
zfs list returns "no datasets available"
zpool list returns "no pools available"
zfs mount -v -a runs but doesn't mount the filesystem. I usually have to do a
zpool import -f pool
On Sun, Jan 06, 2008 at 08:05:56AM -0800, sudarshan sridhar wrote:
> My exact doubt is: if COW is the default behavior of ZFS, does the
> COWed data get written to the same physical drive where the filesystem
> resides?
Just to clarify: there is no way to disable COW in ZFS.
> If so the physical
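A quick way to watch copy-on-write at work, with hypothetical pool and
dataset names: take a snapshot, overwrite a file in place, and note that the
snapshot still holds the old blocks, i.e. the rewrite went to fresh space in
the pool:

    zfs snapshot tank/data@before
    # overwrite part of an existing file in place
    dd if=/dev/urandom of=/tank/data/file bs=128k count=8 conv=notrunc
    # the snapshot's 'used' grows as it pins the superseded blocks
    zfs list -o name,used,referenced tank/data tank/data@before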
The problem is that the ZIL device is treated just like another toplevel
vdev. As part of the import process, we find all vdevs and assemble the
config, and verify that the sum of all vdev GUIDs matches the expected
sum. Now, each vdev only stores enough configuration to keep track of
the toplevel
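One can see the per-vdev piece of this by dumping a device's labels (path
hypothetical); note the expected GUID sum itself lives in the uberblock, not
in any single label:

    # each label carries the vdev's own guid and its top-level guid
    zdb -l /dev/dsk/c1t0d0s0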
Marc Temkin wrote:
>
> Eric,
>
> ...
> ... In addition we are interested in getting a speaker on the Sun ZFS
> technology. If you know of any available speakers knowledgeable with
> ZFS, please let me know or pass on to them my contact information.
>
> Thanks,
>
> Marc Temki
Perhaps this is being tracked as 6538021?
http://bugs.opensolaris.org/view_bug.do?bug_id=6538021
-- richard
Bill Moloney wrote:
> This is a re-post of this issue ... I didn't get any replies to the previous
> post of 12/27 ... I'm hoping someone is back from holiday
> who may have some insight in
parvez shaikh wrote:
> Hello,
>
> I am learning ZFS, its design and layout.
>
> I would like to understand how intent logs are different from journals.
>
> A journal too is a log of updates meant to keep the file system
> consistent across crashes. The purpose of the intent log appears to be
> the same. I hope
Hello,
I am learning ZFS, its design and layout.
I would like to understand how intent logs are different from journals.
A journal too is a log of updates meant to keep the file system consistent
across crashes. The purpose of the intent log appears to be the same. I hope
I am not missing something important
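One concrete difference shows up in administration: the intent log is an
ordinary (optionally separate) vdev holding records of synchronous writes,
replayed only after a crash, while on-disk consistency itself comes from
copy-on-write transaction groups, so there is never a journal-style replay
just to make the pool consistent. A sketch, with hypothetical device names
(separate log devices need a recent Nevada build):

    # the slog only speeds synchronous writes; consistency never depends on it
    zpool create tank mirror c1t0d0 c2t0d0 log c3t0d0
    zpool status tank    # the log appears as its own top-level vdev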
Bill Moloney wrote:
> Taking it out does not impact the immediate function of the pool,
> but the inability to re-import it after this event is a significant issue.
> Has
> anyone found a workaround for this problem ? I have data in a pool that
> I cannot import because the separate zil is no lon
This is a re-post of this issue ... I didn't get any replies to the previous
post of 12/27 ... I'm hoping someone is back from holiday
who may have some insight into this problem ... Bill
when I remove a separate zil disk from a pool, the pool continues to function,
logging synchronous writes to t
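The failure he describes can be reproduced along these lines (pool and
device names hypothetical); with the log device gone, the assembled config
no longer satisfies the vdev GUID sum check described earlier in this
thread, so the import is refused:

    # the pool keeps working after the slog disappears...
    zpool status tank
    # ...but once exported it cannot come back, because the missing log
    # device breaks the vdev GUID sum check at import time
    zpool export tank
    zpool import tank    # refused: a device is missing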
Hello Alex,
Monday, January 7, 2008, 11:59:42 AM, you wrote:
A> Hi, I had a question regarding a situation I have with my zfs pool.
A> I have a zfs pool "ftp" and within it are 3 250GB drives in a
A> raidz and 2 400GB drives in a simple mirror. The pool itself has more
A> than 400GB free and I w
Hi, I had a question regarding a situation I have with my zfs pool.
I have a zfs pool "ftp" and within it are 3 250GB drives in a raidz and 2
400GB drives in a simple mirror. The pool itself has more than 400GB free and I
would like to remove the 400GB drives from the server. My concern is how t
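For reference, ZFS at this point cannot shrink a pool by removing a data
vdev; the closest operation is detaching one half of the mirror, which still
leaves a (now unredundant) 400GB top-level vdev in the pool. A sketch with
hypothetical device names:

    # detaching one side of the mirror frees that disk...
    zpool detach ftp c5t1d0
    # ...but the remaining top-level vdev cannot be removed; at this point
    # 'zpool remove' only handles hot spares and cache devices
    zpool remove ftp c5t0d0    # expected to fail for a data vdev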