The links off the documentation page on the ZFS OpenSolaris site were
mysteriously pointing to the wrong subcommands on docs.sun.com. So if
you requested the man page for zfs, you actually got the man page for
zdump. Not cool :)
So I've gone through all the links and fixed them to point to the
correct man pages.
On Mon, Jun 19, 2006 at 04:48:13PM -0600, Mark Shellenbaum wrote:
> grant beattie wrote:
> >On Mon, Jun 19, 2006 at 01:37:55PM +0200, Detlef Drewanz wrote:
> >
> >>Hi,
> >>moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within
> >>zfs is possible to run. I also tried it and it worked for my
ZFS engineering got back to us today and said the following:
In addition to 6404018 there are a couple of other performance bottlenecks:
6413510 zfs: writing to ZFS filesystem slows down fsync() on other files
in the same FS
6429205 each zpool needs to monitor its throughput and throttle hea
ERj> 2) is it possible to easily add (-> more available space) and
> you can add disks to a raidz pool but it won't actually grow stripe
> width, and in order to preserve redundancy you will have to add at
> least pairs of disks.
if one is drive bay limited, replace *all* the raidz drives, one at a time
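The replace-every-drive approach above can be sketched as below. This is a dry-run sketch only: the pool and device names are hypothetical, and the `run` wrapper just echoes each command instead of executing it, since a real run needs an actual raidz pool.

```shell
# Dry-run sketch: grow a raidz pool's capacity by replacing each member
# disk with a larger one, one at a time. "run" only echoes the command.
run() { printf '+ %s\n' "$*"; }

POOL=tank                               # hypothetical pool name
OLD_DISKS="c1t0d0 c1t1d0 c1t2d0"        # current (smaller) disks
NEW_DISKS="c2t0d0 c2t1d0 c2t2d0"        # replacement (larger) disks

set -- $NEW_DISKS
for old in $OLD_DISKS; do
    new=$1; shift
    run zpool replace "$POOL" "$old" "$new"
    # In a real run, wait for "zpool status" to report the resilver
    # complete before replacing the next disk, or redundancy is lost.
done
```

Once every disk has been replaced and resilvered, the pool can use the extra space.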
Hello Ernst,
Tuesday, June 20, 2006, 12:32:55 AM, you wrote:
ERj> Hello ZFS forum,
ERj> I'm curious about ZFS and read a bit about it, though still have a few
open questions.
ERj> I'd like to have both ...
ERj> - a pooling of harddisk space (like LVM in Linux) and
ERj> - integrated data red
grant beattie wrote:
On Mon, Jun 19, 2006 at 01:37:55PM +0200, Detlef Drewanz wrote:
Hi,
moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within
zfs is possible to run. I also tried it and it worked for my
2.6 GB Home directory very well. Does anyone see any issues ?
ufsdump can't write ACLs to ZFS yet.
Hello ZFS forum,
I'm curious about ZFS and read a bit about it, though still have a few open
questions.
I'd like to have both ...
- a pooling of harddisk space (like LVM in Linux) and
- integrated data redundancy & possibly speedups (something like RAID-5)
RAID-Z seems to provide that, if I'
On Mon, Jun 19, 2006 at 01:37:55PM +0200, Detlef Drewanz wrote:
> Hi,
> moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within
> zfs is possible to run. I also tried it and it worked for my
> 2.6 GB Home directory very well. Does anyone see any issues ?
ufsdump can't write ACLs to ZFS yet.
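The ufsdump-to-ufsrestore migration discussed in this thread is the classic pipeline below. This is a dry-run sketch with example paths (the source and target filesystems are assumptions); the `run` wrapper echoes the commands rather than executing them. Note the ACL caveat above: POSIX-draft ACLs on the UFS side do not come across.

```shell
# Dry-run sketch: migrate a UFS filesystem to ZFS with ufsdump|ufsrestore.
# "run" only echoes each command line; paths and dataset names are examples.
run() { printf '+ %s\n' "$*"; }

SRC=/export/home     # existing UFS filesystem (assumed)
DST=/tank/home       # ZFS filesystem to restore into (assumed)

run zfs create tank/home
# ufsdump writes the whole filesystem to stdout; ufsrestore reads it
# back relative to the current directory.
run sh -c "ufsdump 0f - $SRC | (cd $DST && ufsrestore -rf -)"
```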
So, if I recall from this list, a mid-June release to the web was
expected for S10U2. I'm about to do some final production testing, and
I was wondering if S10U2 was near term or more of a July thing now.
This may not be the perfect venue for the question, but the subject
was previously covered wi
I'm pretty sure this is my fault but I need some help in fixing the system.
It was installed at one point with snv_29 with the pre integration
SUNWzfs package. I did a live upgrade to snv_42 but forgot to remove
the old SUNWzfs before I did so. When the system booted up I got
complaints about
Robert Milkowski wrote:
Hello UNIX,
Monday, June 19, 2006, 10:02:03 AM, you wrote:
Ua> Simple question: is it safe to enable the disk write cache when using ZFS?
As ZFS should send proper ioctl to flush cache after each transaction
group it should be safe.
Actually if you give ZFS whole disk it will try to enable write cache
on that disk anyway.
Detlef Drewanz wrote:
Hi,
moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within zfs is
possible to run. I also tried it and it worked for my 2.6 GB Home
directory very well. Does anyone see any issues ?
Ufsrestore uses the standard UNIX/POSIX API when restoring files, and
is fs-agnostic
Eric Schrock wrote:
Simply because we erred on the side of caution. The fewer metacharacters,
the better. It's easy to change if there's enough interest.
Seems reasonable.
In my case it actually saved me having to remember to go and do zfs
destroy cube/projects/lost+found :-) but I didn't thi
Simply because we erred on the side of caution. The fewer metacharacters,
the better. It's easy to change if there's enough interest.
- Eric
On Mon, Jun 19, 2006 at 04:01:01PM +0100, Darren J Moffat wrote:
> I accidentally tried to create a ZFS file system called lost+found[1]
> and zfs(1) told
I accidentally tried to create a ZFS file system called lost+found[1]
and zfs(1) told me that + was an invalid char for a filesystem name.
Why is that ?
[1] cd /export/projects (where that is a ufs file system)
    for i in * ; do
        zfs create cube/projects/$i
    done
--
Darren J Moffat
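A defensive variant of the loop above would skip names that zfs(1M) rejects, such as lost+found. This is a dry-run sketch: the directory names are stand-ins for the glob, the `run` wrapper only echoes the zfs command, and the accepted character set (alphanumerics plus '_', '-', ':', '.') is an assumption inferred from this thread, not the documented rule.

```shell
# Dry-run sketch: skip directory names containing characters that
# zfs(1M) would reject as dataset names. "run" only echoes the command;
# the three names below stand in for "for i in *".
run() { printf '+ %s\n' "$*"; }

for i in lost+found projects webrev; do
    case $i in
        *[!A-Za-z0-9_:.-]*)
            printf 'skipping %s: invalid character in name\n' "$i" ;;
        *)
            run zfs create "cube/projects/$i" ;;
    esac
done
```

With these inputs the loop skips lost+found and echoes a zfs create for the other two names.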
Hi,
moving from ufs to zfs ufsdump-on-ufs --> ufsrestore within
zfs is possible to run. I also tried it and it worked for my
2.6 GB Home directory very well. Does anyone see any issues ?
--
Detlef
Robert Milkowski writes:
> Hi.
>
>All filesystems have compression set to off.
>
>
> bash-3.00# zfs list -o compression|grep -i on
> bash-3.00#
>
> But still lzjb_compress() is used by ZFS - is it for metadata or what?
>
Yes, for metadata.
-r
> As ZFS should send proper ioctl to flush cache after
> each transaction
> group it should be safe.
>
> Actually if you give ZFS whole disk it will try to
> enable write cache
> on that disk anyway.
Thanks for the answer, that's very good news indeed!
This message posted from opensolaris.org
Hello UNIX,
Monday, June 19, 2006, 10:02:03 AM, you wrote:
Ua> Simple question: is it safe to enable the disk write cache when using ZFS?
As ZFS should send proper ioctl to flush cache after each transaction
group it should be safe.
Actually if you give ZFS whole disk it will try to enable write cache
on that disk anyway.
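The whole-disk versus slice distinction above looks like this at pool-creation time. A dry-run sketch with hypothetical pool and device names; the `run` wrapper echoes the commands instead of executing them.

```shell
# Dry-run sketch: whole disk vs. slice at zpool create time.
# "run" only echoes each command; names are hypothetical.
run() { printf '+ %s\n' "$*"; }

# Whole disk (no slice suffix): ZFS labels it and can safely enable
# the disk write cache itself.
run zpool create tank c1t1d0

# A single slice: ZFS leaves the write cache setting alone, since it
# does not own the rest of the disk.
run zpool create tank2 c1t1d0s0
```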
Simple question: is it safe to enable the disk write cache when using ZFS?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss